cobolcomesback's comments

> it uses the Claude models but afaik it is constantly changing which one it is using depending on the perceived difficulty

Claude Code does the same. You can disable it in Kiro by specifically setting the model to what you want rather than “auto” using /model.

Tbh I’ve found Kiro to be much better than Claude Code. The actual quality of results seems about the same, but I’ve had multiple instances where Claude Code gets stuck because of errors making tool calls, whereas Kiro just works. Personally, I also prefer the simplicity of Kiro’s UX over CC’s relatively “flashy” TUI.


AWS sends out targeted notifications via email and via alerts in the console if you are a current customer of a service that is changing somehow. It’s pretty normal that they don’t send out a press release and blog post every time something changes; if they did, it would be a mass overload of such releases.

And if you are spending enough, a rep will probably personally reach out to you. I've never personally been blindsided by price changes; they actually seem like one of the better vendors about this.

DuckDB does as well. A super simplified explanation of DuckDB is that it’s SQLite but columnar, and so better suited to analytics over large datasets.
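
For a concrete taste (a rough, untested sketch; the Parquet file name is made up):

    import duckdb

    # A columnar engine only reads the columns the query touches, so
    # aggregates over a wide table stay fast even on large datasets.
    duckdb.sql("""
        SELECT type, count(*) AS n
        FROM 'items.parquet'  -- hypothetical dump of HN items
        GROUP BY type
        ORDER BY n DESC
    """).show()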

The schema is this: items(id INTEGER PRIMARY KEY, type TEXT, time INTEGER, by TEXT, title TEXT, text TEXT, url TEXT)

Doesn't scream columnar database to me.


At a glance, that is missing (at least) a `parent` or `parent_id` attribute, which items on HN can have (and which you kind of need if you want to render comments); see http://hn.algolia.com/api/v1/items/46436741
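
Quick sketch of checking that via the API with Python's requests (field names from memory, so treat as approximate):

    import requests

    # Fetch one item and look at its parent pointer; walking parent_id
    # is how you'd stitch a comment back onto its thread.
    item = requests.get("http://hn.algolia.com/api/v1/items/46436741").json()
    print(item.get("parent_id"), item.get("type"))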

Edges are a separate table

V2 scales to zero as of last year.

https://aws.amazon.com/blogs/database/introducing-scaling-to...

It only scales down after a period of inactivity though - it’s not pay-per-request like other serverless offerings. DSQL looks to be more cost-effective for small projects if you can deal with the deviations from Postgres.


Ah, good to know, I hadn't seen that V2 update. Looks like a min 5m inactivity to auto-pause (i.e., scale to 0), and any connection attempt (valid or not) resumes the DB.
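
For anyone configuring it, something like this boto3 call should do it (hedged sketch; the cluster name and capacity numbers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # MinCapacity=0 is what enables auto-pause; SecondsUntilAutoPause
    # sets the inactivity window before the cluster scales to zero.
    rds.modify_db_cluster(
        DBClusterIdentifier="my-cluster",  # placeholder name
        ServerlessV2ScalingConfiguration={
            "MinCapacity": 0,
            "MaxCapacity": 4,
            "SecondsUntilAutoPause": 300,  # 5 minutes, the minimum
        },
    )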


This thread is about using multi-machine clusters, and sqlite cannot be used for multi-machine clusters in k3s. etcd is the default when starting k3s in cluster mode [1].

[1] https://docs.k3s.io/datastore


No, this thread is about multiple containers across machines. What you describe is multi-master for the server. You can run multiple agents across several nodes, thereby clustering the container workload across multiple container-hosting servers. Multi-master is something different.


The very first paragraph of the first comment you replied to is about multi-master HA. The second sentence in that comment is about “every machine is equal”. k3s with sqlite is awesome, but it cannot do that.


A dependency that forms the foundation of your build process, distribution mechanisms, and management of other dependencies is a materially different risk than a dependency that, say, colorizes terminal output.

I’m doubtful that alone motivated the acquisition (it was surely a confluence of factors), but Bun is definitely a significant dependency for Claude Code.


MIT code, let Bun continue to develop it; once the project is abandoned, hire the developers.

If they don't want to maintain it: a GitHub fork with more motivated people.


> MIT code, let Bun continue to develop it; once the project is abandoned, hire the developers.

Why go through the pain of letting it be abandoned and then hiring the developers anyway, when instead you can hire the developers now and prevent it from being abandoned in the first place (and get some influence in project priorities as well)?


Ironically, these chips are being targeted at inference as well (the AWS CEO acknowledged the difficulties in naming things during the announcement).


Perhaps they should do their training on their Inferentia chips and see how that works out?


The same thing happened to AMD and Gaudi. They couldn't get training to work so they pivoted to inference.


AWS has built 20 data centers in Indiana full of half a million Trainium chips explicitly for Anthropic. Anthropic is using them heavily. The press announcement Anthropic made about Google TPUs is essentially the same one they made a year ago about Trainium. Hell, even in the Google TPU press release they explicitly mention that they are still using Trainium as well.


Can you link to the press releases? The only one I'm aware of by Anthropic says they will use Trainium for future LLMs, not that they are using them.


This is the Anthropic press release from last year saying they will use Trainium: https://www.anthropic.com/news/anthropic-amazon-trainium

This is the AWS press release from last month saying Anthropic is using 500k Trainium chips and will use 500k more: https://finance.yahoo.com/news/amazon-says-anthropic-will-us...

And this is the Anthropic press release from last month saying they will use more Google TPUs but also are continuing to use Trainium (see the last 2 paragraphs specifically): https://www.anthropic.com/news/expanding-our-use-of-google-c...


There is no press release saying that they are using 500k Trainium chips. You can search on Amazon's site.


It's in the subheading of https://www.aboutamazon.com/news/aws/aws-project-rainier-ai-... unless there are other Project Rainier customers that it's counting.


This wouldn’t have specifically helped in this situation (EC2 reading from S3), but on the general topic of preventing unexpected charges from AWS:

AWS just yesterday launched flat rate pricing for their CDN (including a flat rate allowance for bandwidth and S3 storage), including a guaranteed $0 tier. It’s just the CDN for now, but hopefully it gets expanded to other services as well.

https://news.ycombinator.com/item?id=45975411


AWS just yesterday launched flat rate pricing for their CDN (including a flat rate allowance for bandwidth and S3 storage), including a guaranteed $0 tier.

https://news.ycombinator.com/item?id=45975411

I agree that it’s likely very technically difficult to find the right balance between capping costs and not breaking things, but this shows that it’s definitely possible, and hopefully this signals that AWS is interested in doing this in other services too.

