In the before-AI world, it mattered a lot where data centers were geographically located. They needed to be in the same general location as population centers for latency reasons, and they needed to be near major fiber hubs (with multiple connections and providers) for connectivity and failover. They also needed cheap power. This means there are only a few ideal locations in the US: places like Virginia, Oregon, Ohio, Dallas, Kansas City, Denver, and SF are all big fiber hubs. Oregon for example also has cheap power and water.
Then you have the compounding effect where as you expand your data centers, you want them near your already existing data centers for inter-DC latency reasons. AWS can’t expand us-east-1 capacity by building a data center in Oklahoma because it breaks things like inter-DC replication.
Enter LLMs: massive need for expanded compute capacity, but latency and failover connectivity don’t really matter (the extra latency from sending a prompt to compute far away is dwarfed by the inference time, and latency for training matters even less). This opens up the possibility of placing data centers in geographic areas they couldn’t go before, and now the big priorities are just open land, cheap power, and water.
>Oregon for example also has cheap power and water.
Cheap for who? For the companies having billions upon billions of dollars shoved into their pockets while still managing to lose all that money?
Power won't be cheap after the datacenters move in. Then the price of power goes up for everyone, including the residents who lived there before the datacenter was built. The "AI" companies won't care, they'll just do another round of funding.
I travel for work often (used to do it every week but now it’s once-ish a month), and fly business every now and then. I don’t think I’ve ever met any fellow work flyers who wanted the flying experience to be more focused on the “business aspect”. The lounge is for relaxing, and the comfortable seat on the plane is so I can sleep and not be a zombie when I land. I’ll work when I get to the destination, not while traveling.
And the original link about investment in India is also about fulfillment jobs and even worse, “investing in AI”, aka building data centers, which contribute essentially no jobs at all.
Where are you seeing “American” jobs? Amazon workers in India were laid off too.
There are similar stories about Amazon investing in American cities too. Cherry-picking a story that Amazon is renovating their office in India is disingenuous.
At minimum this thing should be installed in its own VM. I shudder to think of people running this on their personal machine…
I’ve been toying around with it and the only credentials I’m giving it are specifically scoped down and/or are new user accounts created specifically for this thing to use. I don’t trust this thing at all with my own personal GitHub credentials or anything that’s even remotely touching my credit cards.
That list of movies is just the movies that Amazon Studios has been the distributor for via Prime Video. Amazon didn’t necessarily produce or fund all of the movies in that list. It’s a bunch of cheap movies that are likely meant to be loss leaders for Prime Video subscriptions, which is something that very much does fit the style of Amazon. Netflix, Hulu, Apple TV all have a similar list of D-tier garbage just to fill their catalogs.
Contrary to your points, Amazon has put out some pretty solid and well-received original series. The Boys, Gen V, Fallout, Reacher, Mr and Mrs Smith, and Invincible have all done really well, if not been outright hits.
Their games division is pretty trash though. I think they’re also going for a loss leader strategy there, but the platform they’re trying to promote (Luna) just isn’t there.
> it uses the Claude models but afaik it is constantly changing which one is using depending on the perceived difficulty
Claude Code does the same. You can disable it in Kiro by specifically setting the model to what you want rather than “auto” using /model.
Tbh I’ve found Kiro to be much better than Claude Code. The actual quality of results seems about the same, but I’ve had multiple instances where Claude Code gets stuck because of errors making tool calls, whereas Kiro just works. Personally I also just prefer the simplicity of Kiro’s UX over CC’s relatively “flashy” TUI.
AWS sends out targeted notifications via email and via alerts in the console if you are a current customer of a service that is changing somehow. It’s pretty normal that they don’t send out a press release and blog post every time something changes - if they did, it would be a mass overload of such releases.
And if you are spending enough, a rep will probably personally reach out to you. I've never personally been blindsided by price changes; they actually seem to be one of the better vendors about this.
At a glance, that is missing (at least) a `parent` or `parent_id` attribute which items in HN can have (and you kind of need if you want to render comments), see http://hn.algolia.com/api/v1/items/46436741
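To illustrate why `parent_id` matters for rendering, here's a minimal sketch (the item dicts are hypothetical, mimicking only the `id`/`parent_id` fields of the Algolia response) that threads a flat list of HN-style items into a parent-to-children mapping:

```python
from collections import defaultdict

def thread_comments(items):
    """Split a flat list of HN-style items into roots and a
    parent_id -> [children] mapping. Each item is a dict with at
    least 'id' and 'parent_id' (None for top-level items). Without
    'parent_id', this nesting can't be reconstructed at all.
    """
    children = defaultdict(list)
    roots = []
    for item in items:
        if item.get("parent_id") is None:
            roots.append(item)
        else:
            children[item["parent_id"]].append(item)
    return roots, dict(children)

# Hypothetical items; real Algolia /items responses nest children
# inline, but a flat dump like the one being discussed would not.
items = [
    {"id": 1, "parent_id": None, "text": "top-level story"},
    {"id": 2, "parent_id": 1, "text": "first comment"},
    {"id": 3, "parent_id": 2, "text": "reply to first comment"},
]
roots, children = thread_comments(items)
```

Rendering is then just a depth-first walk over `children` starting from each root.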
It only scales down after a period of inactivity though - it’s not pay-per-request like other serverless offerings. DSQL looks to be more cost effective for small projects if you can deal with the deviations from Postgres.
Ah, good to know, I hadn't seen that V2 update. Looks like a min 5m inactivity to auto-pause (i.e., scale to 0), and any connection attempt (valid or not) resumes the DB.