The first patch release (released on launch day) says: "Messaging to distinguish particular users hitting their user quota limit from all users hitting the global capacity limits." So, collectively we're hitting the quota; it's not just your quota. (One would think Google might know how to scale their services on launch day...)
The Documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."
With Ultra I hit that limit in 20 minutes with Gemini 3 Low. When the rate limit cleared some hours later, I got one prompt in before hitting the limit again.
If by "Ultra", you're referring to the Google AI Ultra plan, then I just want to let you know that it doesn't actually take Google AI plans into consideration. It seems like the product will have its own separate subscription. At the moment, everyone is on the same free plan until they finalize their subscription pricing/model (https://antigravity.google/docs/plans).
On a separate note, I think the UX is excellent and the output I've been getting so far is really good. It really does feel like AI-native development. I know asking for a more integrated issue-tracking experience might be expanding the scope too much, but that's really the biggest missing feature right now. That, and I don't like that "Review Changes" doesn't work if you're asking it to modify repos that aren't in the currently open workspace.
You'd hope so, the same way you'd hope that AI IDEs wouldn't show package/dependency folder contents when referencing files using @. But I still get shown a bunch of shit I would never need to reference by hand.
One would think this would have been obvious when it already fails on the first or second request, yet people here all complain about rate limits.
When I downloaded it, it already came with the proper "Failed due to model provider overload" message.
When it did work, the agent seemed great, achieving the intended changes in a React and Python project. In particular, the web app looks much better than what Claude produced.
I did not see functionality to have it test the app in the browser yet.
See https://irenezhang.net/papers/demikernel-sosp21.pdf for a more thorough paper on the Demikernel from 2021. There are some great ideas for improving the kernel interface while still allowing efficient DPDK-style pipelines.
Maybe. Seems odd that they'd use a vestigial 'static' directory in the request path, though. I didn't read it because the layout makes it useless on mobile browsers, but I have a feeling they mean that the whole site is compiled into one binary, like a self-contained SSG, rather than the site only requiring one file to work.
Because that means every page request downloads all the static content. It’s generally nice for people if they only have to download the shared assets once.
The stylesheet is just under 3KB even with no minification or compression. At that size, the cost is negligible, and inlining will consistently be faster to load even than a warm cache of an external stylesheet. In most conditions, you’ve got to get towards 100KB before it even becomes uncertain. Loading from cache tends to be surprisingly slow, vastly slower than most people expect. (Primary source: non-rigorous personal experimentation and observation. But some has been written about how surprisingly slow caches can be, leading to things like some platforms racing a new network request and reading from disk, which is bonkers.)
Seriously, people should try doing a lot more inlining than they do.
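To make the suggestion concrete, here's a toy publish-step sketch that splices a small stylesheet into the page so it ships inline. The file names and the single-stylesheet layout are my own assumptions, not anything from the site being discussed.

    # Toy publish step: inline a small local stylesheet into the HTML.
    # File names and the single-stylesheet assumption are illustrative only.
    import re
    from pathlib import Path

    def inline_small_css(html_path, css_path, out_path):
        html = Path(html_path).read_text()
        css = Path(css_path).read_text()
        link_tag = re.compile(r'<link[^>]*rel="stylesheet"[^>]*>')
        # Use a lambda so backslashes in the CSS aren't treated as regex escapes.
        inlined = link_tag.sub(lambda _: f"<style>{css}</style>", html, count=1)
        Path(out_path).write_text(inlined)

    inline_small_css("index.html", "style.css", "index.inlined.html")

At 3KB of CSS, the extra bytes per page are trivial next to skipping a second request (or even a cache lookup) entirely.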
I think it depends on how you set your cache? If it’s not configured to re-check with the server it may be much faster.
Then again, for 3KB the overhead of doing a cache check after parsing the HTML for the first time and then rendering it again may already be too much :)
Exactly. Inline is just surprisingly much faster than a lookup by URL, which has to check “do I have this cached? What type (immutable, revalidate, &c. &c.)? Now (if suitable) fetch it from the cache.” before it can get on with actually acting on the resource’s contents.
"To the best of my knowledge, this derivation is more efficient than the standard published Bresenham or midpoint circle formulations I found on-line. All of them seem to require more operations. In the branching version, mine requires only 3 to 6 adds — that’s it. The standard algorithms I’ve seen listed tend to require 9 or more."
Being crushed by Amazon's purchase of Whole Foods certainly didn't come to pass. That seemed to be the big threat this slide deck was discussing, no doubt in response to talking heads over on the financial channels whose hair was on fire because Amazon was going to outcompete all grocery stores and drive them out of business, even Costco.
Hasn't everyone learned that "store all the history of changes" is an anti-feature? Legal departments generally do not care for this (it's just more data you have to make sure you've deleted). And it makes schema migrations more painful: not only do you have to migrate the data you have now, but all of your historical data too. If you add a new property, do you backfill it in your old data (to keep your code working), or start special-casing old versions in your code? Neither is pretty.
If you want historical audit trails, make them intentional and subject to the same rules and patterns as your regular data.
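A minimal sketch of what an intentional audit trail could look like, assuming a relational store and a DB-API connection (e.g. sqlite3); the table and column names are hypothetical.

    # Sketch: an explicit audit table written alongside the live row, instead
    # of implicitly versioning everything. Names are hypothetical.
    import json
    from datetime import datetime, timezone

    def record_audit(db, entity, entity_id, action, actor, snapshot):
        # Keep only the fields the business actually needs to retain, so the
        # audit data lives under the same retention and migration rules as
        # everything else.
        db.execute(
            "INSERT INTO audit_events (entity, entity_id, action, actor, at, payload) "
            "VALUES (?, ?, ?, ?, ?, ?)",
            (entity, entity_id, action, actor,
             datetime.now(timezone.utc).isoformat(), json.dumps(snapshot)),
        )

Because the audit rows are ordinary data, deleting or migrating them is a normal operation rather than an archaeology project.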
I wonder how often it is an advantage to not know English. Then you might have less baggage from outside the computer and you might map the token to what it actually does in the system (rather than what the token was aspiring to be). Assuming you can get to that point, of course.
Listening at 1.8x for most of these shows forces me to pay attention, and lets me consume more content. The exception is The Memory Palace, which deserves to be heard exactly as Nate makes it (1x).