r00tbeer's comments

The first patch release (released on launch day) says: "Messaging to distinguish particular users hitting their user quota limit from all users hitting the global capacity limits." So collectively we're hitting the quota; it's not just your quota. (One would think Google might know how to scale their services on launch day...)

The Documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."


With Ultra I hit that limit in 20 minutes with Gemini 3 low. When the rate limit cleared some hours later, I got one prompt before hitting the limit again.


If by "Ultra", you're referring to the Google AI Ultra plan, then I just want to let you know that it doesn't actually take Google AI plans into consideration. It seems like the product will have its own separate subscription. At the moment, everyone is on the same free plan until they finalize their subscription pricing/model (https://antigravity.google/docs/plans).

On a separate note, I think the UX is excellent and the output I've been getting so far is really good. It really does feel like AI-native development. I know asking for a more integrated issue-tracking experience might be expanding the scope too much, but that's really the biggest missing feature right now. That, and I don't like the fact that "Review Changes" doesn't work if you're asking it to modify repos that aren't in the currently open workspace.


perhaps you were feeding your whole node_modules folder into its context? :/


You'd really hope that an AI IDE would know to respect .gitignore


you'd hope so, the same way you'd hope that AI IDEs wouldn't show package/dependency folder contents when referencing files using @ - but I still get shown a bunch of shit that I would never need to reference by hand


It depends on which shared GCP project you get assigned to; mine had a global quota of 300 million tokens per minute that was being hit regularly.


One would think this would have been obvious when it fails on the first or second request, yet people here all complain about rate limits.

When I downloaded it, it already came with the proper "Failed due to model provider overload" message.

When it did work, the agent seemed great, achieving the intended changes in a React and Python project. In particular, the web app looks much better than what Claude produced.

I did not see functionality to have it test the app in the browser yet.


Avoiding this scenario is why Google will renew all the domains for every startup it has ever acquired in perpetuity.


See https://irenezhang.net/papers/demikernel-sosp21.pdf for a more thorough paper on the Demikernel from 2021. There are some great ideas for improving the kernel interface while still allowing efficient DPDK-style pipelines.


I think that URL gets served by the same single binary. Might look like multiple files and connections to you ...


Maybe. Seems odd that they'd use a vestigial 'static' directory in the request path, though. I didn't read it because the layout makes it useless on mobile browsers, but I have a feeling they mean that the whole site is coded into one binary like a self-contained ssg rather than the site only requiring one file to work.


The static files are probably a directory embedded into the binary with Go's embed package, and mounted on /static.
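
If so, the whole setup fits in a handful of lines. A minimal sketch, assuming Go 1.16+ and a static/ directory next to main.go (my guesses, not taken from the site):

    package main

    import (
        "embed"
        "log"
        "net/http"
    )

    // The static/ directory is compiled into the binary at build time.
    //go:embed static
    var staticFiles embed.FS

    func main() {
        // Requests to /static/... resolve against the embedded tree,
        // which already contains the "static" prefix, so no StripPrefix
        // is needed.
        http.Handle("/static/", http.FileServer(http.FS(staticFiles)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }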


But why? Why not serve all needed content inline?


Because that means every page request downloads all the static content. It’s generally nice for people if they only have to download the shared assets once.


The stylesheet is just under 3KB even with no minification or compression. At that size, the cost is negligible, and inlining will consistently be faster to load even than a warm cache of an external stylesheet. In most conditions, you’ve got to get towards 100KB before it even becomes uncertain. Loading from cache tends to be surprisingly slow, vastly slower than most people expect. (Primary source: non-rigorous personal experimentation and observation. But some has been written about how surprisingly slow caches can be, leading to things like some platforms racing a new network request and reading from disk, which is bonkers.)

Seriously, people should try doing a lot more inlining than they do.
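
With the Go embed setup guessed at upthread, inlining is barely any extra code. A sketch, assuming a small static/style.css exists (the file, port, and markup are all hypothetical):

    package main

    import (
        _ "embed"
        "html/template"
        "log"
        "net/http"
    )

    //go:embed static/style.css
    var styleCSS string

    var page = template.Must(template.New("page").Parse(
        "<!doctype html><html><head><style>{{.}}</style></head>" +
            "<body><h1>hello</h1></body></html>"))

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // The stylesheet ships inside every HTML response: one
            // request, no cache lookup, a few extra KB per page.
            page.Execute(w, template.CSS(styleCSS))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }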


I think it depends on how you set your cache? If it’s not configured to re-check with the server it may be much faster.

Then again, for 3KB the overhead of doing a cache check after parsing the HTML for the first time and then rendering it again may already be too much :)


Exactly. Inline is just surprisingly much faster than a lookup by URL, which has to check “do I have this cached? What type (immutable, revalidate, &c. &c.)? Now (if suitable) fetch it from the cache.” before it can get on with actually acting on the resource’s contents.
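
And the server side decides how expensive that check is. A sketch in Go; the Cache-Control values are the standard ones, nothing specific to the site under discussion:

    package main

    import "net/http"

    // With "immutable", a fresh cached copy is reused without contacting
    // the server at all; the browser still pays the local cache lookup.
    func serveImmutable(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
        http.ServeFile(w, r, "static/style.css")
    }

    // With "no-cache", the browser must revalidate on every use, adding
    // a conditional request on top of the cache lookup.
    func serveRevalidate(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Cache-Control", "no-cache")
        http.ServeFile(w, r, "static/style.css")
    }

    func main() {
        http.HandleFunc("/immutable/style.css", serveImmutable)
        http.HandleFunc("/revalidate/style.css", serveRevalidate)
        http.ListenAndServe(":8080", nil)
    }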


interesting. i shall consider this :)


I sincerely doubt that the assets on the site in question are large enough to warrant this.


Sometimes browser caching and ease of development are the reasons here.


"To the best of my knowledge, this derivation is more efficient than the standard published Bresenham or midpoint circle formulations I found on-line. All of them seem to require more operations. In the branching version, mine requires only 3 to 6 adds — that’s it. The standard algorithms I’ve seen listed tend to require 9 or more."


This is from June 2018. (That should be in the title.) I'm curious how the last five years have been for Costco.


Looks good: https://www.macrotrends.net/stocks/charts/COST/costco/gross-...

And the stock is now at 563, so it has more than doubled in 5 years. Not too shabby.


Being crushed by Amazon's purchase of Whole Foods certainly didn't come to pass. That seemed to be the big threat this slide deck was discussing, no doubt in response to some talking heads over on the financial channels whose hair was on fire because Amazon was going to outcompete all grocery stores and drive them out of business, even Costco.


Hasn't everyone learned that "store all the history of changes" is an anti-feature? Legal departments generally do not care for it (it's just more data you have to make sure you deleted). And it makes schema migrations more painful: not only do you have to migrate the data you have now, but all of your historical data too. If you add a new property, do you backfill it in your old data (to keep your code working)? Or start special-casing old versions in your code? Neither is pretty.

If you want historical audit trails, make them intentional and subject to the same rules and patterns as your regular data.
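
E.g. something like this, sketched in Go (all names made up): audit events as ordinary rows with the same migration and retention story as everything else, rather than an automatic shadow copy of all state:

    package main

    import "time"

    // AuditEvent is a first-class record. It gets the same schema
    // migrations, retention policy, and deletion handling as the rest
    // of your data.
    type AuditEvent struct {
        ID        int64
        Actor     string // who made the change
        Action    string // e.g. "user.email.updated"
        Subject   string // what was changed
        At        time.Time
        ExpiresAt time.Time // explicit retention, so Legal can reason about it
    }

    func main() {
        _ = AuditEvent{Actor: "admin", Action: "user.email.updated", At: time.Now()}
    }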


This is systems debugging: "After hacking around for a while I got nowhere, but in so much detail."


I wonder how often it is an advantage to not know English. Then you might have less baggage from outside the computer and you might map the token to what it actually does in the system (rather than what the token was aspiring to be). Assuming you can get to that point, of course.


Here's my playlist. First the good ones that are actively in my queue (in general order of recommendation):

- The Memory Palace: http://thememorypalace.us/ (these are stunningly good historical stories)

- On the Media: http://www.wnyc.org/shows/otm/

- Reply All: https://gimletmedia.com/reply-all/ (this is the one I expected to see on every list on Hacker News, and don't)

- The Gist: http://www.slate.com/articles/podcasts/gist.html

- The Daily: https://www.nytimes.com/column/the-daily

- 99% Invisible: http://99percentinvisible.org/

- Economist Radio: https://radio.economist.com/

- New Yorker Radio Hour: http://www.wnyc.org/shows/tnyradiohour/

- Planet Money: http://www.npr.org/sections/money/ (not the juggernaut of content they once were)

- This American Life: https://www.thisamericanlife.org/

Shorter series (or just defunct or really rarely updated) that I can recommend to this crowd:

- Zachtronics Podcast: http://www.zachtronics.com/podcast/

- Revisionist History: http://revisionisthistory.com/ (should be starting a new season soon)

- Mystery Show: https://gimletmedia.com/mystery-show/

- A Life Well Wasted: http://alifewellwasted.com/ (great videogamey series)

- Containers: https://www.flexport.com/blog/alexis-madrigal-containers-pod...

- S-Town: https://stownpodcast.org/

Listening at 1.8x for most of these shows forces me to pay attention, and lets me consume more content. The exception is The Memory Palace, which deserves to be heard exactly as Nate makes it (1x).

