
Yes, but it’s a common misconception that impact is a bad thing.

The body, including bones, muscles, tendons and joints, adapts to stress. Many people do too little, not too much, as they get older.

There’s a limit to that recovery of course, and balancing it with stress is not always simple.


I also think fsync before acking writes is a better default. That aside, if you were to choose async for batching writes, their default value surprises me. 2 minutes seems like an eternity. Would you not get very good batching for throughput even at something like 2 seconds too? Still not safe, but safer.
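
To make that concrete, here's a rough sketch in Go of the kind of thing I mean: ack out of the page cache, fsync on a timer. The BatchedWriter type and the 2-second interval are made up for illustration, not taken from any particular system.

  // Toy periodic-flush writer: writes ack once they're in the OS page
  // cache, and a background goroutine fsyncs at most once per
  // flushInterval. The 2s value is the hypothetical knob above, not any
  // real system's default.
  package main

  import (
      "os"
      "sync"
      "time"
  )

  type BatchedWriter struct {
      mu    sync.Mutex
      f     *os.File
      dirty bool
  }

  func NewBatchedWriter(path string, flushInterval time.Duration) (*BatchedWriter, error) {
      f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
      if err != nil {
          return nil, err
      }
      w := &BatchedWriter{f: f}
      go func() {
          for range time.Tick(flushInterval) {
              w.mu.Lock()
              if w.dirty {
                  w.f.Sync() // one fsync covers every write since the last flush
                  w.dirty = false
              }
              w.mu.Unlock()
          }
      }()
      return w, nil
  }

  // Write returns as soon as the data is in the page cache; durability
  // lags by up to flushInterval ("still not safe, but safer").
  func (w *BatchedWriter) Write(p []byte) error {
      w.mu.Lock()
      defer w.mu.Unlock()
      _, err := w.f.Write(p)
      w.dirty = true
      return err
  }

  func main() {
      w, err := NewBatchedWriter("wal.log", 2*time.Second)
      if err != nil {
          panic(err)
      }
      w.Write([]byte("hello\n"))
      time.Sleep(3 * time.Second) // let the background flush run once
  }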


Yes, the greenest browser is one that doesn’t use AI. They aren’t claiming they’ve built that though, just the greenest AI.


I don't use AI though. Are they going to put automatic AI responses on the SERP? That's less green than simply not having AI on the SERP. Giving me something I do not want is wasteful by definition.


I disagree. Without AI I might take 15 min to search for something in google that would have taken me a single prompt in ChatGPT. The energy used by my screen in those 15 minutes would be higher than the energy taken by that prompt.
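
Back-of-envelope, with assumed numbers: a laptop display drawing ~30 W for 15 minutes uses 30 W x 0.25 h = 7.5 Wh, while published per-prompt estimates for chatbots are typically quoted somewhere in the range of roughly 0.3 to 3 Wh. So the comparison isn't obviously wrong, though both figures are rough assumptions rather than measurements.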


It’s interesting that the author chose to use SHA256 hashing for the CPU intensive workload. Given they run on hardware acceleration using AES NI, I wonder how generally applicable it is. Still interesting either way though, especially since there were reports of earlier Graviton (pre v3) instances having mediocre AES NI performance.


Hardware-accelerated SHA support has a patchy history. I wrote an article some years ago about the prevalence of SHA instructions in x86/x86_64 CPUs [0]. Like the current mess we see now with AVX-512, Intel invented something useful, then declined to continue supporting it, while competitors that were late to the party became the real champions.

[0]: https://neosmart.net/blog/will-amds-ryzen-finally-bring-sha-...


Does AES NI imply SHA256 acceleration support?


There are some crossed wires here.

AES-NI is x86-specific terminology. It was proposed in 2008. SHA acceleration came later, announced in 2013. The original version covers only SHA-1 and SHA-256 acceleration, but a later extension adds SHA-512 acceleration. At least for x86, AES-NI does not imply SHA support. For example, Westmere, Sandy Bridge, and Ivy Bridge chips from Intel have AES-NI but not SHA.

The equivalent in Arm land is called "Cryptographic Extensions" and was a non-mandatory part of ARMv8 announced in 2011. Both AES and SHA acceleration were announced at the same time. While part of the same extensions, there are separate feature flags for each of AES, SHA-1, and SHA-256.
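
If you want to see this on your own machine, a quick (Linux-only) way is to look for the separate flags in /proc/cpuinfo; this little Go sketch just does a whole-word search for them:

  // Quick check on a Linux box that AES and SHA really are separate
  // feature flags: "aes" and "sha_ni" on x86_64, and "aes", "sha1",
  // "sha2" (plus optionally "sha512") in the arm64 Features line.
  package main

  import (
      "fmt"
      "os"
      "strings"
  )

  func main() {
      data, err := os.ReadFile("/proc/cpuinfo")
      if err != nil {
          fmt.Println("couldn't read /proc/cpuinfo:", err)
          return
      }
      // Crude whole-word search over the flags/Features lines.
      have := map[string]bool{}
      for _, w := range strings.Fields(string(data)) {
          have[w] = true
      }
      for _, flag := range []string{"aes", "sha_ni", "sha1", "sha2", "sha512"} {
          fmt.Printf("%-7s %v\n", flag, have[flag])
      }
  }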


Yeah, investing in the top companies leads to higher returns in most periods when you look at the short term.

Over longer periods, the top companies by market cap tend to change though. https://www.investmentnews.com/equities/only-one-of-the-worl...

So if you want to invest in the top companies, you either need to think they won’t change anymore, or you need to find when to buy and sell. Index funds solve this problem for you, albeit with slightly lower returns in the short term.


> At successful tech companies, engineering work is valued in proportion to how much money it makes the company

If you look at what it actually takes to get promoted, I’d say this isn’t generally true at many big tech companies.

Being on a very lucrative part of the product may not get you as much “impact” on your promotion packet as working on platform/infra that touches the whole org, even if that platform isn’t generating the company much money, even indirectly.


Runtimes with garbage collectors typically optimize for allocation, not deletion.


Generational GCs optimize for both. They assume that most objects die young, so they relocate the live objects and simply mark the entire evacuated region as empty. That makes it a very efficient way to delete data.
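
A toy sketch of that idea, in Go, purely for illustration (no real collector is this simple):

  // Toy illustration only: a minor collection copies the live objects
  // out of the nursery and then reuses the whole region, never touching
  // the dead objects at all.
  package main

  import "fmt"

  type object struct {
      data string
      live bool // stand-in for "still reachable from the roots"
  }

  func minorGC(nursery, oldGen []*object) (newNursery, newOldGen []*object) {
      for _, o := range nursery {
          if o.live {
              oldGen = append(oldGen, o) // evacuate/promote survivors
          }
      }
      // Dead objects are never visited; reclaiming them is just resetting
      // the region's allocation pointer (here: truncating the slice).
      return nursery[:0], oldGen
  }

  func main() {
      nursery := []*object{
          {data: "temp buffer", live: false},
          {data: "session", live: true},
          {data: "scratch", live: false},
      }
      var oldGen []*object
      nursery, oldGen = minorGC(nursery, oldGen)
      fmt.Println("nursery:", len(nursery), "old gen:", len(oldGen)) // nursery: 0 old gen: 1
  }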



CockroachDB is presumably what they’re referring to in:

> For Postgres-compatible NewSQL, we would’ve had one of the largest single-cluster footprints for cloud-managed distributed Postgres. We didn’t want to bear the burden of being the first customer to hit certain scaling issues

I find their claim a bit hard to believe.


It is inherent to a good engineer’s thinking to always consider the worst. The problem is that sometimes it can come across as a boast, or as out of touch. It’s neither. It’s really the manifest anxiety of the whole thing shitting its pants and falling over.


At some point failures start getting measured in millions of dollars per minute.

It's smart to be cautious.


While they don’t specify, it sounds like they don’t even require 2FA to access their systems?

