You can preallocate your data structures and control memory layout in Go.
Also, despite GC there’s a sizeable amount of systems programming already done in Go and proven in production.
Given how much importance is being deservedly given to memory safety, Go should be a top candidate as a memory safe language that is also easier to be productive with.
I’m not sure. I keep asking the LLMs whether I should rewrite project X in language Y and it just asks back, “what’s your problem?” And most of the time it shoots my problems down, showing exactly why rewriting won’t fix that particular problem. Heck, it even quoted Joel Spolsky once!
Of course, I could just _tell_ it to rewrite, but that’s different.
I just asked Claude to create a memory system for itself for one of my projects. It created a file-based utility written in Rust on its own. I offered to let it use beads, but it declined, as beads is a task tracker and what we needed was a spec tracker.
Long-winded way of saying that it’s now easier to just create something to fit your needs… like 3D-printing components.
Claude Code already has a built-in task tracker for short/mid term tracking.
It’s a worthwhile answer if it can be proven correct because it means that we’ve found a way to create intelligence, even if that way is not very efficient. It’s still one step better than not knowing how to do so.
It must be deposited into OpenAI's bank account so that they can then deposit it into NVIDIA's account who can then in turn make a deal w/ OpenAI to deposit it back into OpenAI's account for some stock options. I think you can see how it works from here but if not then maybe one of the scaled up "reasoning" AIs will figure it out for you.
Here’s a conceptual background about how and why HTTP/3 came to be (recollected from memory):
HTTP/1.0 was built primarily as a textual request-response protocol over the very suitable TCP protocol which guaranteed reliable byte stream semantics. The usual pattern was to use a TCP connection to exchange a request and response pair.
As websites grew more complex, a web page was no longer just one document but a collection of resources stitched together into a main document. Many of these resources came from the same source, so HTTP/1.1 came along with one main optimisation — the ability to reuse a connection for multiple resources using Keep Alive semantics.
This was important because TCP connections and TLS (née SSL) sessions took many round trips to get established and to reach optimal transmission speed. Latency is one thing that cannot be optimised by adding more hardware because it’s a function of physical distance and network topology.
HTTP/2 came along as a way to improve performance for dynamic applications that were relying more and more on continuous bi-directional data exchange and not just one-and-done resource downloads. Two of its biggest advancements were faster (fewer round-trips) TLS negotiation and the concept of multiple streams over the same TCP connection.
HTTP/2 fixed pretty much everything that could be fixed with HTTP performance and semantics for contemporary connected applications but it was still a protocol that worked over TCP. TCP is really good when you have a generally stable physical network (think wired connections) but it performs really badly with frequent interruptions (think Wi-Fi with handoffs and mobile networks).
Besides the issues with connection reestablishment, there was also the challenge of “head of the line blocking” — since TCP has no awareness of multiplexed HTTP/2 streams, it blocks everything if a packet is dropped, instead of blocking only the stream to which the packet belonged. This renders HTTP/2 multiplexing a lot less effective.
In parallel with HTTP/2, work was also being done to optimise the network connection experience for devices on mobile and wireless networks. The outcome was QUIC — another L4 protocol over UDP (which itself is barebones enough to be nicknamed “the null protocol”). Unlike TCP, UDP just tosses data packets between endpoints without much consideration of their fate or the connection state.
QUIC’s main innovation is to integrate encryption into the transport layer and elevate connection semantics to the application space, and allow for the connection state to live at the endpoints rather than in the transport components. This allows retaining context as devices migrate between access points and cellular towers.
So HTTP/3? Well, one way to think about it is that it is HTTP/2 semantics over QUIC transport. So you get excellent latency characteristics over frequently interrupted networks and you get true stream multiplexing semantics because QUIC doesn’t try to enforce delivery order or any such thing.
Is HTTP/3 the default option going forward? Maybe not until we get the level of support that TCP enjoys at the hardware level. Currently, managing connection state in application software means that over controlled environments (like E-W communications within a data centre), HTTP/3 may not have as good a throughput as HTTP/2.
Thank you for a great overview! I wish HTTP3/QUIC was the "default option" and had much wider adoption.
Unfortunately, software implementations of QUIC suffer from dealing with UDP directly. Every UDP packet involves one syscall, which is relatively expensive in modern times. And accounting for MTU further makes the situation ~64 times worse.
In-kernel implementations and/or io-uring may improve this unfortunate situation, but today in practice it's hard to achieve the same throughput as with plain TCP. I also vaguely remember that QUIC makes load-balancing more challenging for ISPs, since they can not distinguish individual streams as with TCP.
Finally, QUIC arrived a bit too late and it gets blocked in some jurisdictions (e.g. Russia) and corporate environments similarly to ESNI.
> In-kernel implementations and/or io-uring may improve this unfortunate situation, but today in practice it's hard to achieve the same throughput as with plain TCP.
This would depend on how the server application is written, no? Using io-uring and similar should minimise context-switches from userspace to kernel space.
> I also vaguely remember that QUIC makes load-balancing more challenging for ISPs, since they can not distinguish individual streams as with TCP.
Not just for ISPs; IIRC (and I may be recalling incorrectly) reverse proxies can't currently distinguish either, so you can't easily put an application behind Nginx and use it as a load-balancer.
The server application itself has to be the proxy if you want to scale out. OTOH, if your proxy for UDP is able to inspect the packet and determine the corresponding instance to send a UDP packet to, the reverse proxy/load balancer requires far fewer resources, as it doesn't have to maintain open connections at all.
It also makes some things easier: a machine that is getting overloaded can hand off (in userspace) existing streams to a freshly created instance of the server on a different machine, because the "stream" is simply related UDP packets. TCP is much harder to hand off, and even if you can, it requires either networking changes or kernel functions to do so.
Glad you found it helpful! Most of it is distilled from High Performance Browser Networking (https://hpbn.co/). It’s a very well organised, easy to follow book. Highly recommended!
Unfortunately, it’s not updated to include QUIC and HTTP/3 so I had to piece together the info from various sources.
That's basically what QUIC is? It is a UDP based protocol over which HTTP can be run.
How else would you consider "just" switching HTTP to UDP? There are minimum required features such as 1. congestion control 2. multiplexed streams 3. encryption and probably a few others that I forgot about.
QUIC is actually a layer 4 protocol, on the same level as UDP and TCP; it could work over IP directly, making it QUIC/IP.
They chose to keep the UDP layer because of its minimal overhead over raw IP and for better adoption and anti-ossification reasons, but conceptually, forget about UDP, QUIC is a TCP replacement that happens to be built on top of UDP.
Now for the answers:
- Why not HTTP over UDP? UDP is an unreliable protocol unsuitable for HTTP. HTTP by itself cannot deal with packet loss, among other things.
- Why not keep HTTP/2? HTTP/2 is designed to work with TCP and work around some of its limitations. It could probably work over QUIC too, but you would lose most of the advantages of QUIC.
- Why not go back to HTTP/1? It could turn out to be a better choice than HTTP/2, but it is not a drop-in replacement either, and you would lose all the interesting features introduced since HTTP/2.
I’d been looking for networking books meant for software developers for a while and just ordered “High Performance Browser Networking” and “Kubernetes Networking” a few hours ago. If only this was posted yesterday!
I had read Andrew Tanenbaum’s book on networking when I was in college. Great book, fun to read but as a developer, I could never really apply the knowledge from that book in my work and it’s been a gap that I only managed to bridge through unsystematic learning so far.
On-shore manufacturing requires an on-shore workforce. I’m wondering how this will sit with any company that wants to invest in on-shore manufacturing. I mean, what’s the big picture here?
Factory workers are easy targets compared with actual criminals. So ICE goes after them to meet their quotas. It could also be that the Hyundai executives weren't contributing enough to the right parties, so they had to be made an example of. Could be both.
Trump did get a metal turtle ship model made by one "Oh Jeong-chul, a master from HD Hyundai", gifted by Korean president Lee Jae-myung. But maybe Trump forgot that. He's getting old, after all, and it was more than a week ago; how could we expect him to remember?
That's the fun part. They all are violating immigration laws. The country runs on cheap immigrant labor. What we are seeing is selective enforcement.
Doesn't appear to be the case, considering all the ICE news lately... On the contrary, it seems like they're enforcing white-collar immigration laws too now.
Also, I'm willing to bet it was a tip-off by a pissed-off vendor or local union or something.
You can also just do it by the book and get proper L-1B visas, or have them performing duties not categorized as work, such as training, consulting local staff, etc. (not legal advice). Or you can do what they allegedly did and see where that gets you.
Also if they were cutting corners on this what else did they cut corners on?
the picture is that these workers must have been in the US under a completely above-board legal framework to build out a battery plant, and presumably they are a specialised, experienced workforce, under contract. Korea, right, one of the most orderly countries on the planet, rich too,
tell them to go home, they go home
so this must be a contrived way to eliminate competition, after all of the contracts were signed.
the reputational damage to the US is incalculable
and a strategic retreat by all other countries becomes likely
these people came with their families, were detained at work, with their kids coming home from school to no parents
full tilt, psycho freak move
If they were all working on B-1/B-2 visas (like actually setting up lines and doing other work, not just training locals), as this source claims[0], it's a clear violation and a slam-dunk case. They will be deported and barred entry for something like 10 years. Also, these laws have been on the books forever; they're just hard to enforce unless someone is being completely obvious, which seems to be the case here.
Right, but my point is about issuing (very embarrassing) orders for those workers to leave, knowing that these workers would have only the vaguest, or no, notion of the laws involved in the contract, as they very likely could not fill out any of the paperwork themselves, vs a mass arrest
that will likely derail further investment from some of the major allies of the US.
The net effect of this and other actions looks something like a modern equivalent of the Chinese "Cultural Revolution", but with an economy vastly more integrated with and dependent on the rest of the world, with no clear policy statement, and with edicts being issued and revoked on a day-to-day basis.
The US could have told the Koreans, sotto voce, to go home; they would blush, go home, and ask for another chance to do business. That it was 500 workers at a major new car plant means that they were given some sort of "don't worry about it" verbal assurance, and have been double-crossed, which is a game changer.
Yes, yes, not letting foreign for-profit corporations blatantly break immigration laws to save a few million is exactly like the Cultural Revolution. They totally got double-crossed; that is the most logical explanation.