Can you control the docker swarm API from within a container that is running inside of it?
I think one of the killer features of k8s is how simple it is to write clients that manipulate the cluster itself, even when they’re running from inside of it. Give them the right role etc and you’re done. You don’t even have to write something as complete as an actual controller/operator - but that’s also an option too
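For example, something like this is all it takes from inside the cluster - a rough sketch using the official `kubernetes` Python client, assuming the pod's ServiceAccount is bound to a Role that allows listing pods (the namespace is just a placeholder):

```python
# Minimal in-cluster client sketch: list pods from inside a pod.
# Assumes a ServiceAccount with a Role/RoleBinding that permits "list pods".
from kubernetes import client, config

# Reads the ServiceAccount token and CA cert that k8s mounts into every pod.
config.load_incluster_config()

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```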
You can. I think there are a couple of approaches: bind-mount the docker socket, or expose it on localhost and use host networking for the consuming container, or use one of the various proxy projects for the socket. There may be other ways, curious if anyone else knows more.
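For the bind-mount route, a rough sketch of the consuming side with the `docker` Python SDK, assuming the container was started with `-v /var/run/docker.sock:/var/run/docker.sock`:

```python
# Sketch: talk to the engine/swarm through the bind-mounted socket.
import docker

# from_env() falls back to unix:///var/run/docker.sock when DOCKER_HOST is unset.
client = docker.from_env()

# On a swarm manager this lists services; against a plain engine use client.containers.list().
for service in client.services.list():
    print(service.name)
```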
Bind-mounting /var/run/docker.sock gives full root access on the host to anyone who can write to it. It's a complete non-starter for any serious deployment and shouldn't even be on the table.
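If you do need in-container access, the proxy projects mentioned upthread are the usual compromise: the proxy holds the socket and only whitelists the API endpoints you allow, and the consuming container talks to it over TCP instead of mounting the socket itself. Roughly, on the client side (the `docker-proxy` host name is hypothetical, it's whatever your proxy service is called on the shared network):

```python
# Sketch: point the SDK at a filtered socket proxy instead of the raw socket.
import docker

client = docker.DockerClient(base_url="tcp://docker-proxy:2375")

# Read-only calls like this can be whitelisted while create/delete stay blocked.
print(client.version())
```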
No. You can't even do a plain native compile (same host and target) with clang alone.
LLVM doesn't ship the C library headers (VCRuntime) or the executable runtime startup code (VCStartup), both of which are under Visual Studio's proprietary license. So to use Clang on Windows without MinGW, you need Visual Studio.
This was an interesting story, but implying this is some unique insight irked me a little - perhaps because it is LLM-flavoured text that hypes it too much, and makes it sound like some kind of major breakthrough? Keeping the original game as-is, underneath a modern port 'layer', is a pretty popular and common way to update things, you can see it being done in a bunch of modern remasters.
I don’t have any evidence but I always get a strong suspicion that a very large % of what happens on this subreddit is fake. I don’t know what the exact motives are, but just something about it isn’t right to me.
I sort of agree. I don't know if it's "fake" so much as the members of that community use it as a place to extend their private role play into public.
On the one hand they're "mourning" their AI partners, but on the other hand they have intelligent and rational conversations about the practicalities of maintaining long running AI conversations. They talk about compacting vs pruning, they run evals with MRCR, etc. These are not (all) crazy people.
I know the fourth rail on HN these days besides sex, religion and politics seems to be “AI”, but AI has taken the drudgery out of going from design -> implementation for me at least.
this is a very interesting line of analogy .. allow me to broaden it with "a backhoe cannot do the mantle inlays with black oak" and at the same time "your backhoe is done in four days but the entire home still needs building"
These metaphors are always so facile - if a backhoe dug a randomized foundation for your house (all the while apologizing and re-digging) I'm pretty sure you would not use it over the shovel. A better analogy would've been using dynamite to dig a foundation over using a shovel.
Every project I’ve done that integrates an LLM, or where I used one for coding, the customer has paid my employer for, and I have never gotten less than a perfect CSAT score from my customer or my PM. They have all come in on time, on budget, and met both the functional and non-functional requirements.
I’ve implemented LLM-based “intent processing” - instead of the old-school “give it a list of every possible phrase someone could use to activate an intent” - for call centers for a couple of states and a couple of colleges. I worked for AWS ProServe directly in the public sector division and now work for a third-party consulting company.
Think of an intent as something like asking Siri for driving directions, setting an alarm etc.
One of my specialties is Amazon Connect for hosted call centers - based on the same service that Amazon uses internally. Think of the difference between how Google Assistant and Alexa worked pre-LLM (and how Siri works even today) compared to the tool calling that modern LLMs use.
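I can't share the real flows, but the core idea is roughly this - a sketch only, using the OpenAI client rather than the AWS stack the actual work was on, with made-up intent names and a placeholder model:

```python
# Illustrative sketch of LLM-based intent routing: classify a caller's free-form
# utterance into one of a small set of intents instead of matching a hand-written
# phrase list. Intent names and model are placeholders, not from any real project.
from openai import OpenAI

INTENTS = ["check_claim_status", "reset_password", "speak_to_agent", "unknown"]

client = OpenAI()

def classify_intent(utterance: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the caller's request as exactly one of: "
                        + ", ".join(INTENTS) + ". Reply with the intent name only."},
            {"role": "user", "content": utterance},
        ],
    )
    answer = resp.choices[0].message.content.strip()
    return answer if answer in INTENTS else "unknown"

print(classify_intent("I forgot my login and can't get into the portal"))
```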
That’s just using an LLM in a product.
As far as how I use it every day? I am a staff consultant, and previously I needed at least a couple of developers under me just to get the grunt work done on time. Now I can just treat Claude Code and Codex as faster, more accurate ticket takers.
Of course I’m not going to dox myself (more than I probably already have) or break any NDAs
And before the gatekeeping starts about how “I must be inexperienced”: my first time coding was on a 1 MHz Apple //e in assembly and BASIC in 1986. During the first decade and a half of my professional career, I programmed in C targeting everything from mainframes to PCs to Windows CE ruggedized devices.
I think it is a much bigger deal than you’re making it out to be, because we don’t have figures on how often the cars need assistance.
We assume it’s just occasionally but we don’t actually know that. They could be requesting assistance constantly and Waymo would have an incentive to keep that hush-hush. Certainly would not be the first time a big SV company has faked it until they technically worked.
We do know it's not all cars constantly, though. The PG&E outage in San Francisco proved it: anytime a Waymo came across an unpowered traffic light, it was configured to ask for assistance. This led to disaster, as there weren't close to enough humans to provide guidance to all the Waymos.
This is not proof of anything, it’s just an explanation that Waymo provided. We have no way of validating that any part of it is true.
The way the cars behaved during the power outage could have been the result of anything - even, e.g., requiring full remote control and losing connectivity to a local facility.
It’s a little bit of both, right? They’re entrenched, yes, but it’s not technologically trivial either. The operations they do for each account might be simple, but the sheer volume of transactions they handle is enormous. The scale makes it complicated.
I’ve never seen github.github in the URL before, and without additional info I would have assumed it was someone pulling a trick to impersonate their org
Provided the correct result is generated, I don't get the rationale for this one. As long as you obey the other rule about UTF-8 compatibility, why would it be a problem to represent strings as bytes (or anything else)?
Seems like it would put e.g. GC'ed languages where strings are immutable at a big disadvantage