Agreed, but not just NuGet. David was also a key driver for SignalR and now Aspire (which is genuinely the most awesome tool I’ve seen for a while). He’s also extremely humble and doesn’t need to try to impress anyone.
I think it’s clear that he’s just doing this tool for fun and chose to share it. People shouldn’t mix up their anti-Microsoft autoeroticism with a person who happens to work for them.
I joined Stack Overflow early on since it leaned heavily towards .NET, and I’ve been working with Microsoft web technologies since the mid-90s.
My SO account is coming up on 17 years old and I have nearly 15,000 points and 15 gold badges, including 11 Famous Question badges and similar Famous Answer badges, plus around 100 silver and 150 bronze. I spent far too much time on that site in the early days, but through it, I also thoroughly enjoyed helping others. I also started publishing articles on CodeProject, which kicked off my long tech-blogging “career”, and I still enjoy writing and sharing knowledge with others.
I have visited the site maybe once a year since 2017. It got to the point where trying to post questions was intolerable, since they always got closed. At this point I have given up on it as a resource, even though it helped me tremendously: to learn (by answering questions), to solve challenging problems, and to get help with edge cases, especially on niche topics. For me it is part of my legacy as a developer of over 30 years.
I find it deeply saddening to see what it has become. However, I think Joel and his team can be proud of what they built and what they gave to the developer community for so many years.
As a side note, it used to state that I was in the top 2% of users on SO, but this metric seems to have been removed. Maybe it’s just because I’m on mobile that I can’t see it any more.
LLMs can easily solve the easy problems that have high commonality across many codebases, but I am dubious that they will be able to solve the niche, challenging problems that have not been solved before nor written about. I do wonder how those problems will get solved in the future.
Apparently it was removed to reduce the load on the database; see [0], [1].
The top-voted response points out that SO is [2]:
> destroying a valuable feature for users.
Kinda wild they allowed it. As that answer also suggests, rather than remove the feature entirely, perhaps they could just compute those stats at a lower frequency to reduce load.
I've been using SO for 17 years as well, but ultimately gave up out of frustration. A lot of comments here correctly point at the toxicity, but the real-time chats were on another level; it was absolutely maddening how toxic and aggressive those moderators were.
I’m continually impressed by my 12-year-old son’s ability to get around those restrictions. He recently got around his time limit with Brawl Stars by having his friends send him Brawl Stars view links via WhatsApp. These open using Safari’s underlying engine (SFSafariViewController), which the Safari app time block does not take into account.
There was a time in the mid-90s when we could have gone down one of two paths: “free”, ad-supported, you-are-the-product, or digital micropayments. The latter was hard, and didn’t mature until 20 years later. Instead we got the shitfest internet we see today.
What would the internet look like today if every visitor could drop micro-donations on the content creators they appreciated by simply clicking a donate button in their browser or on the website?
Webrings led to DoubleClick. DoubleClick led to Google Ads. That led us to Facebook and the hyper-invasive internet we have today where everyone worldwide is profiled for their ad-worth. Fuck this timeline.
I now have a Raspberry Pi + Pi-hole running on my home network, so none of my family see ads any more and nobody has to install any software themselves. You can also add your children’s devices to specific groups with extra block lists, to keep them a bit safer on the internet.
I got the Pi 5 to future-proof myself a bit; the entire kit cost about $160, and assembly, installation, and network configuration took 30 minutes.
I honestly don’t know why I waited so long to do it, and I highly recommend it. I know it’s not perfect, and you have to do some tweaks around DNS-over-HTTPS, but quite honestly it feels like the best money I spent last year by far.
That’s mainly because Germany has fucked up the smart meter rollout. In their wisdom they separated the meter and the gateway when other countries just combined them. They also made it super secure (good), but then overlooked the fact that lots of people live in rented apartments, and their meters in the cellars have really poor or no cellular connectivity. When Germany can finally do steerable dynamic loads properly at 95% of the market rather than under 10%, it will finally make a difference to pricing signals for consumers like yourself.
Germany is investing in massive battery parks dotted around the grid. This will help support base load and offset coal, but it will take time.
If there’s anything about the Germans you can count on, it’s that they move slowly.
This reminds me of Adrian Thompson’s (University of Sussex) 1996 paper, “An evolved circuit, intrinsic in silicon, entwined with physics” (ICES 1996 / LNCS 1259, published 1997), which was extended in his later thesis, “Hardware Evolution: Automatic Design of Electronic Circuits in Reconfigurable Hardware by Artificial Evolution” (Springer, 1998).
Before Thompson’s experiment, many researchers tried to evolve circuit behaviors on simulators. The problem was that simulated components are idealized, i.e. they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, etc. Evolved circuits would therefore fail in the real world because the simulation behaved too cleanly.
Thompson instead let evolution operate on a real FPGA device itself, so evolution could take advantage of real-world physics. This was called “intrinsic evolution” (i.e., evolution in the real substrate).
The task was to evolve a circuit that can distinguish between a 1 kHz and 10 kHz square-wave input and output high for one, low for the other.
The final evolved solution:
- Used fewer than 40 logic cells
- Had no recognisable structure, no pattern resembling filters or counters
- Worked only on that exact FPGA and that exact silicon patch.
Most astonishingly, the circuit depended critically on five logic elements that were not logically connected to the main signal path:
- they were not wired to the output, so removing them should not have affected a digital design
- yet in practice, the circuit stopped functioning when they were removed.
Thompson determined via experiments that evolution had exploited:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
- Electromagnetic interference from neighbouring cells
In short: the evolved solution used the FPGA as an analog medium, even though engineers normally treat it as a clean digital one.
Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
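For anyone curious about the shape of the loop, here is a minimal sketch of the kind of generational GA used in intrinsic evolution. This is not Thompson's actual code: in the real experiment the fitness function programmed the candidate bitstream onto a physical FPGA, drove it with the two tones, and measured the output, whereas this toy stand-in just counts set bits so the sketch is runnable. All names and parameters here are illustrative.

```python
import random

random.seed(42)

GENOME_BITS = 64   # Thompson's 10x10 cell area needed far more configuration bits
POP_SIZE = 50
GENERATIONS = 100
MUT_RATE = 0.02    # per-bit mutation probability

def fitness(genome):
    # Stand-in for "program the FPGA, drive it with 1 kHz and 10 kHz
    # square waves, and score how well the output separates them".
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:POP_SIZE // 5]   # keep the best 20% unchanged
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
```

The key point the sketch conveys: nothing in the loop knows or cares *how* a genome achieves its score, which is exactly why, on real silicon, evolution was free to recruit parasitics and cross-talk that no human designer would have reached for.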
Answering another commenter's question: yes, the final result was temperature-dependent. The author did test it at different temperatures, and it was only able to operate in the temperature range it had been evolved at.
I’d argue that this was a limitation of the GA fitness function, not of the concept.
Now that we have vastly faster compute, open FPGA bitstream access, on-chip monitoring, plus cheap and dense temperature/voltage sensing, reinforcement learning + evolution hybrids, it becomes possible to select explicitly for robustness and generality, not just for functional correctness.
The fact that human engineers could not understand how this worked in 1996 made researchers incredibly uncomfortable, and the same remains true today, but now we have vastly better tooling than back then.
I don't think that's true; for me it is the concept that's wrong. The second-order effects you mention:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
...are not just influenced by the chip design, they're influenced by substrate purity and doping uniformity -- exactly the parts of the production process that we don't control. Or rather: we shrink the technology node to right at the edge where these uncontrolled factors become too big to ignore. You can't design a circuit based on the uncontrolled properties of your production process and still expect to produce large volumes of working circuits.
Yes, we have better tooling today. If you use today's 14A machinery to produce a 1µ chip like the 80386, you will get amazingly high yields, and it will probably be accurate enough that even these analog circuits are reproducible. But the analog effects become more unpredictable as the node size decreases, and so will the variance in your analog circuits.
Also, contrary to what you said: the GA fitness process does not design for robustness and generality. It designs for the specific chip you're measuring, and you're measuring post-production. The fact that it works for reprogrammable FPGAs does not mean it translates well to mass production of integrated circuits. The reason we use digital circuitry instead of analog is not because we don't understand analog: it's because digital designs are much less sensitive to production variance.
Possibly, but maybe the real difference is the subtlety between a planned deterministic (logical) result and a deterministic (black-box) outcome?
We’re seeing this shift already in software testing around GenAI. Trying to write a test around non-deterministic outcomes comes with its own set of challenges, so we need to plan for deterministic variances, which seems like an oxymoron but is not in this context.
That unreplicability between chips is actually a very, very desirable property when fingerprinting chips (sometimes known as ChipDNA) to implement unique keys for each chip. You use precisely this property (plus a lot of magic to control for temperature as you point out) to give each chip its own physically unclonable key. This has wonderfully interesting properties.
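A toy model of that fingerprinting idea (all names and parameters here are illustrative, not any real ChipDNA implementation): each chip gets a fixed random per-cell bias standing in for manufacturing variation, every readout is noisy, and majority voting over repeated reads recovers a stable per-chip key. Real designs add error-correcting helper data and the temperature compensation mentioned above; this sketch only shows the core property.

```python
import random

random.seed(0)

KEY_BITS = 128
READS = 15     # odd, so majority voting never ties
NOISE = 0.05   # probability a single read of one cell flips

def make_chip(seed):
    # The chip's intrinsic per-cell biases, fixed at manufacture.
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(KEY_BITS)]

def read_key(chip):
    # Majority vote over noisy reads of each cell.
    key = []
    for bias in chip:
        ones = sum(1 for _ in range(READS)
                   if bias ^ (random.random() < NOISE))
        key.append(1 if ones > READS // 2 else 0)
    return key

chip_a = make_chip(seed=1)
chip_b = make_chip(seed=2)

# The same chip reads back the same key every time, while two chips
# disagree on roughly half their bits - a usable unclonable identity.
stable = read_key(chip_a) == read_key(chip_a)
distance = sum(x != y for x, y in zip(read_key(chip_a), read_key(chip_b)))
```

With these numbers, the chance that a 5% read noise survives a 15-read majority vote on any cell is negligible, which is the whole trick: the noise averages out, the manufacturing variation doesn't.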
I wonder what would happen if someone evolved a circuit on a large number of FPGAs from different batches. Each of the FPGAs would receive the same input in each iteration, but the output function would be biased to expose the worst-behaving units (maybe the bias should be raised in later iterations, when most units behave well).
Either it would generate a more robust (and likely more recognizable) solution, or it would fail to converge, really.
You may need to train on a smaller number of FPGAs and gradually increase the set. Genetic algorithms have been finicky to get right, and you might find that more devices would massively increase the iteration count.
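A quick sketch of that multi-device selection idea. Everything here is a simulated stand-in: the "devices" are random bit masks standing in for chip-to-chip variation, and fitness is the worst-case score across them, so evolution cannot overfit one unit's quirks.

```python
import random

random.seed(0)
GENOME_BITS, POP_SIZE, GENERATIONS = 32, 40, 60

# Each simulated "device" ignores a different random handful of bits,
# a crude stand-in for per-chip physical variation.
devices = [{random.randrange(GENOME_BITS) for _ in range(4)}
           for _ in range(5)]

def score_on(device, genome):
    # Per-device score: count the set bits this device actually "sees".
    return sum(bit for i, bit in enumerate(genome) if i not in device)

def robust_fitness(genome):
    # Select on the worst device, so no single unit's quirks dominate.
    # (Raising the bias over the run, as suggested above, would anneal
    # from an average of the devices toward this min.)
    return min(score_on(d, genome) for d in devices)

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=robust_fitness, reverse=True)
    elite = population[:8]   # elitism: keep the best 8 unchanged
    population = elite + [
        [bit ^ (random.random() < 0.03) for bit in random.choice(elite)]
        for _ in range(POP_SIZE - 8)
    ]

best = max(population, key=robust_fitness)
```

Swapping `min` for `statistics.mean` recovers the overfitting-prone version; the min forces solutions that every simulated device accepts, which is the robustness the thread is asking whether real multi-FPGA evolution could deliver.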
The interesting thing about DMT is that it’s an ego-stripper. You have no sense of self. You are non-corporeal. Time and space are irrelevant.
People who have taken DMT find it very difficult to explain what the visions mean when they flash before your eyes. “Flash” in the sense that they are so fast and from every conceivable direction simultaneously and you can see in all directions. And beautifully purple.
Since we are beings that have a conscious “self”, we attribute these moving images to “our lives flashing before our eyes”, but I believe that to be our egotistical selves applying that after the fact.
I now believe that the human brain acts as a filter to a raw stream of collective human shared consciousness, normally out of our grasp.
What people see there is a short temporary window into everyone else’s exact same moment in time.
It’s like a back door hack into god’s admin console and you get to watch the interconnected consciousness of human existence in real time for a few minutes.
However our brains aren’t meant to run unfiltered. Our brains usually optimize and filter as much as they can to conserve energy. We notice the differences and not the usual. Our brains fill in gaps. Eventually the brain overloads as the trip runs to an end and everything goes black. A complete void overwhelms you.
The brain finally reboots and coming back is like watching an old Linux machine reboot, loading its kernel and drivers before adding the OS layers.
First you question what you are, before then discovering who you are. It’s like a process of birth but coming out of hibernation mode for fast boot.
Maybe death is the same. Returning to the collective consciousness.
Like the ant that cannot comprehend the existence of the universe, or the neuron that only understands its nearest neighbors, maybe there exists a plane above human individuals, analogous to the neuron or the ant, that we too cannot perceive nor understand, because our brains are too small to comprehend it. Except for those fleeting moments when we overclock the system.