Do you, or does someone reading this, know who would be the best person willing to come on as a guest on a podcast and who has the right knowledge (ideally the person who implemented it in WC3, or something similar enough)?
Asking as I'm the host of netstack.fm, a podcast about networking and rust, but some episodes are just about networking alone.
Would love to devote an episode to the Kali TCP/IP IPX bridge, as there's a lot to unpack and learn from there. Any tips for a guest for such an episode are more than welcome!
In grad school 15+ years ago I took a 'user-centered innovation' class and wrote a paper on the topic of Kali and its predecessor: how gamers, not the game devs, made games built for IPX work across the internet. What is neat is that early collaboration on the first IPX-to-TCP/IP bridge happened on Usenet, so you can find a record of the first Doom deathmatch coordinated and played over the internet. I think I reached out to Jay Cotton (author of Kali) via email and he answered my questions, so I'd try to track him down if I were you.
Sadly I didn’t make a backup of my paper (not sure how I managed to screw that up), so I no longer have it.
Htmx got me into hypermedia heaven, but it led me to datastar for sure.
Recently we also had an interview with the creator of datastar, where he also talked a bit about darkstar (something he wants to build on top of WebTransport for the few things datastar is not well suited for right now).
Maybe they should focus less on "agentic" and more on just keeping their core product solid... I suppose that doesn't rhyme with growth at all costs... zzz sad... it is
Yeah, I've handled such clear-cut cases in the past. E.g. there was a period some months ago when random AI agents (so not humans operating AIs) submitted such PRs. Those were easy to filter out.
But sometimes you have people who really do seem to put in the time and do their best, and I suspect that without LLMs they couldn't do what they can do now.
It's those cases that I find less clear cut. Like, how do you politely ask them to meet certain criteria before they submit?
Invalid pointers and related bugs are older than 20 years…
Languages like Rust don’t solve all problems, and nobody is hoping that research and progress toward better tooling stops there. Even many people working on Rust offer plenty of ideas on what future languages could do better or differently.
You don’t need to wait 20 years, those discussions are already happening…
So uh… I’m confused on what exactly you are trying to communicate here?
Knowing how your machine works is the only answer. It’s funny you cite the age of pointer issues as some kind of proof. Pointers themselves are abstractions. I’m not saying avoid abstractions or code everything in assembler, but if you want to be a strong practitioner in this field you better be able to disassemble any code and understand what is happening. And alter your practices accordingly.
Hallucinations reduce the success rate of AI workflows, which must be taken seriously.
Imagine a workflow with 8 steps where each step/agent has a 95% success rate; the success rate of the whole workflow is only (1 - 0.05)^8 = 0.95^8 ≈ 0.66, i.e. about 66%. Not bad, but not enough to replace humans yet (unless 66% makes you profitable).
The hallucinations/errors compound and can misguide decisions if you rely too much on AI.
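The compounding effect described above is easy to check numerically. A minimal sketch, assuming each step succeeds independently with the same probability (the function name and numbers are just illustrative, matching the 8-step / 95% example):

```python
def workflow_success_rate(per_step_success: float, steps: int) -> float:
    """Probability that every step in a pipeline succeeds,
    assuming steps fail independently of one another."""
    return per_step_success ** steps

# The 8-step, 95%-per-step example from the comment above:
rate = workflow_success_rate(0.95, 8)
print(f"{rate:.1%}")  # ≈ 66.3%

# Even small per-step improvements compound too:
print(f"{workflow_success_rate(0.99, 8):.1%}")  # ≈ 92.3%
```

The independence assumption is optimistic: in practice an early hallucination can poison the context for every later step, so the real end-to-end rate may be worse than this simple product.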
Not enough to replace humans in most critical tasks, but enough to replace Google, that's for sure. My own success rate for finding information on Google these days is around 50% per query, at best.
I prefer "confabulation," which describes the analogous human behavior where you have no idea what the objective truth actually is, so you just make up something that sounds right
Their developers have intent. That intent is to give the perception of understanding/facts/logic without designing representations of such things, and with full knowledge that, as a result, the system will be routinely wrong in ways that would convey malicious intent if a human did it. I would say they are trained to deceive, because if being correct were important, the developers would have taken an entirely different approach.
Generating information without regard to the truth is bullshitting; it doesn't necessarily imply malicious intent.
for example, this is bullshit because it’s words with no real thought behind it:
“if being correct was important, the developers would have taken an entirely different approach”
If you are asking a professional high-stakes questions about their expertise in a work context and they are just bullshitting you, it's fair to impugn their motives. Similarly if someone is using their considerable talent to place bullshit artists in positions of liability-free high-stakes decisions.
Your second comment is more flippant than mine, as even AI boosters like Chollet and LeCun have come around to LLMs being tangential to delivering on their dreams, and that's before engaging with formal methods, V&V, and other approaches used in systems that actually value reliability.
"Hallucinating" carries the implication of being wrong, and further adds the connotation of being elaborately wrong. That feels pretty accurate for describing an AI going into detail while being wrong.