Hacker News | gdcbe's comments

Do you, or someone reading this, know who would be the best person to come on as a guest on a podcast and who has the right knowledge (ideally the person who implemented it in WC3, or something similar enough)?

Asking as I'm the host of netstack.fm, a podcast about networking and Rust, though some episodes are about networking alone.

Would love to devote an episode to the Kali TCP/IP IPX bridge, as there's a lot to unpack and learn from there. Any tips for a guest for such an episode are more than welcome!


In grad school 15+ years ago I took a ‘user-centered innovation’ class and I wrote a paper on the topic of Kali and its predecessor: how gamers, not the game devs, made games built for IPX work across the internet. What's neat is that the early collaboration on the first IPX-to-TCP/IP bridge happened on Usenet, so you can find a record of the first Doom deathmatch coordinated and played over the internet. I think I reached out to Jay Cotton (author of Kali) via email and he answered my questions, so I’d try to track him down if I were you.

Sadly I didn’t make a backup of my paper (not sure how I managed to screw that up), so I no longer have it.


Htmx got me into hypermedia heaven, but it led me to datastar for sure. Recently we also had an interview with the creator of datastar, where he also talked a bit about darkstar (something he wants to build on top of WebTransport for the few things datastar is not well suited for right now).

https://netstack.fm/#episode-4


Maybe they should focus less on "agentic" and more on just keeping their core product solid... I suppose that doesn't square with growth at all costs... zzz, sad... it is


Yeah, I've dealt with such clear-cut cases in the past. E.g. there was a period some months ago when random AI accounts (so not humans "operating" AIs) submitted such PRs. Those were easy to filter out.

But sometimes you have people who really do seem to put in the time and do their best, and I suspect that without LLMs they couldn't do what they can do now.

It's those cases that I find less clear cut. Like how to politely ask them to meet certain criteria before they submit.


I’m not a fan of these, but you can have a PR template.
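
For what it's worth, a minimal sketch of what such a template could look like (the filename is GitHub's convention; the checklist items are purely illustrative, not any project's actual policy). On GitHub it would live at .github/pull_request_template.md:

    ## What does this PR change?
    <!-- Short summary of the change and why it is needed -->

    ## Checklist
    - [ ] I ran the test suite locally and it passes
    - [ ] I added or updated tests for this change
    - [ ] I disclosed any AI/LLM assistance used to produce this PR
    - [ ] I read the contributing guidelines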


That’s vaguely what https://blog.cloudflare.com/ai-labyrinth/ is about



For me it only worked after I added their form text to my hn account description on my actual hn account page…


Invalid pointers and related bugs are older than 20 years…

Languages like Rust don’t solve all problems, and nobody is hoping that research and progress towards better tooling stops there. Even many people working on Rust offer plenty of ideas on what future languages could do better or differently.

You don’t need to wait 20 years, those discussions are already happening…

So uh… I’m confused on what exactly you are trying to communicate here?


I can tell.

Knowing how your machine works is the only answer. It’s funny you cite the age of pointer issues as some kind of proof. Pointers themselves are abstractions. I’m not saying avoid abstractions or code everything in assembler, but if you want to be a strong practitioner in this field you better be able to disassemble any code and understand what is happening. And alter your practices accordingly.


Why do people talk about hallucinations? Pretty deceptive word if you ask me.

Not an expert though, but isn’t that behaviour inherent to how it works? Bit of a misnomer and giving people the wrong idea of what is going on here.


Hallucinations reduce the success rate of AI workflows, which must be taken seriously. Imagine a workflow with 8 steps where each step/agent has a 95% success rate: the success rate of the whole workflow is only (1 - 0.05)^8 = 0.95^8 ≈ 0.66, i.e. about 66%. Not bad, but not enough to replace humans yet (unless 66% makes you profitable).

The hallucinations/errors compound and can misguide decisions if you rely too much on AI.
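
A minimal sketch of that compounding, in Python (the 95% per-step rate and the 8 steps are just the illustrative numbers from the example above):

    # Per-step success rates compound multiplicatively across a workflow.
    def workflow_success_rate(step_rates):
        """Probability that every step in the workflow succeeds."""
        total = 1.0
        for rate in step_rates:
            total *= rate
        return total

    # 8 steps at 95% each, as in the example above.
    print(workflow_success_rate([0.95] * 8))  # ~0.663, i.e. about 66%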


Not enough to replace humans in most critical tasks, but enough to replace Google, that's for sure. My own success rate at finding information on Google these days is around 50% per query at best.


Because "fabrication" seems worse, if more accurate.


I prefer "confabulation," which describes the analogous human behavior where you have no idea what the objective truth actually is, so you just make up something that sounds right


Fabrication implies malicious intent or at least intentional deception. LLMs don’t have any “intent”.


Their developers have intent. That intent is to give the perception of understanding/facts/logic without designing representations of such a thing, and with full knowledge that as a result, it will be routinely wrong in ways that would convey malicious intent if a human did it. I would say they are trained to deceive because if being correct was important, the developers would have taken an entirely different approach.


generating information without regard to the truth is bullshitting, not necessarily malicious intent.

for example, this is bullshit because it’s words with no real thought behind it: “if being correct was important, the developers would have taken an entirely different approach”


If you are asking a professional high-stakes questions about their expertise in a work context and they are just bullshitting you, it's fair to impugn their motives. Similarly if someone is using their considerable talent to place bullshit artists in positions of liability-free high-stakes decisions.

Your second comment is more flippant than mine, as even AI boosters like Chollet and LeCun have come around to LLMs being tangential to delivering on their dreams, and that's before engaging with formal methods, V&V, and other approaches used in systems that actually value reliability.


In reality, it should sound worse so people don’t trust it so much.

But those who sell AI products don’t want that.


It's obviously an analogy, but it seems pretty fitting to me? What would you call it?


Making errors, generating nonsense, being wrong. It's a catchy term but it's not accurate in any meaningful way.


Hallucinating has the implication of being wrong. The word further adds the context of being elaborately wrong. That feels pretty accurate to describe an AI going into detail when it is wrong.


What would you call it when AI doesn't have the answer so it makes stuff up (sometimes in a dangerous way)?


Confabulating if you want a non-"vulgar" word. Bullshitting if you don't care.


Bullshitting


That's called bullshit.


Isn’t hallucinating inherent to biological brains too?

It’s normal in small degrees even for mentally healthy individuals.


Stop anthropomorphizing the token generator please.


lmao, not what i said. fix your attention head.

metaphors grounded in reality are fine. otherwise don’t call it a “generator”


That's the accepted word to describe it making up bullshit instead of regurgitating existing information.


The latest “Decoder” episode was about local smart homes. Mainly about “Matter”, but also about Thread: https://podcasts.apple.com/be/podcast/decoder-with-nilay-pat...

Might be of interest if Thread is of interest to you.


Does it touch on the legal issue this post is about at all?


Not at all. That is why I did not mention the legal part. Which is a bit disappointing as the host has a lawyer background. Oh well…

