
Happy New Year and the same hope from up here in Vermont


Came here to say this. When I'm really pressed for time, I use the custom stars in Gmail to indicate the type of follow-up needed: reply, separate task, etc.


Thank you for this deeply revealing take. I think this is the dynamic at the core of what matters here. Reminds me of Dostoevsky's take on what people really want; here's an interesting short piece in that direction.

https://www.laphamsquarterly.org/freedom/these-are-barbarous...


Thanks for sharing this.

It made me wonder where a future goes when it keeps trying to define both barbarism and normalcy.

As a small tribute in return, three films came to mind:

Bicentennial Man,

Gattaca,

Fight Club.

I’ve always preferred Ivan the Fool — choosing to live, rather than living inside a definition.


This is an amazing summary of good advice for software projects! I literally pasted it into my notes for reference. You should write a blog or something on this topic if you don't already.


Thanks for the very kind words. No blog yet. The closest thing to a blog was that I was writing a book, "Racket Tactical Field Manual" (RTFM), which would teach a programming language (Racket) and effective software engineering practices incidentally using Racket tooling, in one tractable, self-contained book (and beg Randall Munroe to do the cover illustration)... But then it looked like Racket was determined not to have commercial uptake, so I abandoned it.

I suppose, if I had a software engineering blog that was fortunate to be well-received, maybe 90%+ of interviews wouldn't want to give me a LeetCode frat hazing. We could instead speak like colleagues, who can see that the other person has, say, written a bunch of open source code, and skip ahead to getting a feel for more important factors about how we might work together. Instead, the signal I get is that they really, really want to do the frat hazing. (Like the AI company I just got off a successful recruiter screen with, minutes ago.)


Check out the perfect set of teeth on the skull of the soldier shown in the article. Amazing how human dental health has changed since then.


the photo is here: https://www.gavi.org/vaccineswork/dna-reveals-real-killers-b...

(parent comment was posted before we merged the threads)


We’re talking about a foot soldier in Napoleon’s army; it would be easy to keep a good set of teeth considering he was probably 18-25.


And probably with limited access to sugar.


Probably; cane sugar was unavailable due to blockades, and the Napoleonic expansion of beet sugar was only just starting.


1. The soldier probably died young

2. Good dental health has always been part of the screening to join a professional army


They were looking to analyze teeth. This might be selection bias.


Weston A. Price thought it was from K2 -> https://www.westonaprice.org/health-topics/vitamin-k2-mk-4-d...


Due to "softer" diets and all the sugar, right?


Harder diets were more common in ancient humans, and straightened teeth in childhood. I don’t know what the diet was like in 1800.


Do you have any proof? A doctor once mentioned this to me, but then I talked to an orthodontist and he called it BS.


I think the theory is that modern jaws are too narrow to contain all the teeth, causing them to overlap and grow in "unintended" directions. The reason for the narrowness is lack of exercise in childhood due to the soft nature of prepared foods. I'm also a bit sceptical.


Not proof, but logic: without machinery to do the work of processing food, it has to be done by hand or not at all, which leads to a lot more jaw work. Especially if you're eating a lot of animals, where you'll be eating off of bones. But even with plants too.


Where is the feedback mechanism that ties extra jaw exercise to straight teeth?


The strengthening of the jaw during childhood leads to a larger jaw, and more room for teeth. Our jaws are too small. "Straightening" (like I said earlier) isn't exactly correct.

This whole topic is debated.


This. The most important unknown about AI is when it will plateau.


Why will it plateau?


Because every technology does eventually.


What is the defining factor that makes all technologies plateau, unlike evolution, which seems to be open-ended? Technologies don't change themselves; we do.

And what is the endgame for AI?


What? Evolution is specifically known for getting caught in local maxima. Species have little evolutionary pressure to get better when they are doing great, like a species with no predators on an island. The only things driving evolution for that creature are natural selection towards living longer, getting fewer diseases, dying in fewer accidents, stuff like that. And those aren't specific enough and don't apply pressure on a time basis, so there isn't much pressure to improve beyond the natural lifespan. Plus, in some cases, living longer is not really the goal; it's reproducing more. It's entirely possible, likely even, that maximizing for longevity eventually starts to have a negative effect on reproduction, and vice versa, so an equilibrium is reached.

Also, technologies don't really develop like evolution, so I'm not sure why you drew that comparison.

Technologies plateau for a combination of reasons - too expensive to make it better, no interest in making it better, can't figure out any more science (key people involved leave / die / lose interest, or it's just too difficult with our current knowledge), theoretical limits (like we are reaching in silicon chips). I don't see a lot of similarity with evolution there.


Because everything does, eventually.


Good point about hallucinations - low accuracy, high confidence. I wonder if AI will develop the ability to nuance its own confidence. It would be a more useful tool if it could provide a reasonable confidence level along with its output. Much like a human would say, "not sure about this, but..."


I'm not an AI expert so I could be wrong, but it's my understanding that there is a confidence score behind the scenes. It's just not shown in the current UI.

An automated AI system should be able to ask a human for help whenever the confidence score is below a certain threshold or even spit out a backlog of all the tasks it can't confidently handle itself.
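
Not exactly what I described above, just a rough sketch of what that thresholding could look like in Python. The model client, its generate() call, and the returned fields are all hypothetical stand-ins, with average per-token probability used as a crude confidence proxy:

    import math

    CONFIDENCE_THRESHOLD = 0.75  # arbitrary cutoff for illustration

    def answer_or_escalate(model, prompt, backlog):
        # `model.generate` and its return fields are hypothetical stand-ins
        result = model.generate(prompt, return_logprobs=True)
        # Average per-token probability as a crude confidence proxy
        probs = [math.exp(lp) for lp in result.token_logprobs]
        confidence = sum(probs) / len(probs)
        if confidence >= CONFIDENCE_THRESHOLD:
            return result.text
        # Below threshold: queue the task for a human instead of answering
        backlog.append({"prompt": prompt, "confidence": confidence})
        return None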


FWIW, Watson used its internal confidence score when playing Jeopardy.


It needs to be able to evaluate its own output. We humans do a quick sanity check most of the time before we speak - “On what do I base this assertion?” etc.


I wonder if multiple, independently trained LLMs could be used in a voting system to determine confidence, or simply to call out each other’s bulls**.


Two wrong systems won't make a right though. Especially when the wrong systems are getting more convincing at being right.


Two wrong systems can help determine if your answer is wrong if they don’t agree. That’s pretty useful, even if neither is actually correct.
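
Purely as a sketch of that agreement check (the models list and its ask() method are made-up stand-ins, not a real API), it could be as simple as majority voting:

    from collections import Counter

    def vote(models, question, min_agreement=2):
        # Each `m.ask(question)` is a hypothetical call to an independently trained model
        answers = [m.ask(question).strip().lower() for m in models]
        best, count = Counter(answers).most_common(1)[0]
        if count >= min_agreement:
            return best, count / len(models)  # answer plus rough agreement ratio
        return None, 0.0  # disagreement: flag for human review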

