Came here to say this. When I'm really pressed for time, I use the custom stars in Gmail to indicate the type of follow-up needed - reply, separate task, etc.
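(Related tip: each star shape has a matching search operator - e.g. has:red-bang or has:green-check - so you can pull a given follow-up type back up later from the Gmail search box.)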
Thank you for this deeply revealing take. I think this is the dynamic at the core of what matters here. Reminds me of Dostoevsky's take on what people really want - here's an interesting short piece in that direction.
This is an amazing summary of good advice for software projects! I literally pasted it into my notes for reference. You should start a blog on this topic if you don't have one already.
Thanks for the very kind words. No blog yet. The closest thing to a blog was a book I was writing, "Racket Tactical Field Manual" (RTFM), which would teach a programming language (Racket) and, incidentally, effective software engineering practices using Racket tooling, all in one tractable, self-contained book (and I'd beg Randall Munroe to do the cover illustration)... But then it looked like Racket was determined not to have commercial uptake, so I abandoned it.
I suppose, if I had a software engineering blog that was fortunate enough to be well-received, maybe I wouldn't have 90%+ of interviews wanting to give me a LeetCode frat hazing. We could instead speak like colleagues who can see that the other person has, say, written a bunch of open source code, and skip ahead to getting a feel for the more important factors in how we might work together. Instead, the signal I get is that they really, really want to do the frat hazing. (Like the AI company I just got off a successful recruiter screen with, minutes ago.)
I think the theory is that modern jaws are too narrow to contain all the teeth, causing them to overlap and grow in "unintended" directions. The reason for the narrowness is lack of exercise in childhood due to the soft nature of prepared foods. I'm also a bit sceptical.
Not proof, but logic: without machinery to do the work of processing food, it has to be done by hand or not at all, which leads to a lot more jaw work. Especially if you're eating a lot of animals, where you'll be eating off of bones - but it applies to plants too.
The strengthening of the jaw during childhood leads to a larger jaw and more room for teeth. Our jaws are too small, so "straightening" (like I said earlier) isn't exactly the right word.
What is the defining factor that makes all technologies plateau, unlike evolution, which seems to be open-ended? Technologies don't change themselves; we do.
What? Evolution is specifically known for getting caught in local maxima. Species have little evolutionary pressure to get better when they are doing great, like a species with no predators on an island. The only things driving evolution for that creature are natural selection towards living longer, getting fewer diseases, dying in fewer accidents, stuff like that. And those pressures aren't specific enough, and don't operate on any particular timescale, so there isn't much push to improve beyond the natural lifespan. Plus, in some cases, living longer is not really the goal; reproducing more is. It's entirely possible, likely even, that maximizing for longevity eventually starts to hurt reproduction, and vice versa, so an equilibrium is reached.
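(For anyone who hasn't run into "local maximum" outside biology: it's the same failure mode as naive hill-climbing search. A toy sketch in Python, nothing to do with real genetics - the fitness function and step size are made up for illustration:)

    import math
    import random

    def fitness(x: float) -> float:
        # Two peaks: a short one near x=2, a tall one near x=8.
        return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

    random.seed(0)
    x = 2.5  # start near the short peak
    for _ in range(10_000):
        candidate = x + random.uniform(-0.1, 0.1)  # small "mutation"
        if fitness(candidate) > fitness(x):        # selection keeps improvements
            x = candidate

    print(f"settled at x = {x:.2f}, fitness = {fitness(x):.2f}")
    # Prints roughly x = 2.00: stuck on the short peak, because every
    # path to the taller peak passes through a fitness valley.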
Also, technologies don't really develop the way evolution does, so I'm not sure why you drew that comparison.
Technologies plateau for a combination of reasons: too expensive to make them better, no interest in making them better, can't figure out any more of the science (key people involved leave / die / lose interest, or it's just too difficult with our current knowledge), theoretical limits (like the ones we're reaching in silicon chips). I don't see a lot of similarity with evolution there.
Good point about hallucinations - low accuracy, high confidence. I wonder if AI will develop the ability to qualify its own confidence. It would be a more useful tool if it could provide a reasonable confidence level along with its output, much like a human would say, "not sure about this, but..."
I'm not an AI expert, so I could be wrong, but it's my understanding that there are confidence scores behind the scenes: the model assigns a probability to every token it generates. They're just not shown in the current UI.
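For what it's worth, the OpenAI API (as opposed to the chat UI) will return per-token log probabilities if you ask for them. A minimal sketch, with the model name and question as placeholders:

    import math
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "What year did Gmail launch?"}],
        logprobs=True,     # return the per-token scores
        top_logprobs=3,    # plus the runner-up tokens at each step
    )

    # Each generated token comes back with its log probability.
    for tok in resp.choices[0].logprobs.content:
        print(f"{tok.token!r}: p = {math.exp(tok.logprob):.3f}")

Caveat: those are per-token probabilities, not a calibrated confidence in the overall claim, so they're a rough proxy at best.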
An automated AI system should be able to ask a human for help whenever the confidence score is below a certain threshold or even spit out a backlog of all the tasks it can't confidently handle itself.
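Roughly this shape, I'd imagine (a toy sketch; every name here and the threshold value are made up):

    # Confidence-gated escalation: act when confident, queue for a human
    # otherwise. All names and the 0.8 cutoff are hypothetical.
    CONFIDENCE_THRESHOLD = 0.8

    human_backlog: list[str] = []  # tasks the system won't handle alone

    def handle(task: str, answer: str, confidence: float) -> None:
        if confidence >= CONFIDENCE_THRESHOLD:
            print(f"acting on {task!r}: {answer}")  # stand-in for the real action
        else:
            human_backlog.append(task)  # escalate for human review

    handle("book the meeting room", "Room 4 at 3pm", 0.95)
    handle("approve the refund", "Yes, refund $1,200", 0.41)
    print("needs a human:", human_backlog)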
It needs to be able to evaluate its own output. We humans do a quick sanity check most of the time before we speak - "On what do I base this assertion?" ... etc.
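A crude way to mechanize that sanity check would be a second pass where the model critiques its own answer. Again just a sketch - the judge prompt and model here are my assumptions, not a known-good recipe:

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def sanity_check(question: str, answer: str) -> str:
        """Ask the model what the given answer is actually based on."""
        review = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{
                "role": "user",
                "content": (
                    f"Question: {question}\nProposed answer: {answer}\n"
                    "On what is this answer based? List the supporting facts, "
                    "flag anything uncertain, then rate confidence from 0 to 1."
                ),
            }],
        )
        return review.choices[0].message.content

    print(sanity_check("What year did Gmail launch?", "2004"))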