I agree with most everything you said. The problem has always been the short-term job loss, particularly today, when society as a whole has the resources for safety nets but hasn't implemented them.
Anger at companies that hold the power, in multiple places, to either prevent or worsen this situation for people is valid anger.
> The problem has always been the short-term job loss
Does anyone have any idea of the new jobs that will be created to replace the ones that are being lost? If it's not possible to at least foresee it, then it's not likely to happen. In which case the job loss will be long-term not short-term.
As much as I like the article, I begrudgingly agree with you, which is why I think the author mentions the physical constraints of energy as the future wall that companies will have to deal with.
The question is do we think that will actually happen?
Personally I would love it if it did; then this post would have the last laugh (as would I). But I think companies already realize this energy problem. Just search for the headlines about big tech funding or otherwise supporting nuclear reactors, power grid upgrades, etc.
In my experience in neuroscience it differs widely even across programs/universities. Some good professors care about giving good talks, and if you're lucky it becomes contagious in the program. Others think less of you if your talk is clear, and some are too naive to realize that obscurity is not a virtue.
Yeah, but it's still "scary" because even with those algorithms you have to be really careful not to fool yourself and to pay attention. For example, here's a good demonstration with t-SNE:
https://distill.pub/2016/misread-tsne/?hl=cs
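A minimal sketch of the kind of pitfall that article illustrates, assuming scikit-learn, numpy, and matplotlib are available (the data, perplexity values, and plotting choices here are mine, not from the article): t-SNE run on completely structureless data can still produce what look like clusters, especially at low perplexity, so the apparent structure needs to be sanity-checked rather than taken at face value.

```python
# Sketch: t-SNE can suggest structure where none exists.
# Points drawn uniformly at random in 50 dimensions still form apparent
# "clusters" in the 2D embedding, particularly at low perplexity.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 50))  # 500 structureless points, 50 dimensions

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, perplexity in zip(axes, [2, 30, 100]):
    emb = TSNE(perplexity=perplexity, init="random", random_state=0).fit_transform(X)
    ax.scatter(emb[:, 0], emb[:, 1], s=5)
    ax.set_title(f"perplexity={perplexity}")
plt.tight_layout()
plt.show()
```

Comparing the three panels side by side is the point: the "clusters" move and reshape with the hyperparameter even though the underlying data never changes.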
Plus, 24/7 access isn't necessarily best for patients. Crisis hotlines exist for good reason, but for most other issues it can become a crutch if patients are able to seek constant reassurance instead of building resilience, learning to push through discomfort, etc. Ideally patients are "let loose" between sessions and return to the provider with updates on how they fared on their own.
I agree with your point except for scientific papers. Let's push ourselves to use precise language, not shorthand or hand-waving, in technical papers and publications, yes? If not there, of all places, then where?
"Know" doesn't have any rigorous precisely-defined senses to be used! Asking for it not to be used colloquially is the same as asking for it never to be used at all.
I mean - people have been saying stuff like "grep knows whether it's writing to stdout" for decades. In the context of talking about computer programs, that usage of "know" is the established/only usage, so it's hard to imagine any typical HN reader seeing TFA's title and interpreting it as an epistemological claim. Rather, it seems to me that the people insisting "know" mustn't be used about LLMs on epistemological grounds are the ones departing from standard usage.
colloquial use of "know" implies anthropomorphisation. Arguing that usign "knowing" in the title and "awarness" and "superhuman" in the abstract is just colloquial for "matching" is splitting hairs to an absurd degree.
You missed the substance of my comment. Certainly the title is anthropomorphism - and anthropomorphism is a rhetorical device, not a scientific claim. The reader can understand that TFA means it non-rigorously, because there is no rigorous thing for it to mean.
As such, to me the complaint behind this thread falls into the category of "I know exactly what TFA meant but I want to argue about how it was phrased", which is definitely not my favorite part of the HN comment taxonomy.
I see. Thanks for clarifying. I did want to argue about how it was phrased and what it is alluding to. Implying increased risk from "knowing" the eval regime is roughly as weak as the definition of "knowing". It can equally be a measure of general detection capability as of evaluation incapability - i.e. unlikely to be newsworthy, unless it reached the top of HN because of the "know" in the title.
Thanks for replying - I kind of follow you, but I only skimmed the paper. To be clear, I was more responding to the replies about cognition than to what you said about the eval regime.
Incidentally I think you might be misreading the paper's use of "superhuman"? I assume it's being used to mean "at a higher rate than the human control group", not (ironically) in the colloquial "amazing!" sense.
I really do agree with your point overall, but in a technical paper I do think even word choice can implicitly be a claim. Scientists present what they know or are claiming, and thus word it carefully.
My background is neuroscience, where anthropomorphising is particularly discouraged because it assumes knowledge or certainty of an unknowable internal state, so the language is carefully constructed, e.g. when explaining animal behavior, and for good reason.
I think the same is true here for a model "knowing" something, both in isolation within this paper and, come on, in the broader context of AI and AGI as a whole. Thus it's the responsibility of the authors to write accordingly. If it were a blog I wouldn't care, but it's not. I hold technical papers to a higher standard.
If we simply disagree that's fine, but we do disagree.
This is something I think many people don't appreciate. A perfect example in practice is the Journal of Personality and Social Psychology. It's one of the leading and highest-impact journals in psychology. A quick search for that name will show it as the source for endless 'news' articles from sites like the NYTimes [1]. And that journal has a 23% replication success rate [2], meaning there's roughly an 80% chance that anything you read in the journal, and consequently from the numerous sites that love to quote it, is wrong.
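A quick back-of-the-envelope check of that figure (the 23% rate is from the comment above; the rounding to "about 80%" is the only step added here):

```python
# If 23% of findings replicate, a given finding fails to replicate
# with probability 1 - 0.23 = 0.77, i.e. roughly the "about 80%" quoted above.
replication_success_rate = 0.23
failure_rate = 1 - replication_success_rate
print(f"Chance a given finding fails to replicate: {failure_rate:.0%}")  # -> 77%
```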
The purpose of peer review is to check for methodological errors, not to replicate the experiment. With a few exceptions, it can't catch many categories of serious errors.
> higher retraction/unverified
Scientific consensus doesn't advance because a single new ground-breaking claim is made in a prestigious journal. It advances when enough other scientists have built on top of that work.
The current state of science is not 'bleeding edge stuff published in a journal last week'. That bleeding-edge stuff might become part of scientific consensus in a month, or a year, or three, or five - when enough other people have built on that work.
Anybody who actually does science understands this.
Unfortunately, people with poor media literacy who only read the headlines don't understand this, and assume that the whole process is all a crock.