
They are not intrinsically truth seekers, and any truth seeking behaviour is mostly tuned during the training process.

Unfortunately, that also means it can easily be undone. Just look at Grok in its current lobotomized version.



> They are not intrinsically truth seekers

Is the average person a truth seeker, in the sense of performing truth-seeking behavior? In my experience we prioritize sharing the same perspectives and getting along with others far more than critically examining the world.

In the sense I just expressed, of figuring out the intent behind a user's information query, that really isn't a tuned thing: it's inherent to generative models by virtue of holding a lossy, compressed representation of the training data, and it's also the kind of truth-seeking practiced by people who want to communicate.


You are completely missing the argument that was made to underline the claim.

If ChatGPT claims arsenic to be a tasty snack, nothing happens to it.

If I claim the same, and act upon it, I die.


You are right. I completely ignored the context in which the phrase "truth seeker" was used and applied my own mistaken interpretation to it. I in fact agree with the comment I was responding to that they "work with the lens on our reality that is our text output".


If ChatGPT claims arsenic to be a tasty snack, OpenAI adds a p0 eval and snuffs that behavior out of all future generations of ChatGPT. Viewed vaguely in faux genetic terms, the "tasty arsenic gene" has been quickly wiped out of the population, never to return.

Evolution is much less brutal and efficient. To you death matters a lot more than being trained to avoid a response does to ChatGPT, but from the point of view of the "tasty arsenic" behavior, it's the same.


It's difficult to ascertain the interests and intent of people, but I'm even more suspicious and uncertain of the goals of LLMs, which literally cannot care.


>Is the average person a truth seeker in this sense that performs truth-seeking behavior?

Absolutely


I keep seeing news articles lately claiming that Grok is flawed or biased, but I've been unable to replicate any such behavior myself.

That said, I don't ask controversial or political questions; I use it to search for research papers. But when I do try the occasional question of that sort, the response is generally balanced and similar to what any other LLM gives.



