No, it’s the “rationality.” Well maybe the people too, but the ideas are at fault.

As I posted elsewhere on this subject: these people are rationalizing, not rational. They're writing clichéd sci-fi and bizarre secularized imitations of baroque theology, then reasoning from these narratives as if they were reality.

Reason is a tool, not a magic superpower that enables one to see beyond the bounds of available information, nor does it magically vaporize all biases.

Logic, like software and for the same reason, is "garbage in, garbage out." If even one of the inputs (premises, priors) is mistaken, the entire conclusion can be wildly wrong. Errors cascade, just as they do in software.

That's why every step needs to be checked against experiment or observation before the next step is taken.

I have followed these people since stuff like Overcoming Bias and LessWrong first appeared, and I have never been very impressed. Some interesting ideas, but honestly most of it was a recycling of ideas I'd already encountered in sci-fi or futurist forums back in the 1990s.

The culty vibes were always there, and they instantly put me off, as did many of the personalities.

“A bunch of high IQ idiots” has been my take for like a decade or more.



> As I posted elsewhere on this subject: these people are rationalizing, not rational.

That is sometimes true, but as I said in another comment, I think this is on the weaker end of criticisms because it doesn't really apply to the best of that community's members and the best of its claims, and in either case it isn't really a consequence of their explicit values.

> Logic, like software and for the same reason, is “garbage in, garbage out.” If even one of the inputs (premises, priors) is mistaken the entire conclusion can be wildly wrong. Errors cascade, just like software.

True, but an odd analogy: we use software to make very important predictions all the time. For every Therac-25 out there, there's a model helping detect cancer in MRI imagery.

And, of course, other methods are also prone to error.

> That's why every step needs to be checked with experiment or observation before a next step is taken.

Depends on the setting. Some hypotheses are not things you can test in the lab. Some others are consequences you really don't want to confirm. Setting aside AI risk for a second, consider the scientists watching the Trinity Test: they had calculated that it wouldn't ignite the atmosphere and incinerate the entire globe in a firestorm, but...well, they didn't really know until they set the thing off, did they? They had to take a bet based on what they could predict with what they knew.

I really don't agree with the implicit take that "um actually you can never be certain so trying to reason about things is stupid". Excessive chains of reasoning accumulate error, and that error can be severe in cases of numerical instability (e.g. values very close to 0, multiplications, that kind of thing). But shorter chains conducted rigorously are a very important tool to understand the world.
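The compounding that both sides are gesturing at can be made concrete with a toy sketch in Python (an illustration I'm adding, not anyone's actual model): if a conclusion is reached as a product of independent estimates, Drake-equation style, then even small per-factor biases multiply into a large overall error.

```python
# Toy sketch: a conclusion built as a product of ten estimates.
# If each factor overshoots by just 5%, the final figure is off
# by roughly 63%, not 5% -- errors in long chains compound.
steps = 10
per_step_bias = 1.05  # each estimate is 5% too high

compounded = per_step_bias ** steps
print(f"compounded bias after {steps} steps: {compounded:.2f}x")  # ~1.63x
```

Which is the point about chain length: the same 5% sloppiness is nearly harmless over two or three steps and ruinous over twenty.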


> "um actually you can never be certain so trying to reason about things is stupid"

I didn't mean to say that, just that logic and reason are not infallible and have to be checked. Sure, we use complex software to detect cancer in MRI images, but we constantly check that this software works by... you know... seeing if there's actual cancer where it says there is, and if there's not, we go back around the engineering cycle and refine the design.

Let's say I use the most painstaking, arduous, careful methods to design an orbital rocket. I take extreme care to make every design decision on the basis of physics, and I use elaborate simulations to verify that my designs are correct. I check, re-check, and re-check. Then I build it. It's never flown before. Are you getting on board?

Obviously riding on an untested rocket would be insane no matter how high-IQ and "rational" its engineers tried to be. So is revamping our entire political, economic, or social system on the basis of someone's longtermist model of the future that is untestable and unverifiable. So is banning beneficial technologies on the basis of hypothetical dangers built on hypothetical reasoning from untestable priors. And so on...

... and so is, apparently, killing people, because reasons?



