
"I hope I have been able to convince you that the Luddite fallacy is not a fallacy and that this will have significant economic and social implications."

I'm sorry, did you actually make an argument? All I could find were some unsupported assertions.



No, not for it having significant economic and social implications; that's more a corollary of the employment rate decreasing -- I'll change that sentence. The argument for the Luddite fallacy not being a fallacy was that humans have so far been able to compete with technology, but that we're fast losing that edge, and when that happens things really are different this time. Does that make sense?


>was that humans have so far been able to compete with technology

Once a ditch-digging machine was created, the machine outcompeted humans in that niche, but humans were still by far better generalists. A better way of looking at this would probably be systems biology. Machines started out as the equivalent of a specialist lifeform. More and more they are evolving into a generalist lifeform. As they become more generalist they will directly compete with a larger share of the human population. This will push more people into specialist roles in society. Any specialist lifeform runs the risk of extinction if the environment it depends on is significantly altered or disappears.

The strange thing is that this is well understood in biology, but for some reason when we apply it to people we think it doesn't work that way. Most people are limited in their ability to significantly retask. If you think you'll take a bunch of 50-year-old accountants and turn them into (good) computer techs for the 15 years before their retirement, I would guess it won't go so well. As the rate of change increases because of technology, this becomes a bigger and bigger problem: specialisation takes time to achieve, and by the time you become well learned, the entire field you are in could be automated.


That's an assertion, not an argument. The article also does not support it in any way (which is what would turn it into an argument). Thus, I'll have to agree with the GP that it is an unsupported assertion.

It's an assertion that I happen to agree with, but not because of this article.


Perhaps I'm missing something. The way I understand it, "the Luddite fallacy is not a fallacy" is an assertion (OED: "a confident and forceful statement of fact or belief"). The reason, I claim, is that humans will not be able to compete with robots for much longer (in large numbers), which means unemployment is likely to go up (I understand that's not a strict implication, since governments could ban robots). The reason humans won't be able to compete with robots is that technology is gaining more and more of the abilities that humans use in their jobs (like reasoning and visual recognition). Those reasons consist of (a set of) assertions that could be wrong, but they are reasons, and an argument is (OED again) "a reason or set of reasons given in support of an idea, action or theory". Thus, I thought that what I did qualified as an argument, or am I mistaken?

In any case, if you agree with the assertion, what would be your argument for it?


Yes, the Luddite "fallacy" is also an assertion. It's based on strong historical data and weaker economic theory.

Your comment does make sense, but "we are losing the edge" does not automatically mean that we'll ever be completely defeated. And even a small victory is good enough to avoid a crisis, because of the Jevons paradox (which is also not a paradox).

To argue that we are headed for a crisis where humans won't be able to compete with capital, one needs evidence supporting that there'll be absolutely no economic activity where humans outcompete machines (at least for a reasonably big share of humans).

I do think that'll happen, because there's no feature of a human that a good enough machine could not emulate, and machines are inherently cheaper (because we are "wasteful" from a production perspective). But my argument is fundamentally a restatement of materialism, for which the only possible evidence is the lack of evidence for the alternatives.

Also, the timing is iffy: there's little evidence that we'll have that crisis soon (there's little evidence either way, but what there is mostly points toward a crisis soon). I happen to think we will, because our current machines have started to do lots of tasks that we learned were very hard during the last AI explosion. But there's no guarantee that there aren't even harder tasks that we just haven't tried yet. Also, our computers are approaching the capacity that people estimate our brains have, but those estimates rest on lots of assumptions that could easily be wrong.


"...that humans will not be able to compete with robots for much longer..."

Do you have any convincing reasons to believe that?


You might not find them convincing, but the reasons I believe that is the case are:

- Human hardware is fairly fixed (unless we go the cyborg route), whereas robot hardware (at least the computation part) evolves roughly exponentially, and I don't see reasons for that to stop (see the rough sketch after this list).

- As robot behaviour evolves (whether through deliberate design, genetic algorithms, or other types of learning), improvements can be replicated quickly and approximately for free. Improvements to human behaviour are notoriously hard, expensive, and time-consuming to replicate.

- We can rewrite many of our wealth creation recipes to make use of more specialised robots instead of flexible humans, which means robots won't need to get close to general AI before this has significant effects on jobs.

- We are starting to see robots perform the most sophisticated human skills: visual recognition, acting on and producing language, and decision making under uncertainty. Granted, robots don't do most of these things very well yet compared with humans, but I don't see fundamental reasons why development will stop short of human abilities.

- Robots can work 24/7, won't go on vacation, won't quit on you, don't play political games with the other robots, won't sue you, don't require food and bathrooms, and they'll make fewer mistakes.

- If you're mostly questioning the timing, I don't have a particularly good answer, but given how I understand the state of things, I believe we're talking low single-digit decades rather than centuries before a significant proportion of people look around and can't find a job they could do better than a robot for a liveable wage (without government subsidies). If you disagree on the timescale, I think we'd need a detailed discussion about how we each understand technological development and the jobs people do. You may well be able to convince me that I'm off on the timing.
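To put a rough number on the first point: if compute per dollar keeps doubling every couple of years (an assumed Moore's-law-style figure, not something established in this thread) while human hardware stays roughly flat, the gap compounds quickly. A minimal, purely illustrative sketch:

    # Illustrative only: assumes a ~2-year doubling time for compute per
    # dollar, compared against a roughly fixed human baseline.
    def relative_compute(years, doubling_time=2.0):
        return 2 ** (years / doubling_time)

    for years in (10, 20, 30):
        print(f"{years} years: ~{relative_compute(years):,.0f}x today's compute per dollar")

Even if the assumed doubling time is off by a factor of two, the shape of the comparison stays the same: fixed human hardware against compounding machine hardware.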



