The corollary here is quite interesting: if we imagine this trend extending well into the future, we might ask what sort of society we'd like to live in once more and more jobs are lost because technology makes them pointless for humans to do.
Say only 10% of humans had any skill whatsoever that would provide value that could not be more cheaply obtained by deploying technology. Say it got further, to 1%.
Imagine a world where only a handful of the most incredibly skilled humans can do anything better (i.e. more cost-effectively) than a computer, and the rest are rightfully unemployed because they have nothing to offer. A world where your 20 years of hard-earned programming expertise is utterly worthless in the marketplace, because an open source program exists that will read a plain language spec and spit out a good approximation of the same code you would have written, except without buffer-overrun bugs, and it'll be done in 20 seconds instead of three weeks. Where "lawyer" and "doctor" are professions-that-were because humans aren't fit to compete, and where, one by one, all the other careers considered unassailable today are swallowed up by increasingly sophisticated automation. Sure, we might be left with a few professions that require physical humanity (people will still want to see people in entertainment, for instance), but in the end most of the wealth in the world will not be created by humans in any direct way.
You might start to understand why it's not such a terrible idea to move closer to a socialist society as time goes on - eventually we're all going to be unskilled labor, possibly even unemployable, and we're close enough to that point (I'd say within a couple of generations at the outside, barring some major disruptions that hold back further technological progress) that we should probably start thinking about the consequences.
"an open source program exists that will read a plain language spec and spit out a good approximation of the same code you would have written"
And who will write the spec? And why will it be any easier to write the spec than the program? "Plain language" is a misnomer. Idealistic people try to write legal contracts in plain language, but it doesn't work.
"And who will write the spec? And why will it be any easier to write the spec than the program?"
Sure, there will be people at the top of the command hierarchy dictating what needs are being met. That doesn't mean that spec-writing jobs will provide gainful employment for as many people as are employed fulfilling specs these days.
As for why it will be any easier to write the spec than the program: it already is. "Plain language" is exactly how people spec things out all the time these days, and it serves me quite well when I'm getting work assignments ("The Fizzle button is making the back end Furble on Tuesday nights, fix it!", or "Add a coupon code field to our order page and an administration tab to add codes to the database - just make sure the same code can't be used more than once").
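To make the filling-in-the-blanks concrete, here's a minimal sketch of what fulfilling that second spec might look like. Everything here is invented for illustration (the schema, the function names, the in-memory database); the point is how much the two-sentence spec leaves to the implementer - for instance, it never says how "can't be used more than once" gets enforced.

    import sqlite3

    # Toy fulfillment of the coupon-code spec above. The schema and the
    # function names are invented; the spec mentions none of them. "The
    # same code can't be used more than once" becomes a PRIMARY KEY (no
    # duplicate codes) plus a redeemed flag (no double redemption) -
    # blanks the spec left open.

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE coupons (
            code     TEXT PRIMARY KEY,
            redeemed INTEGER NOT NULL DEFAULT 0
        )
    """)

    def add_code(code):
        """Administration tab: add a new coupon code to the database."""
        conn.execute("INSERT INTO coupons (code) VALUES (?)", (code,))

    def redeem(code):
        """Order page: mark a code used, refusing unknown or reused codes."""
        cur = conn.execute(
            "UPDATE coupons SET redeemed = 1 WHERE code = ? AND redeemed = 0",
            (code,))
        return cur.rowcount == 1

    add_code("SAVE10")
    assert redeem("SAVE10")       # first use succeeds
    assert not redeem("SAVE10")   # reuse is rejected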
"Idealistic people try to write legal contracts in plain language, but it doesn't work."
People tell their lawyers in plain everyday language what they care about achieving with a legal contract, and the lawyers fill in the blanks based on what they know. Similarly, normal people tell their programmers what they care about achieving, and the programmers fill in the blanks. Sure, we can't automate that filling-in-the-blanks super well yet, and yes, there often has to be some back and forth to figure out what the person really wants (this is, IMO, one of the most important things missing in a lot of machine learning approaches, as human thought - especially language processing - involves a lot of feedback between systems and backtracking). But to me it's not hard at all to imagine all of that being extremely successfully automated once we have thirty years more NLP research under our belts and laptop computers that would put the most expensive Hadoop cluster you could construct today to shame...
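That back-and-forth loop is mechanical enough to caricature in a few lines. Below is a deliberately toy sketch of it, reusing the coupon spec from above: propose a reading of an ambiguous request, get feedback, backtrack to the next candidate. The hand-written interpretation list and the stand-in "client" are hypothetical placeholders; a real system would generate the candidates with actual NLP.

    # Toy sketch of the feedback-and-backtracking loop: propose an
    # interpretation, get the requester's feedback, backtrack on rejection.
    # The candidate readings are hand-written stand-ins for real NLP output.

    def clarify(request, interpretations, accepts):
        """Walk candidate readings of `request`, backtracking on rejection."""
        for reading in interpretations:   # backtrack: move on to the next
            if accepts(reading):          # feedback from the requester
                return reading
        raise ValueError(f"no accepted reading of {request!r}")

    request = "make sure the same code can't be used more than once"
    interpretations = [
        "reject duplicate codes when an administrator adds them",
        "reject a code at redemption time if it was already used",
    ]

    # Stand-in for the human client, who meant the redemption-time rule.
    print(clarify(request, interpretations,
                  accepts=lambda r: "redemption" in r))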
Then again, I'm one of the weirdos who think that the "problem" of intelligence can be solved by a much less sophisticated algorithm than the one our brains implement, under the theory that evolution typically only does one thing well: create highly complex solutions to medium-difficulty problems.
Is this an argument claiming something can't ever happen, not because it's impossible, but because it's -bloody hard-?
That's a ridiculous argument... and remember, the example was ridiculous too, talking about a world where only 10% of people had jobs: people would build -very difficult- things to find employment. (Incidentally, all of us are doing cognitively harder things than our ancestors did to find employment today.)