When we ran it, the workload was mostly during weekdays in different timezones (so distributed over more than 8 hours). This was on a single c4.2xlarge AWS server, which was bigger than we needed. To be clear, I wasn't trying to make a claim about high load with that statement, just that the system has been running smoothly with steady activity for a long time.
Each week the number of individual users was in the thousands; that was usually a rolling window of mostly new users, since people tend not to do more than a few e-learning courses per year. I'm certainly not claiming that there are no bugs -- in all likelihood there are -- only that no bugs have yet crashed the system or been obvious and serious enough for users to tell us about.
Sorry I couldn't make it more clear. Was there anything particular that you didn't understand?
Agreed on basic income (in general; I'm not sure about the details). Jobs have so far been a convenient and pragmatic (not necessarily fair) way to both create and distribute wealth. At the same time, we should note that popular alternatives like communism or socialism have failed rather badly.
No, not for it having significant economic and social implications; that's more a corollary of the employment rate decreasing -- I'll change that sentence. The argument for the Luddite fallacy not being a fallacy was that humans have so far been able to compete with technology, but that we're fast losing that edge, and when that happens, things really are different this time. Does that make sense?
>was that humans have so far been able to compete with technology
Once a ditch-digging machine was created, it outcompeted humans in that niche, but humans were still by far the better generalists. A better way of looking at this would probably be through systems biology. Machines started out as the analogue of a specialist lifeform. More and more, they are evolving into a generalist lifeform. As they become more generalist, they will directly compete with a larger share of humans. This will push more people into specialist roles in society. Any specialist lifeform runs the risk of extinction if the environment it depends on is significantly altered or disappears.
The strange thing is that this is well understood in biology, but for some reason when we apply it to people we think it doesn't work that way. Most people are limited in their ability to significantly retask. If you think you'll take a bunch of 50-year-old accountants and turn them into (good) computer techs for the 15 years before their retirement, I would guess it won't go so well. As the rate of change increases because of technology, this becomes a bigger and bigger problem: specialisation takes time to achieve, and by the time you become well versed, the entire field you are in could be automated.
That's an assertion, not an argument. The article also does not support it in any way (which would turn it into an argument). Thus, I'll have to agree with the GP that it is an unsupported assertion.
It's an assertion that I happen to agree with, but not because of this article.
Perhaps; I might be missing something. The way I understand it, "the Luddite fallacy is not a fallacy" is an assertion (OED: "a confident and forceful statement of fact or belief"). The reason, I claim, is that humans will not be able to compete with robots for much longer (in large numbers), which means unemployment is likely to go up (I understand that that's not a strict implication, since governments could ban robots). The reason humans won't be able to compete with robots is that technology is gaining more and more of the abilities that humans use in their jobs (like reasoning and visual recognition). Those reasons consist of (a set of) assertions that could be wrong, but they are reasons, and an argument is (OED again) "a reason or set of reasons given in support of an idea, action or theory". Thus, I thought that what I did qualified as an argument, or am I mistaken?
In any case, if you agree with the assertion, what would be your argument for it?
Yes, the Luddite "fallacy" is also an assertion. It's based on strong historical data and weaker economic theory.
Your comment does make sense, but "we are losing the edge" does not automatically mean that we'll ever be completely defeated. And even a small victory is good enough to avoid a crisis, because of the Jevons paradox (which is also not a paradox).
To argue that we are headed for a crisis where humans won't be able to compete with capital, one needs evidence supporting that there'll be absolutely no economic activity where humans will outcompete machines (at least for a reasonably big share of humans).
I do think that'll happen, because there's no feature of a human that a good enough machine could not emulate, and machines are inherently cheaper (because we are "wasteful" from a production perspective). But my argument is fundamentally a restatement of materialism, for which the only possible evidence is the lack of evidence for the alternatives.
Also, the timing is iffy; there's little evidence that we'll have that crisis soon (there's little evidence either way, but what there is mostly points toward a crisis soon). I happen to think we will, because our current machines have started to do lots of tasks that we learned were very hard during the last AI explosion. But there's no guarantee that there aren't even harder tasks that we just haven't tried yet. Also, our computers are approaching the capacity that people estimate our brains have, but those estimates rest on lots of assumptions that could easily be wrong.
You might not find them convincing, but the reasons I believe that is the case are:
- Human hardware is fairly fixed (unless we go the cyborg route), whereas robot hardware (at least the computation part) evolves roughly exponentially, and I don't see reasons for that to stop.
- As robot behaviour evolves (whether through deliberate design, genetic algorithms, or other types of learning), improvements can be replicated quickly and approximately for free. Improvements to human behaviour are notoriously hard, expensive, and time-consuming to replicate.
- We can rewrite many of our wealth creation recipes to make use of more specialised robots instead of flexible humans, which means robots won't need to get close to general AI before this has significant effects on jobs.
- We are starting to see robots perform the most sophisticated human skills: visual recognition, acting on and producing language, and decision making under uncertainty. Granted, robots don't do most of these things very well yet compared with humans, but I don't see fundamental reasons why development would stop short of human abilities.
- Robots can work 24/7, won't go on vacation, won't quit on you, don't play political games with the other robots, won't sue you, don't require food and bathrooms, and they'll make fewer mistakes.
- If you're mostly questioning the timing, I don't have a particularly good answer, but given how I understand the state of things, I believe we're talking low single-digit decades rather than centuries before a significant proportion of people look around and can't find a job they could do better than a robot for a liveable wage (without government subsidies). If you disagree on the timescale, I think we'd need a detailed discussion about how we each understand technological developments and the jobs people do. You may well be able to convince me that I'm off on the timing.
You're right, the micro-level version isn't a fallacy (either...), though I think that's less contentious, no? Many countries already have some experience with this. For example, US agriculture has gone from employing 70-80% of workers in 1870 to less than 2% in 2008 (https://en.wikipedia.org/wiki/Agriculture_in_the_United_Stat...). Some countries now have welfare systems that provide some security on an individual level, though in their current form they tend to rely on the average employment rate remaining fairly high.
Indeed, that is the question. The post tries to show that "cut your hair and get a job" will disappear as an answer to "how do I get to partake in this wealth creation" for a large number of people, so we'll probably want to come up with an answer that scales better than, say, today's social security, and that also works for those who live in countries without social security.
I asked myself the same question. What the best choice is depends on stated goals, assumptions, and predictions, which means others might differ in their assessment of the right tool for the job. In this case, why was Haskell not the best choice for the project, and do you think those who tried to introduce it would have agreed?
The author makes the implicit and false assumption that humans will always be able to compete with machines at something that other people are willing to pay for. That has been, and probably still is, the case, but I see no reason to think it will continue indefinitely. Humans have three key properties that have kept us competitive with machines:
- "Intelligent" observation-based decision making
- Flexible manipulation of objects
- Teachability
Technology still has some way to go to match our capabilities here, but it's getting there. The dynamics are roughly that humans improve or change linearly through education, while technology can improve roughly exponentially.
Technology is improving at a higher rate than humans -- our wetware/bodies have fundamental limits that hardware/software doesn't. Hence, at some point -- sooner or later -- we start to run out of jobs that we are willing to pay humans to do instead of machines. (Lowering minimum wages could delay, but not stop, that.)
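To make that linear-vs-exponential dynamic concrete, here's a toy sketch in Python. All the numbers (starting capabilities, yearly human gain, machine doubling time) are made-up assumptions for illustration, not measurements; the point is only how quickly an exponential curve overtakes a linear one, even starting from far behind.

    # Toy model: linear human improvement vs. exponential machine improvement.
    # All numbers below are illustrative assumptions, not measurements.
    HUMAN_START = 100.0        # humans start far ahead (arbitrary capability units)
    HUMAN_GAIN_PER_YEAR = 1.0  # linear improvement via education/training
    MACHINE_START = 1.0        # machines start far behind
    DOUBLING_YEARS = 2.0       # assumed machine-capability doubling time

    def human(year):
        return HUMAN_START + HUMAN_GAIN_PER_YEAR * year

    def machine(year):
        return MACHINE_START * 2 ** (year / DOUBLING_YEARS)

    # Find the first year the exponential curve overtakes the linear one.
    year = 0
    while machine(year) < human(year):
        year += 1
    print("machines overtake humans in year %d (human=%.0f, machine=%.0f)"
          % (year, human(year), machine(year)))

With these assumptions the crossover comes in year 14; giving humans a 100x bigger head start only pushes it out to year 27. That's the usual surprise with exponentials, and it's why the fixed-wetware point above matters more than where either curve starts.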
He does say "passage" to refer to a small part of a composition, not the whole thing. Your thinking is spot on though -- the length of the segment needs to be considered.
You need to adjust the segment length to reflect your current success/failure. Either that, or you need to slow down. I'm viewing this from a musician's perspective, but those are the two basic tools in my practice toolbox:
1) slow down until comfortable
2) isolate the thing that is giving you the actual trouble, and fix that before attempting the bigger challenge
The thing to avoid is practicing mistakes. These two basic principles attempt to minimize that.
So Good They Can't Ignore You by Cal Newport (https://www.goodreads.com/book/show/13525945-so-good-they-ca...)