OP strikes me as someone who is trying to position himself as a "moderate" but who is mistaking a caricatured extreme (<0.25%) for "functional programmers". Honestly, I like FP extremists better than their OOP cargo-cult counterparts, who have the same smugness but are usually better at getting managerial blessing. In many companies, OOP acolytes glow in the managerial sun like Dudley Dursley, despite the general crappitude of their ideas.
"Functional programming" in the real world isn't about dogmatically eliminating mutable state. It's about managing it. It's about modularity. It's not far from the original vision behind OOP-- before it got taken away by the wrong crowd-- which was that when complexity is inevitable, it should be encapsulated behind simpler interfaces. For example, SQL is append-only (as functional as one can get over a growing knowledge base) if you never use DELETE or UPDATE. The inner guts of SQL are not "purely functional"-- caching and indexing cause internal state changes, for performance reasons-- but the interface is.
I don't even know what endofunctors are. If it's relevant to my job, I'll learn about them, but I care more about writing good software than reciting category-theory arcana. I'm sure it's important (especially as an inspiration for design) for some people, but you don't need to know them to do functional programming. Not even close. And I was already a productive functional programmer (in Ocaml and Lisp) before I even knew what monads were.
That's the thing: FP is not only a great model for writing CRUD or any other kind of data-processing system, but things like Haskell's type system also make it incredibly powerful for "hacking away" at code while controlling complexity. And both are things that FP extremists don't seem to like to promote. The "scientific" culture seems to revolve around using it for numerical computations, while trying to get as close to doing "formal proofs" as possible.
The problem is that the best, most expressive functional languages (lookin at ya Haskell) are mathematically based on this theory. If you want to use these languages, you have a choice between (a) learning the theory, and (b) not fully understanding the tool you're using. Both have significant downsides from a UI perspective.
Learn You A Haskell is a classic case of (b). As a tutorial it's completely successful, and how? It tells you that you can learn Haskell without learning PL theory. It also neglects to tell you how to debug you a Haskell (without learning PL theory). But once you type in the examples, I'm sure you can figure it out.
A lot of people are smart enough to program on top of mysterious black boxes. That is: a lot of people are smart enough to work past this UI problem. Moreover, the smartest people can learn Haskell by rote and then gradually, through intuition and experience, learn to grok the black box. That doesn't mean, I think, that it isn't a problem.
Of course you could just learn the theory. I'd say anyone who can graduate with a math degree from an Ivy-class university can do it. Is you a member of this set? Find out by reading the Wikipedia page for Hindley-Milner:
I'm sure there are simpler explanations of H-M (Haskell's type inference algorithm). But they're not on Wikipedia. I sent this link to a friend of mine with a PhD in astrophysics from Caltech. "Which seems easier to you?" I asked. "This, or special relativity?" You can probably imagine his response.
So, wanted: a higher-order typed functional language whose type system is easier to understand than special relativity. Or of course a proof that such a device is impossible :-)
Do you think H-M, which can be presented in full in less than a 60-minute functional programming lecture, is harder than a whole subfield of physics? I don't think so. Astrophysicists deal with large objects every day, and there might be a reason why they find relativity easier.
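For what it's worth, here is a toy sketch of my own (not from any lecture or textbook) of the unification step at the heart of H-M; the point is only that the core machinery fits on a page.

    import qualified Data.Map as M

    -- A tiny type language: variables, constructors, and function types.
    data Ty = TVar String | TCon String | TFun Ty Ty deriving (Eq, Show)

    type Subst = M.Map String Ty

    apply :: Subst -> Ty -> Ty
    apply s (TVar v)   = M.findWithDefault (TVar v) v s
    apply s (TFun a b) = TFun (apply s a) (apply s b)
    apply _ t          = t

    -- Find a substitution making the two types equal, if one exists.
    unify :: Ty -> Ty -> Maybe Subst
    unify (TVar v) t = bindVar v t
    unify t (TVar v) = bindVar v t
    unify (TCon a) (TCon b) | a == b = Just M.empty
    unify (TFun a1 r1) (TFun a2 r2) = do
      s1 <- unify a1 a2
      s2 <- unify (apply s1 r1) (apply s1 r2)
      Just (M.map (apply s2) s1 `M.union` s2)
    unify _ _ = Nothing

    bindVar :: String -> Ty -> Maybe Subst
    bindVar v t
      | t == TVar v = Just M.empty
      | occurs v t  = Nothing                  -- occurs check
      | otherwise   = Just (M.singleton v t)
      where occurs x (TVar y)   = x == y
            occurs x (TFun a b) = occurs x a || occurs x b
            occurs _ _          = False

    main :: IO ()
    main = print (unify (TFun (TVar "a") (TVar "a")) (TFun (TCon "Int") (TVar "b")))
    -- Just (fromList [("a",TCon "Int"),("b",TCon "Int")])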
BTW: debugging is much less needed in Haskell, so this is not a strong objection to LYAH.
Special relativity is a few formulas, not a "whole subfield of physics." But yes - I didn't really regard this as a serious scientific criticism.
I can present LR parsing in a lecture as well. However, if you had to understand LR parsing to write an HTML 5 app, there would be a lot fewer HTML 5 apps.
By "debugging" I didn't just mean tracing. I meant one of the most basic tasks of the programmer - interpreting and responding to error messages. Is there a single error message shown or explained in LYAH? If so, I missed it.
This is germane because, when a type system works perfectly, it appears magical by definition. When it issues you an error, however, you have to understand what went wrong. It is sometimes possible to do this without understanding the black box, but the skill is again recondite.
I refuse to believe that Haskell programmers are so godly that their programs work the first time they're typed in.
I think "special relativity is a few formulas" is an underestimation, but this is not important.
> By "debugging" I didn't just mean tracing. I meant one of the most basic tasks of the programmer - interpreting and responding to error messages. Is there a single error message shown or explained in LYAH? If so, I missed it.
I could code comfortably in Pascal long before I knew exactly how "expecting a semicolon, found a number" arises. I could code comfortably in Haskell many months before learning HM as well. You know what to do when you see "no instance for Num Char" just as well as you know what to do with that parsing error. No need to think about unification or parsing; a kneejerk reaction is enough.
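To illustrate the kneejerk reaction (my own example, not from the comment above; GHC's exact wording varies by version):

    -- The failing line is left commented out so the module still compiles:
    -- bad :: Char
    -- bad = 'a' + 1     -- GHC: No instance for (Num Char) arising from a use of '+'

    good :: Char
    good = succ 'a'      -- 'b': the reflexive fix, no unification theory required

    main :: IO ()
    main = print good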
> I refuse to believe that Haskell programmers are so godly that their programs work the first time they're typed in.
Of course that's false -- but once a Haskell program is typed and compiled, it is much more likely to work the first time than, say, a Java program.
> I could code comfortably in Haskell many months before learning HM as well.
This is a better argument for you than for Haskell. I wasn't disputing that there's a set of people who can intuitively command an incredibly recondite black box they don't (yet) understand - or that that set includes you. What I will argue is that that set is small, making it very hard to produce a critical mass of Haskell users despite the tremendous academic subsidies Haskell has received.
Imagine you're a helicopter pilot. That it's easy for you to fly a helicopter doesn't mean it's easy to fly helicopters. If we compare the set of people cognitively able to use PHP, to the set cognitively able to use Haskell, I don't think we're far from comparing car drivers to helicopter pilots.
> Of course that's false -- but once a Haskell program is typed and compiled, it is much more likely to work the first time than, say, a Java program.
Indeed. (I'm not just agreeing for rhetorical purposes - this really is true.) Inasmuch as the programmer isn't perfect, however, he spends his time chasing not runtime errors - but static type errors.
I agree that using Haskell is like flying a helicopter when others use cars. However, I still strongly disagree with the choice you wrote earlier:
> If you want to use these languages, you have a choice between (a) learning the theory, and (b) not fully understanding the tool you're using.
Please bear in mind that HM is only the basic idea of how Haskell's type system works. The current inner workings of GHC's type checking are described in an 80-page research paper on OutsideIn(X), http://www.haskell.org/haskellwiki/Simonpj/Talk:OutsideIn. I never read more than half a page of it. Yet I can use GADTs, type families, existentials, rank-2 types and so on with no trouble. I don't think Haskellers who did not read those 80 pages are "not fully understanding the tool they're using".
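For instance, a small GADT like the one below (a toy of my own, not taken from the paper) can be written and used comfortably on intuition alone:

    {-# LANGUAGE GADTs #-}

    -- A tiny typed expression language: the constructors carry their result types.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- Pattern matching refines the type of 'a' in each branch; GHC checks it all.
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e

    main :: IO ()
    main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))  -- 3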
Why should I have to know the theory - OutsideIn(X), HM, category theory, etc. - to "fully understand" the tool? Intuitive understanding gained by practice is enough.
Intuitive understanding gained by practice is enough for you. If Haskell didn't exist, you could probably invent it. The result might even be better.
Neighbor, how much code is there in the world? And how many coders as good as you? Divide these numbers, and you'll see why the world can't possibly adopt Haskell.
Some people can climb Half Dome with their fingernails. That doesn't mean there shouldn't be a fixed rope, too - that is, if your goal is to maximize the number of people who climb Half Dome. (If the goal is to separate the men from the boys, Haskell is already doing just fine and shouldn't change a thing.)
I think intuitive understanding gained by practice is enough for every programmer, not just me. Learning by programming is easier than reading research papers and digging into the theory. What's more, it was a prerequisite for me: I could not understand papers on type systems or category theory before I saw Haskell code. I have heard similar opinions often - Haskellers learn Haskell first, become able to code comfortably, and only then are able to read about the theory. Theory seems hard, dull and useless at first. Even now, I feel a lot of it is cruft.
The semi-official motto of Haskell is "Avoid success at all costs", not world domination. It is enough for it to be a tool for hackers, not for everyone. Haskell expands extremely slowly, yet steadily.
> If we compare the set of people cognitively able to use PHP, to the set cognitively able to use Haskell...
These may be very different meanings of "use."
Using PHP, Java, etc. involves a fair bit of purely mechanical (think "human compiler") labor - cranking out boilerplate, and solving the same idiotic non-problems (all "patterns" spoken of by programmer types) again and again and again. Both of these are things that the average programmer can get quite good at. He will think of himself as a master craftsman. But what he really is: is a valve turner on a Newcomen steam engine. (http://www.loper-os.org/?p=388)
If Haskell were to be rid of all of the shortcomings you have described, the masses would still avoid it for the above reason. Most programming work - particularly paid work, that an ordinary person can reliably get hired to do - is of the idiotic makework variety. So a language which does not supply a steady stream of such makework (e.g. Common Lisp) will stay obscure.
If we were to banish the automatable makework, only the intrinsic complexity of intrinsically-complex problems would remain. And we would need about the same number of computer programmers as we need neurosurgeons. (Fewer, because there are many meat bags and they are always diseased. But engineering problems, once solved, stay solved - or they would, in a sane world.)
It appears that SR and HM both benefit from great teachers. I learned HM in a couple weeks of an undergraduate course, and that knowledge has continued to pay off. A good introduction can be found in Krishnamurthi's free book PLAI in the chapters on types. http://www.cs.brown.edu/~sk/Publications/Books/ProgLangs/200...
Every language is based on a huge mass of things which might fall under "fully understanding the tool you're using". Do you have a sharper distinction separating type inference algorithms from other things like calling conventions or garbage collection algorithms that languages are presumably allowed to sweep under the rug?
Type inference in general seems particularly benign as far as black boxes go - it is somewhat user visible, but you can limit your exposure to knowing where to put explicit type annotations.
The plain Hindley-Milner type system is particularly well behaved, in that the inference algorithms for it are guaranteed to work if there is any valid way at all to assign types. Combine that with type erasure guaranteeing the exact types assigned can't affect program behavior, and the details of inference are doubly encapsulated.
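To make the "where to put explicit type annotations" escape hatch above concrete, a tiny example of my own:

    -- Without the signature, GHC cannot pick a result type for 'read' here
    -- (Read alone does not default), so the annotation resolves the ambiguity.
    parsed :: Int
    parsed = read "42"

    -- The same thing inline, at the expression that needs it:
    doubled :: Int
    doubled = 2 * (read "21" :: Int)

    main :: IO ()
    main = print (parsed, doubled)   -- (42,42)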
Did you ask your friend if it actually looked non-trivial?
I am not particularly fond of GC either, but at least it is semantically opaque to the programmer (at least in theory).
Optimizing compilers run large numbers of algorithmically complex optimizations that are equally opaque. The abstraction is actually abstract.
If you're going to understand why the compiler rejected the program, you have to understand the reasoning process of the compiler. Understanding the constraints on the result is not enough - simply because the programmer, to be sure his or her program will work, has to follow the same algorithm as the compiler, or at least some equivalent calculation, in his or her own dumb head. This statement is not specific to Haskell, but true for all languages everywhere.
That for many individuals this is doable, with or without a formal understanding of PL theory, is true. Even for these individuals, I assert that it remains a heavy cognitive load. And many, probably most, are simply unable to lift it.
My general sense of the conventional wisdom is that Haskell has the reputation of being mathematically deep and difficult. It matters not at all whether this is a true or accurate perception, so long as it is indeed perceived. And indeed, I see plenty of references to this conventional perception on this very same thread.
The difficulty is that you can hardly ever get people to talk publicly about being intimidated by Haskell - because they feel like it's equivalent to admitting that they're stupid. Worse yet, that hypothesis is by no means precluded.
In the real practical world, you simply can't get things done without relying extensively on your poor fellow human beings who happen through no fault of their own to have been born with IQs of 0x7f or less. My theory predicts that Haskell usability among this demographic should be effectively nil. I believe the broadly perceived reality backs me up, and I welcome alternative interpretations of said reality.
Special Relativity is easier to understand than General Relativity. On a wicked-abstract, category-of-systems level, the relationship is like that between "macroevolution" and "microevolution".
Special relativity is like how if a car going 50 mph east collides with a car going 50 mph west it's the same situation as a car going 100mph hitting a stationary wall. When a train is moving at 35 mph and you're driving 50 mph in the same direction and you're passing it at 15 mph, that's special relativity.
Everyone knew about "special relativity"; Galileo wrote about it. It was a label put on something everyone really always knew. This is similar to how everyone knows about "microevolution": farmers and herdsmen have been manipulating animal and plant DNA for centuries. "Macroevolution", like general relativity, required a global perspective.
Einstein wrote about General Relativity for the first time around 1907; it is the idea that everything that moves does so at a rate that is relative to the speed of light.
General relativity has the implications for the clocks and the twins, where one goes off in a rocket ship and comes back and the Earth twin is so much older. Einstein's theory had its basis in differential geometry and was verified by Arthur Eddington's snapshots of starlight bending as it passed through the Sun's gravitational field in 1919. The comment you're replying to had the two topics switched around.
That's not special relativity, that's Newtonian dynamics. Special relativity does have the implications for the clocks and the twin paradox, and general relativity is roughly special relativity plus gravity.
Well, that's embarrassing, you're right. I confused the historical existence of the principle of relativity and how general relativity generalizes special relativity and took them further than they actually go.
Monads are intellectually difficult at first, because it's hard to see how such disparate concepts (IO vs. List vs. Random vs. Async) so neatly follow the same pattern.
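To make "same pattern" concrete, here is a small sketch of my own using Maybe, lists and IO (the Random and Async cases follow the same shape):

    -- One do-block shape, three very different meanings.

    pairMaybe :: Maybe (Int, Int)
    pairMaybe = do
      x <- Just 1
      y <- Just 2
      return (x, y)        -- Just (1,2); any Nothing short-circuits the whole block

    pairList :: [(Int, Int)]
    pairList = do
      x <- [1, 2]
      y <- [10, 20]
      return (x, y)        -- all four combinations

    pairIO :: IO (String, String)
    pairIO = do
      x <- getLine
      y <- getLine
      return (x, y)        -- ordinary sequencing of effects

    main :: IO ()
    main = print pairMaybe >> print pairList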
I personally find it more useful to start by "playing with" the black-box system, and then back-filling the theoretical knowledge if I need to do so. I find that, once I understand the motivation for the problem, the mathematics itself is pretty easy to understand. Math isn't complex or hard. It's simple. The hard part is figuring out the best way to apply the tools it provides.
Sure. I'm not saying that solving the hard problem of learning a black-box system is either impossible, or even necessarily un-fun.
I'm just saying that there is no such problem when it comes to learning, say, PHP. Or any other "townie" language.
So is the imposition of this arcane body of knowledge, "PL theory," whose DNA obviously originates not just in the academy but in the freakin' math department, and which has no relationship to any other body of knowledge commonly held or needed by the working programmer however 31337, essential to HOT-FP? Or could it be in some way dispensed with? If so, that would sure help out the mules on the way home, so to speak...
I would say that the FP people (and yes, this includes you) are solving the wrong problem. Mathematical formalism is not a substitute for understandability. Where is the mathematical proof that your doorknob turns?
No interesting property of a computer system (vs. a single algorithm) can ever be provable in the mathematical sense - because said proof, if true, can be construed as a program in its own right - and where is the proof of its correctness - which is to say, its consistency and correspondence to actual human wants? A. Perlis put this concisely: "One can't proceed from the informal to the formal by formal means."
Switching between alternate mathematical models of computation will not give us better intelligence amplifiers (what the personal computer is really meant to become.) Instead, we need systems which cut the operator's OODA loop delay as close to zero as possible: http://www.loper-os.org/?p=202
Any idiot can use modelling clay. Or a child's set of blocks. These objects have no hidden state, allow for no inconsistent states, possess no "compile" switch. One can build a computer which behaves in the same way. And no mathematical wankery need be involved.
Lustrate the squiggly crowd! Half a century of stagnation in computing is enough:
“Throughout my life I have known people who were born with silver spoons in their mouths. You know the ones: grew up in a strong community, went to good public or private schools, were able to attend a top undergraduate school like Harvard or Caltech, and then were admitted to the best graduate schools. Their success was assured, and it seemed to come easy for them. These are the people— in many, but certainly not in all cases—who end up telling the rest of us how to go about our business in computing. They figure out the theories of computation and the semantics of our languages; they define the software methodologies we must use. It’s good to have their perspective, but it’s only a perspective, not one necessarily gained by working in the trenches or watching the struggles of people grappling with strange concepts. Worse, watching their careers can discourage the rest of us, because things don’t come easy for us, and we lose as often or more often than we win. And discouragement is the beginning of failure. Sometimes people who have not had to struggle are smug and infuriating. This is my attempt to fight back. Theirs is a proud story of privilege and success. Mine is a story of disappointment and failure; I ought to be ashamed of it, and I should try to hide it. But I learned from it, and maybe you can, too.”
- Richard P. Gabriel, "A Personal Narrative: Journey to Stanford", Patterns of Software
"Functional programming" in the real world isn't about dogmatically eliminating mutable state. It's about managing it. It's about modularity. It's not far from the original vision behind OOP-- before it got taken away by the wrong crowd-- which was that when complexity is inevitable, it should be encapsulated behind simpler interfaces. For example, SQL is append-only (as functional as one can get over a growing knowledge base) if you never use DELETE or UPDATE. The inner guts of SQL are not "purely functional"-- caching and indexing cause internal state changes, for performance reasons-- but the interface is.
I don't even know what endofunctors are. If it's relevant to my job, I'll learn about them, but I care more about writing good software than reciting category-theory arcana. I'm sure it's important (especially as an inspiration for design) for some people, but you don't need to know them to do functional programming. Not even close. And I was already a productive functional programmer (in Ocaml and Lisp) before I even knew what monads were.