I have never, ever seen "Monads are just monoids in the category of endofunctors" used in an actual conversation with someone who is trying to learn monads. The phrase is and always has been a joke; surely part of the joke is that most of us who use monads don't understand all that jargon ourselves. http://stackoverflow.com/questions/3870088/a-monad-is-just-a...
BTW, the article asks about design patterns that structure functional programs. Well, monads are one such pattern. That's why they're important, not some mathematical jargon.
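To make that concrete: here is a minimal sketch (the users/emails maps are hypothetical) of the Maybe monad structuring a chain of lookups that can each fail, with no nested case analysis:

    import qualified Data.Map as Map

    -- Each lookup may fail; the monad threads the failure through,
    -- so the whole chain short-circuits to Nothing at the first miss.
    userEmail :: Map.Map String String -> Map.Map String String
              -> String -> Maybe String
    userEmail users emails name = do
      uid <- Map.lookup name users
      Map.lookup uid emails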
Please read between the lines. The author wasn't implying that anyone actually said that, just like he doesn't think anyone has ever said this in real life:
>"Design patterns? Hey everyone, look at the muggle try to get the wand to work!"
I can speak only for Haskell-land, but I honestly see no truth in any of the author's negative claims about the functional programming community.
I have been doing most of my programming in Haskell for 3 years now, having worked in Ruby and Python before that. I can tell you that the Haskell community is among the most cordial, helpful and communicative I have ever seen in any field. In fact, many members of the community will cry foul if anyone puts on a "smug" attitude at any point; I've seen countless comments on reddit get buried for this alone.
If you want proof, just ask a question on StackOverflow[1], join #haskell on freenode or check out the reddit[2] page.
I find that many functional programmers, including Haskell programmers, seem convinced that I will completely abandon C++ if only I spent the time to understand Haskell.
After spending quite a bit of time learning Haskell, seeing a number of advantages and disadvantages, and being unconvinced and staying with C++, the assumption is I didn't "do enough".
Of course, I also know a number of lovely functional programmers as well!
> I find that many functional programmers, including Haskell programmers, seem convinced that I will completely abandon C++ if only I spent the time to understand Haskell.
With enough work, you can accomplish any task in any language. The only thing you can't do is force yourself to use something you don't like. So I commend you for giving it a try, but I certainly don't begrudge you for not liking it or using it for everything.
Programming language proselytization is one of the most obnoxious things about our industry. We love to cloak our feelings in rational language, but at the end of the day, all of the design decisions in a language are choices that have upsides and downsides. Some of these upsides are going to be valuable to a particular user or a particular problem, a lot of them are going to be irrelevant and a few will be detrimental. The factor that determines success is absolutely the individual and not the language.
Believing your language is the best must help motivate people to use it, but it is definitely an irrational belief, and not a technical fact. Advertising language differences as factual improvements when they are, at best, assumptions derived from untested principles must be doing for our industry what snake oil has done for Western medicine.
I used to attend the Bay Area Functional Programmers meetings (apparently now defunct). At one point, I mentioned that I was having a hard time applying FP to my work, which requires lots of iterative updates to vectors that represent states in physical systems.
The general reaction was not that I simply wasn't committed enough to FP. Rather, it was a very thoughtful version of "Yeah I can see where pure FP probably isn't a good fit".
This is almost the definition of religious differences.
Adherents of religion X will tell you how it changed their life, and it can change yours too, if you just believe hard enough.
When you investigate, find problems or just say "it's not for me", it's "your fault" for doing it wrong, or with a bad attitude, or you didn't talk to the right people, or you'll "grow up" later and 'see the light'.
I disagree - choosing a language is not choosing a placebo. Languages are tools, and need to be chosen appropriately. Often the exact choice will not matter (compare Java/C# or Python/Ruby), but sometimes it will.
"it's not for me" is not a valid excuse, it means you cannot handle the tool. This is your fault. A valid excuse is "it's not for my application domain".
Languages are tools to be used by people, not by identical automatons. "This tool doesn't mesh well with the way I approach this class of problems" is a perfectly valid objection.
In my case it's more often "this language makes tradeoffs that are counter to the way I want to work". E.g., I won't touch C++ with a 10 foot pole because I believe it's the compiler's duty to track down and at least warn me about as many errors in my code as possible, and C++ seems to go out of its way to allow you to introduce subtle and hard-to-find bugs. I've worked in a ton of languages from nearly every paradigm (no logic languages yet), and in nearly every paradigm I've found at least one language that works best for me, or best in a particular problem space. Some of that choice is based on need (must be able to search text easily, must have a strong HTTP framework, must have a nice cross-platform GUI, must have a strong matrix math framework, etc.), and some of it is based on preference (prefer strong typing, prefer meta-programming features, prefer garbage collection).
C++ and Haskell have some overlap, but mostly cover different domains. Do you use C++ for high-level application programming that has no performance requirements? Conversely, I would not use Haskell for low-level code that has to have very predictable, tight resource use on an embedded device.
Maybe you're not ready to leave C++, but if you don't have a slightly icky feeling from C++ compared to Haskell, then you've been programming C++ for so long that the weirdness has metabolized.
This is exactly what I am talking about, I'm "not ready to leave C++".
C++ certainly has its faults, and some of them are quite painful. However, while I find there are things I can do more nicely in Haskell than in C++, some jobs call for an XORed linked list, or for implementing trailing arrays by knowing my underlying types are plain C data and just calling memcpy, and I can't do that in Haskell.
At http://minisat.se/ , you will find Minisat, an (in my opinion) neat, easy-to-read(ish) and highly efficient C++ program. I've seen numerous people try to port it to Haskell, and all they have ended up with is something less readable and much slower.
Did you actually mention any of these disadvantages? Most often I see the attitude you describe when someone says "I tried haskell/lisp/ocaml/scala/clojure but it isn't practical". And that's it. No explanation of what their problems were, just "nope, it obviously can't be used for anything practical and you must be crazy if you think you can use such a language". After seeing that kind of nonsense enough, it can be easy for somebody to fall into the trap of assuming you didn't really try it either if you don't bring up some specific problems.
3 years of Haskell... May I ask where you work? I've been trying to learn Haskell but am afraid that there are no jobs out there for Haskell programmers. Job sites aren't helpful in revealing the Haskell jobs either.
I would assert that learning a language for the joy of it, or for how much it expands your mind, is a much better motivation than that you hope it will get you a job. And I believe that a number of potential employers out there feel the same way.
This article serves to fracture the community in a manner disturbingly similar to the 'smug functional programmers' it complains about. I can't help but sigh.
I think the recent rise in popularity of functional programming is a sign that more people are beginning to approach computing from a 'computer science' perspective as opposed to a 'programming' one. That is to say, the industry has matured to a point at which a significant portion of industry programmers are getting excited about some of the more abstract concepts brought to us through functional programming.
Few people will argue that the concepts underlying FP don't take some time to internalize, but the number and quality of FP learning resources is increasing at an incredible rate. This, alone, makes me skeptical of the author's points — people are going out of their way to create accessible learning resources so that they might share something they find to be useful/interesting. In any movement of any kind, there will always be evangelists. Some (read: most) of these evangelists will be overly aggressive and preachy, but they are rarely (read: never) representative of the larger body.
In the past two years I've become quite a fan of Clojure, and I have found the community to be tight-knit, welcoming and eager to share.
Is that why more people are learning FP? I think it's a different trend: recognition that programming is no longer about writing tight loops. Those have mostly been written. Instead, practical programming these days is dominated by orchestrating different people's code. Functional languages move more quickly here because functional code is, in some sense, more reusable by default.
I don't think our ideas are mutually exclusive — what you're describing is the notion of abstraction as it is applied to writing code. FP gives programmers incredible power with regards to reusing code (macros, first class functions, etc) at the expense of taking a little longer (in some cases) to wrap one's head around.
I was merely commenting that I think part of the reason people have been drawn to FP lately is that the industry as a whole has matured to a point in which more people recognize the need for and have developed the skills necessary to embrace such abstractions.
The downside of being better than everyone else is people tend to assume you're pretentious.
I learned FP by doing it, not asking around what monads are. There are plenty of great tutorials and no one has ever seriously talked about monads in that way.
The only time you really need to know what monads are is when you are building a new one.
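For concreteness, here is a hedged sketch of what "building a new one" involves: a toy Writer-style monad (not any particular library's) that threads a log alongside each result.

    newtype Logged a = Logged (a, [String])

    instance Functor Logged where
      fmap f (Logged (a, w)) = Logged (f a, w)

    instance Applicative Logged where
      pure a = Logged (a, [])
      Logged (f, w1) <*> Logged (a, w2) = Logged (f a, w1 ++ w2)

    instance Monad Logged where
      -- the one real design decision: run f, then append its log to ours
      Logged (a, w) >>= f = let Logged (b, w') = f a
                            in  Logged (b, w ++ w')

    tell :: String -> Logged ()
    tell msg = Logged ((), [msg])

Everything else about "what monads are" falls out of writing those few lines and checking that they behave sensibly (the monad laws).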
> The downside of being better than everyone else is people tend to assume you're pretentious.
The article used the word "smugness", not "pretension". They don't quite mean the same thing. Merely being better than someone might certainly seem pretentious, depending on context.
Telling everyone about it (in a context without relevance or evidence), however, is undeniably smug.
I like how you respond to an article addressing smugness in FP by starting your post by "The downside of being better than everyone else is people tend to assume you're pretentious".
I guess the OP doesn't even have to make his case when people like you serve such an unadulterated example of the very problem he was describing.
I've just started having a play with Haskell, which I got interested in after working through the Seven Languages in Seven Weeks book.
I've not really seen much smugness in the Haskell community. Though there are some proper clever people talking about stuff I don't understand, I don't feel they've done it in a patronizing way (or at least it's been too subtle for me to pick up on!). That's my experience anyhow. I realize that the plural of anecdote is not data.
I'm actually more interested in the second part of the article where the author talks about claims of productivity gains from functional programming being exaggerated. I am curious if folk on here can point to counterexamples to the study quoted.
My strong suspicion is that there is much more likely to be a large variance in the productivity of programmers than in the productivity afforded by a particular language or way of programming.
My experience is the exact opposite. Whenever I struggle with an FP concept, those deeply nested in the community tend to be both eager to help and pretty unassuming. This is especially true among Haskellites (Haskellistas?) who usually understand there's a high learning curve associated with the language.
I suspect FP will have more resistance to cargo-cult application. I suspect far fewer people who have a partial understanding of FP will be able to hold a job doing FP than was the case for OOP.
Heck, I know of one programmer who not only held an OOP programming job, but got to write a major subsystem -- written entirely as long, deeply nested class-side methods with 4 to 5 temp variables acting as merge-sort-style incrementing indexes, and not a single instance method or instance variable in sight. Last I heard, she was promoted and has great job security.
I'm not so sure. OOP entered the scene and became a dominant paradigm in a short amount of time - this doesn't happen without massive amounts of evangelization (from existing users) and marketing, neither of which can be particularly smug if they intend on being successful.
OOP was able to become dominant because it came with some clearly defined "killer apps", the likes of which I have yet to see materialize from the FP world. I mean, Smalltalk shows up in Byte and says "look, here is an entire system for graphically interacting with a computer, creating music, simulations, animations, drawing programs, documents, and more. It is so easy that school children can use it. We have managed to accomplish all of this in a few hundred thousand lines of our language called Smalltalk, which incidentally also contains the implementation of all the dev tools and the standard library. How large would this be in the language you are currently using (probably COBOL, Pascal, C or something similar)?"
I get that FP has some very good characteristics, and I have tried to add a bit more "functional-ity" to my code over the years, but at the same time I get an overwhelming urge to say "put up or shut up" every time I am confronted with an article talking about how terrible OO and the programmers who work in it are, and how those working in FP are in some sort of enlightened programming utopia.
Convince me by showing me some examples of programs so good that I can't possibly ignore them, the way OOP did. Convince me by becoming the foundation of almost every program I am currently running on my computer right now, the way OOP did.
Don't convince me by writing article after article showing how insanely elegant your implementation of a mathematical function is (my head will literally explode if I see another Fibonacci example), and how brain-damaged what I am currently using must be. Numerical code comprises probably 0.1% of any computer system I am currently using. Show me instead how easy you just made it to interact with my computer, to write the types of programs that I am currently using and, for bonus points, to write the types of programs that I didn't even think were possible.
Hmmm. The killer app, when it comes, will probably be for parallelism. Since FP languages are better able to control state, they can neatly sidestep many of the contention issues that plague imperative programming in multi-core scenarios. Problems that require heroic programming in imperative realms become much simpler in certain FP languages. And given that clock speed is settling down while core numbers are ramping up, I think this alone will make FP more compelling.
It's more about appropriateness, though, than anything else. I still think OOP is great for things like GUIs and modeling complex domains.
As always, the trick is to pick either the right language for your problem, or pick a multi-paradigm language, and choose the right approach for your problem.
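For instance, here is a minimal sketch in Haskell of why the bookkeeping gets easier (assumes the parallel package; compile with -threaded): since the worker function touches no shared state, parallelizing the map is a one-combinator change, with no locks to get wrong.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    expensive :: Int -> Int
    expensive n = sum [1 .. n]   -- stand-in for real work

    -- evaluate the list elements in parallel across cores
    results :: [Int]
    results = parMap rdeepseq expensive [100000, 200000 .. 1000000]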
> The killer app, when it comes, will probably be for parallelism.
If that's the case, then FP's got a problem in the form of OOP having already done it first and famously with projects like Hadoop.
I think that too much hay is made of the tradition of minimizing mutable state in functional programming. It's not something that's exclusive to FP. And I'd even go a step further and suggest that excessive use of mutable state isn't a hallmark of imperative programming so much as a hallmark of bad imperative programming. Though I'm sure there's also plenty of room for argument there.
Regardless, the fact remains that when functional programming proselytizers suggest that imperative programmers need FP to escape from the bugaboos of excessive mutability, it comes across as somewhat naive. While many functional languages make mutability less convenient, this is not the same as making immutability more convenient. Which I mention by way of pointing out that really the only thing people working in imperative languages need to do to cut back on their use of mutable state is cut back on their use of mutable state.
Now, if the current fad for FP made more noise about something that distinguishes functional languages much more tangibly, it might be more impressive. Take higher-order programming, something most functional languages do very well and most imperative languages do poorly (if at all). Imagine doing something like node.js in a non-functional language.
Or take this couple of minutes from a lecture on F#, where the presenter converts sequential code to asynchronous parallel code in just a few keystrokes. Sure, managing state properly is an important precondition to being able to do that, but the real magic is in the async workflow (a type of monad). http://youtu.be/HQ887aOZITY?t=53m
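The Haskell equivalent of that demo is similarly small; a sketch assuming the async package, with fetchUrl as a hypothetical stand-in for a real network call:

    import Control.Concurrent.Async (mapConcurrently)

    -- hypothetical stand-in for an actual HTTP request
    fetchUrl :: String -> IO String
    fetchUrl url = return ("response from " ++ url)

    fetchAll :: [String] -> IO [String]
    fetchAll = mapM fetchUrl                 -- sequential

    fetchAllPar :: [String] -> IO [String]
    fetchAllPar = mapConcurrently fetchUrl   -- concurrent: a one-word edit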
Yes. That's precisely why I chose it as my example. One of the major themes in my post was statements to the effect of "you need functional programming languages to do X" often being mistaken, so that detail makes Hadoop a rather poignant case.
I agree. I think, perhaps, a better point to make is that FP makes certain things easy, or the default, while doing them in an imperative language requires discipline, which I think is a mistake to rely on at the industry level.
It's a bit of a reductio ad absurdum, but remember: technically everything could be written in assembler. Languages and their idiomatic usage matter. Any C++ person could point out that, yeah, everything you can do in Java can be done in C++, but the fact that Java does automatic memory management still has value to many people.
Totally. And I also think a lot of imperative languages make that kind of discipline more of a hassle than it needs to be. (Giant peeve: .NET does not have any decent options for immutable collections in its standard libraries.)
But the author does bring up a good point: The industry is full of people who were taught using Scheme in university, but quickly switched to an imperative language when given the chance. Having TA'd classes that are taught in Scheme, and spent a lot of time tutoring people who fit into that class, and watching the way they go nuts when they're finally shown set! for the first time, I've come to the conclusion that it isn't merely a preference. Most of them really do have a genuinely easier time reasoning about and writing correct imperative code. It would also be a mistake to rely on a programming paradigm in which many people have a hard time functioning at the industry level.
If there's a middle ground that allows people to keep their FOR-loops while discouraging them from using mutability in the contexts where it's dangerous, that's the direction I think industry should be headed in.
Well, OOP had some help in the form of C++. It was easy to sneak in because it didn't strictly require people who were used to C to change their habits. Folks could, in effect, switch compilers first and then learn the language at their leisure. Of course, you could also make the argument (and I would) that C++ wasn't really an object-oriented language so much as a structured language with classes thrown in.
Most of the popular functional languages, on the other hand, require folks to do a whole lot of mental gearshifting before they can be productive in the new language. That's a natural impediment to adoption.
I suspect, though, that functional programming has finally found its C++, in the form of more recent versions of C#. Like with C++, it doesn't require users to immediately switch programming styles to be able to get the job done. Also like C++, though, you really can't use the language to its full capacity without getting very comfortable with a lot of functional programming concepts. Pretty much all of the neat new stuff in the .NET 4 class libraries, for example, is built around higher-order programming. Most of it will also be useless (or at least severely crippled) for someone who can't get past relying on mutable state.
I don't agree that "OOP entered the scene and became dominant in a short amount of time", though I guess it depends how you define "entered the scene" (Simula 67? Smalltalk 80? C++ in the mid 80s? Java in the mid 90s?) and "dominant" (presumably the rise of Java as a mainstream language in the late 90s?). Maybe if by OOP you just meant Java?
Note that in the early days of OOP (pre-Java, say the first few years of OOPSLA in the mid to late 80s) there was a lot of overlap between the FP and OOP communities, both because CLOS was one of the pioneers exploring some of the far corners of OOP, and because aficionados of weird languages tend to flock together.
Out of curiosity: when was the last time you read a story that told of company X dominating the market because they used programming paradigm N? Just a thought.
> Our hypothesis was that if we wrote our software in Lisp, we'd be able to get features done faster than our competitors, and also to do things in our software that they couldn't do. And because Lisp was so high-level, we wouldn't need a big development team, so our costs would be lower. If this were so, we could offer a better product for less money, and still make a profit. We would end up getting all the users, and our competitors would get none, and eventually go out of business. That was what we hoped would happen, anyway.
I totally agree with the author in my experiences. To be fair those individuals were also egotistic elitists in other realms of their life. "If there's anything around here more important than my ego, I want it caught and shot now!"
OP strikes me as someone who is trying to position himself as a "moderate" but who is mistaking a caricatured extreme (<0.25%) for "functional programmers". Honestly, I like FP extremists better than their OOP cargo-cult counterparts, who have the same smugness but are usually better, in most companies, at getting managerial blessing. In many companies, OOP acolytes glow in the managerial sun like Dudley Dursley, despite the general crappitude of their ideas.
"Functional programming" in the real world isn't about dogmatically eliminating mutable state. It's about managing it. It's about modularity. It's not far from the original vision behind OOP-- before it got taken away by the wrong crowd-- which was that when complexity is inevitable, it should be encapsulated behind simpler interfaces. For example, SQL is append-only (as functional as one can get over a growing knowledge base) if you never use DELETE or UPDATE. The inner guts of SQL are not "purely functional"-- caching and indexing cause internal state changes, for performance reasons-- but the interface is.
I don't even know what endofunctors are. If it's relevant to my job, I'll learn about them, but I care more about writing good software than reciting category-theory arcana. I'm sure it's important (especially as an inspiration for design) for some people, but you don't need to know them to do functional programming. Not even close. And I was already a productive functional programmer (in Ocaml and Lisp) before I even knew what monads were.
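To sketch that append-only idea (the Event type here is made up for illustration): state becomes a growing log of facts, and every query is a pure function over the log.

    data Event = Deposited Int | Withdrew Int

    -- queries are pure folds over the log; no fact is mutated or deleted
    balance :: [Event] -> Int
    balance = foldr step 0
      where step (Deposited n) acc = acc + n
            step (Withdrew  n) acc = acc - n

    ledger :: [Event]
    ledger = [Deposited 100, Withdrew 30, Deposited 5]
    -- balance ledger == 75; "inserting" is just consing a new event

Caching and indexing can live behind balance without changing its pure interface - which is the SQL point above in miniature.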
That's the thing: FP is not only a great model for writing CRUD apps or any kind of data-processing system; things like Haskell's type system also make it incredibly powerful for "hacking away" at code while controlling complexity. And both are things that FP extremists don't seem to like to promote. The "scientific" culture seems to revolve around using it for numerical computations, while trying to get as close to doing "formal proofs" as possible.
The problem is that the best, most expressive functional languages (lookin at ya Haskell) are mathematically based on this theory. If you want to use these languages, you have a choice between (a) learning the theory, and (b) not fully understanding the tool you're using. Both have significant downsides from a UI perspective.
Learn You a Haskell is a classic case of (b). As a tutorial it's completely successful, and how? By telling you that you can learn Haskell without learning PL theory. It also neglects to tell you how to debug you a Haskell (without learning PL theory). But once you type in the examples, I'm sure you can figure it out.
A lot of people are smart enough to program on top of mysterious black boxes. That is: a lot of people are smart enough to work past this UI problem. Moreover, the smartest people can learn Haskell by rote and then gradually, through intuition and experience, learn to grok the black box. This is not to say, I think, that it's not a problem.
Of course you could just learn the theory. I'd say anyone who can graduate with a math degree from an Ivy-class university can do it. Is you a member of this set? Find out by reading the Wikipedia page for Hindley-Milner:
I'm sure there are simpler explanations of H-M (Haskell's type inference algorithm). But they're not on Wikipedia. I sent this link to a friend of mine with a PhD in astrophysics from Caltech. "Which seems easier to you?" I asked. "This, or special relativity?" You can probably imagine his response.
So, wanted: a higher-order typed functional language whose type system is easier to understand than special relativity. Or of course a proof that such a device is impossible :-)
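In fairness, the practical face of H-M is friendlier than the notation suggests: it silently infers the most general type with no annotations at all. A small sketch (the types in comments are what GHCi reports, up to renaming):

    compose f g x = f (g x)
    -- inferred: compose :: (b -> c) -> (a -> b) -> a -> c

    twice f = f . f
    -- inferred: twice :: (a -> a) -> a -> a

The Wikipedia page is the formal account of how the compiler reaches those answers; using them requires none of it.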
Do you think H-M, which can be presented in full in a 60-minute functional programming lecture, is harder than a whole subfield of physics? I don't think so. Astrophysicists deal with large objects every day, and there might be a reason why they find relativity easier.
BTW: debugging is much less needed in Haskell, so this is not a strong objection to LYAH.
Special relativity is a few formulas, not a "whole subfield of physics." But yes - I didn't really regard this as a serious scientific criticism.
I can present LR parsing in a lecture as well. However, if you had to understand LR parsing to write an HTML 5 app, there would be a lot fewer HTML 5 apps.
By "debugging" I didn't just mean tracing. I meant one of the most basic tasks of the programmer - interpreting and responding to error messages. Is there a single error message shown or explained in LYAH? If so, I missed it.
This is germane because, when a type system works perfectly, it appears magic by definition. When it issues you an error, however, you have to understand what went wrong. It is sometimes possible to do this without understanding the black box, but the skill is again recondite.
I refuse to believe that Haskell programmers are so godly that their programs work the first time they're typed in.
I think "special relativity is a few formulas" is an underestimation, but this is not important.
> By "debugging" I didn't just mean tracing. I meant one of the most basic tasks of the programmer - interpreting and responding to error messages. Is there a single error message shown or explained in LYAH? If so, I missed it.
I could code comfortably in Pascal long before I knew exactly how "expecting a semicolon, found a number" arises. I could code comfortably in Haskell many months before learning HM as well. You know what to do when you see "no instance for Num Char" just as well as you know what to do with that parsing error. No need to think about unification or parsing; a kneejerk reaction is enough.
> I refuse to believe that Haskell programmers are so godly that their programs work the first time they're typed in.
Of course that's false -- but once a Haskell program is typed and compiled, there are much larger chances that it will work first time compared to, say, Java.
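To ground the "kneejerk reaction" point, here is a minimal program that triggers the very error mentioned above (exact wording varies by GHC version):

    import Data.Char (ord)

    bad :: Char
    bad = 'a' + 1
    -- rejected at compile time; GHC reports, roughly:
    --   No instance for (Num Char) arising from a use of '+'

    -- the kneejerk fix needs no unification theory: convert first
    good :: Int
    good = ord 'a' + 1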
> I could code comfortably in Haskell many months before learning HM as well.
This is a better argument for you than for Haskell. I wasn't disputing that there's a set of people who can intuitively command an incredibly recondite black box they don't (yet) understand - or that that set includes you. What I will argue is that that set is small, making it very hard to produce a critical mass of Haskell users despite the tremendous academic subsidies Haskell has received.
Imagine you're a helicopter pilot. That it's easy for you to fly a helicopter doesn't mean it's easy to fly helicopters. If we compare the set of people cognitively able to use PHP, to the set cognitively able to use Haskell, I don't think we're far from comparing car drivers to helicopter pilots.
> Of course that's false -- but once a Haskell program is typed and compiled, there are much larger chances that it will work first time compared to, say, Java.
Indeed. (I'm not just agreeing for rhetorical purposes - this really is true.) Inasmuch as the programmer isn't perfect, however, he spends his time chasing not runtime errors - but static type errors.
I agree that using Haskell is like flying a helicopter when others use cars. However, I still strongly disagree with the choice you wrote earlier:
> If you want to use these languages, you have a choice between (a) learning the theory, and (b) not fully understanding the tool you're using.
Please bear in mind that HM is only the basic idea of how Haskell's type system works. The current inner workings of GHC type checking are described in an 80-page research paper on OutsideIn(X), http://www.haskell.org/haskellwiki/Simonpj/Talk:OutsideIn. I never read more than half a page of it. Yet I can use GADTs, type families, existentials, rank-2 types and so on with no trouble. I don't think Haskellers who did not read those 80 pages are "not fully understanding the tool they're using".
Why should I know theory - OutsideIn(X), HM, category theory etc., to "fully understand" the tool? Intuitive understanding gained by practice is enough.
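As a toy illustration of that claim, here is the kind of GADT use that requires no reading of OutsideIn(X): the types rule out ill-formed expressions, and the compiler refines types at each pattern match.

    {-# LANGUAGE GADTs #-}

    data Expr a where
      IntE  :: Int  -> Expr Int
      BoolE :: Bool -> Expr Bool
      Add   :: Expr Int  -> Expr Int -> Expr Int
      If    :: Expr Bool -> Expr a   -> Expr a -> Expr a

    -- matching on IntE teaches the compiler that a ~ Int, and so on
    eval :: Expr a -> a
    eval (IntE n)   = n
    eval (BoolE b)  = b
    eval (Add x y)  = eval x + eval y
    eval (If c t e) = if eval c then eval t else eval e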
Intuitive understanding gained by practice is enough for you. If Haskell didn't exist, you could probably invent it. The result might even be better.
Neighbor, how much code is there in the world? And how many coders as good as you? Divide these numbers, and you'll see why the world can't possibly adopt Haskell.
Some people can climb Half Dome with their fingernails. That doesn't mean there shouldn't be a fixed rope, too - that is, if your goal is to maximize the climbers of Half Dome. (If its goal is to separate the men from the boys, Haskell is already doing just fine and shouldn't change a thing.)
I think intuitive understanding gained by practice is enough for every programmer, not just me. Learning by programming is easier than reading research papers and digging through the theory. What's more, it was a prerequisite for me: I could not understand papers on type systems or category theory before I saw Haskell code. I have often heard similar accounts - Haskellers learn Haskell first, become comfortable coding, and only then are able to read about the theory. Theory seems hard, dull and useless at first. Even now, I feel a lot of it is cruft.
The semi-official motto of Haskell is "avoid success at all costs", not world domination. It is enough for it to be a tool for hackers, not for everyone. Haskell expands extremely slowly, yet steadily.
> If we compare the set of people cognitively able to use PHP, to the set cognitively able to use Haskell...
These may be very different meanings of "use."
Using PHP, Java, etc. involves a fair bit of purely mechanical (think "human compiler") labor - cranking out boilerplate, and solving the same idiotic non-problems (all "patterns" spoken of by programmer types) again and again and again. Both of these are things that the average programmer can get quite good at. He will think of himself as a master craftsman. But what he really is: is a valve turner on a Newcomen steam engine. (http://www.loper-os.org/?p=388)
If Haskell were to be rid of all of the shortcomings you have described, the masses would still avoid it for the above reason. Most programming work - particularly paid work, that an ordinary person can reliably get hired to do - is of the idiotic makework variety. So a language which does not supply a steady stream of such makework (e.g. Common Lisp) will stay obscure.
If we were to banish the automatable makework, only the intrinsic complexity of intrinsically-complex problems would remain. And we would need about the same number of computer programmers as we need neurosurgeons. (Fewer, because there are many meat bags and they are always diseased. But engineering problems, once solved, stay solved - or they would, in a sane world.)
It appears that SR and HM both benefit from great teachers. I learned HM in a couple weeks of an undergraduate course, and that knowledge has continued to pay off. A good introduction can be found in Krishnamurthi's free book PLAI in the chapters on types. http://www.cs.brown.edu/~sk/Publications/Books/ProgLangs/200...
Every language is based on a huge mass of things which might fall under "fully understanding the tool you're using". Do you have a sharper distinction separating type inference algorithms from other things like calling conventions or garbage collection algorithms that languages are presumably allowed to sweep under the rug?
Type inference in general seems particularly benign as far as black boxes go - it is somewhat user visible, but you can limit your exposure to knowing where to put explicit type annotations.
The plain Hindley-Milner type system is particularly well behaved, in that the inference algorithms for it are guaranteed to work if there is any valid way at all to assign types. Combine that with type erasure guaranteeing that the exact types assigned can't affect program behavior, and the details of inference are doubly encapsulated.
Did you ask your friend if it actually looked non-trivial?
I am not particularly fond of GC either, but at least it is semantically opaque to the programmer (at least in theory).
Optimizing compilers run large numbers of algorithmically complex optimizations that are equally opaque. The abstraction is actually abstract.
If you're going to understand why the compiler rejected the program, you have to understand the reasoning process of the compiler. Understanding the constraints on the result is not enough - simply because the programmer, to be sure his or her program will work, has to follow the same algorithm as the compiler, or at least some equivalent calculation, in his or her own dumb head. This statement is not specific to Haskell, but true for all languages everywhere.
That for many individuals this is doable, with or without a formal understanding of PL theory, is true. Even for these individuals, I assert that it remains a heavy cognitive load. And many, probably most, are simply unable to lift it.
My general sense of the conventional wisdom is that Haskell has the reputation of being mathematically deep and difficult. It matters not at all whether this is a true or accurate perception, so long as it is indeed perceived. And indeed, I see plenty of references to this conventional perception on this very same thread.
The difficulty is that you can hardly ever get people to talk publicly about being intimidated by Haskell - because they feel like it's equivalent to admitting that they're stupid. Worse yet, that hypothesis is by no means precluded.
In the real practical world, you simply can't get things done without relying extensively on your poor fellow human beings who happen through no fault of their own to have been born with IQs of 0x7f or less. My theory predicts that Haskell usability among this demographic should be effectively nil. I believe the broadly perceived reality backs me up, and I welcome alternative interpretations of said reality.
Special Relativity is easier to understand than General Relativity. On a systems-category/wicked-abstract level it's like "Macroevolution" and "Microevolution".
Special relativity is like how if a car going 50 mph east collides with a car going 50 mph west it's the same situation as a car going 100mph hitting a stationary wall. When a train is moving at 35 mph and you're driving 50 mph in the same direction and you're passing it at 15 mph, that's special relativity.
Everyone knew about 'special relativity'; Galileo wrote about it, and it was a label put on something really everyone always knew. This is similar to how everyone knows about "microevolution": farmers and herdsmen have been manipulating animal and plant DNA for centuries. "Macroevolution", like general relativity, required a global perspective.
Einstein wrote about General Relativity for the first time ~ 1907, it is the idea that everything that moves moves at a rate that is relative to the speed of light.
General relativity has the implications for the clocks and the twins, where one goes off in a rocket ship and comes back and the Earth twin is so old. Einstein's theory had its basis in differential geometry and was verified by Arthur Eddington's snapshots of starlight that bent as it passed through the Sun's gravitational field in 1919. The comment you're replying to had the two topics switched around.
That's not special relativity, that's Newtonian dynamics. Special relativity does have the implications for the clocks and the twin paradox, and general relativity is roughly special relativity plus gravity.
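To make the correction concrete: in Newtonian (Galilean) mechanics velocities simply add, while special relativity replaces that with a composition law (in LaTeX notation):

    w_{\text{Galilean}} = u + v,
    \qquad
    w_{\text{SR}} = \frac{u + v}{1 + uv/c^{2}}

At 50 mph the correction term uv/c^2 is around 10^-15, which is why the car example is indistinguishable from the Newtonian answer.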
Well, that's embarrassing, you're right. I confused the historical existence of the principle of relativity and how general relativity generalizes special relativity and took them further than they actually go.
Monads are intellectually difficult at first, because it's hard to see how such disparate concepts (IO vs. List vs. Random vs. Async) so neatly follow the same pattern.
I personally find it more useful to start by "playing with" the black-box system, and then back-filling theoretical knowledge if I need to. I find that, once I understand the motivation for the problem, the mathematics itself is pretty easy to understand. Math isn't complex or hard. It's simple. The hard part is figuring out the best way to apply the tools it provides.
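A small sketch of that uniformity, using two of the disparate concepts above: the same (>>=) chaining expresses both failure (Maybe) and nondeterminism (lists).

    halveEven :: Int -> Maybe Int
    halveEven n = if even n then Just (n `div` 2) else Nothing

    chained :: Maybe Int
    chained = Just 12 >>= halveEven >>= halveEven   -- Just 3

    pairs :: [(Int, Char)]
    pairs = [1, 2] >>= \n -> "ab" >>= \c -> return (n, c)
    -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]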
Sure. I'm not saying that solving the hard problem of learning a black-box system is either impossible, or even necessarily un-fun.
I'm just saying that there is no such problem when it comes to learning, say, PHP. Or any other "townie" language.
So is the imposition of this arcane body of knowledge, "PL theory," whose DNA obviously originates not just in the academy but in the freakin' math department, and which has no relationship to any other body of knowledge commonly held or needed by the working programmer however 31337, essential to HOT-FP? Or could it be in some way dispensed with? If so, that would sure help out the mules on the way home, so to speak...
I would say that the FP people (and yes, this includes you) are solving the wrong problem. Mathematical formalism is not a substitute for understandability. Where is the mathematical proof that your doorknob turns?
No interesting property of a computer system (vs. a single algorithm) can ever be provable in the mathematical sense - because said proof, if true, can be construed as a program in its own right - and where is the proof of its correctness - which is to say, its consistency and correspondence to actual human wants? A. Perlis put this concisely: "You cannot transition from the informal to the formal by formal means."
Switching between alternate mathematical models of computation will not give us better intelligence amplifiers (what the personal computer is really meant to become.) Instead, we need systems which cut the operator's OODA loop delay as close to zero as possible: http://www.loper-os.org/?p=202
Any idiot can use modelling clay. Or a child's set of blocks. These objects have no hidden state, allow for no inconsistent states, possess no "compile" switch. One can build a computer which behaves in the same way. And no mathematical wankery need be involved.
Lustrate the squiggly crowd! Half a century of stagnation in computing is enough:
“Throughout my life I have known people who were born with silver spoons in their mouths. You know the ones: grew up in a strong community, went to good public or private schools, were able to attend a top undergraduate school like Harvard or Caltech, and then were admitted to the best graduate schools. Their success was assured, and it seemed to come easy for them. These are the people— in many, but certainly not in all cases—who end up telling the rest of us how to go about our business in computing. They figure out the theories of computation and the semantics of our languages; they define the software methodologies we must use. It’s good to have their perspective, but it’s only a perspective, not one necessarily gained by working in the trenches or watching the struggles of people grappling with strange concepts. Worse, watching their careers can discourage the rest of us, because things don’t come easy for us, and we lose as often or more often than we win. And discouragement is the beginning of failure. Sometimes people who have not had to struggle are smug and infuriating. This is my attempt to fight back. Theirs is a proud story of privilege and success. Mine is a story of disappointment and failure; I ought to be ashamed of it, and I should try to hide it. But I learned from it, and maybe you can, too.”
- Richard P. Gabriel, "A Personal Narrative: Journey to Stanford" (Patterns of Software)
Oh, stop. I'm tired of hearing about "smug" being the worst thing ever. Too easy to toss this insult around.
Instead, personally my issue with functional programming is the aversion to side effects. It seems to cause a lot of weird contortions, when the whole reason we make software in the first place is for the side effects. Programming in a functional style brings with it a lot of wonderful, powerful ideas. But that one has always struck me as unfortunate.
I find that the aversion to side effects even when working in imperative languages is one of the most useful things I've learned from functional programming.
It is true that, at one level of abstraction, we write programs for their side effects. However, it is also true that having the right side effects is extremely important - so important that a program with the wrong side effects can easily be worse than no program at all. From that perspective, a focus on minimizing, containing and controlling side effects is very valuable. And an aversion to side effects is a good way to sharpen your focus on that.
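A minimal sketch of that discipline, the pure-core/impure-shell split: keep the logic a pure function and confine the effects to the program's edge.

    import Data.Char (toUpper)

    -- pure core: trivially testable, no effects to get wrong
    shout :: String -> String
    shout = map toUpper

    -- impure shell: all IO confined to one small function
    main :: IO ()
    main = interact shout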
While it's true that the output/observable behavior is what we want out of a program, whether the internals rely on side effects, strictly speaking, isn't relevant to producing useful things for the users.
I won't deny that sometimes, a lack of side effects results in weird contortions, but there are ways around it and benefits that make up for it. Since side-effect-free functions are way easier to predict, debugging time is much shorter in FP, I find. Monads, while difficult to grok at first, are elegant ways to get around side effects in pure FP languages.
Or, just go with my personal favorite solution, use an FP language that's not pure (e.g., most Lisps). They tend to encourage side-effect-free and/or state-free programs, but do not strictly require it. This gives you the best of both (IMNSHO); 80-90% of your code is side-effect-free and easier to debug, and the side-effect code is relatively contained and straightforward (in an imperative sense).
Banning side effects like IO does create the need for odd contortions, but having fewer side effects and little or no mutable data makes error recovery and debugging so much nicer.