Hacker News
Sick of Ruby, dynamic typing, side effects, and object-oriented programming (abevoelker.com)
110 points by blakehaswell on July 2, 2014 | 94 comments


Programming is hard. A few years back, in the '90s, when most of the code in my field was still mostly structured, and bad at that, a lot of people were saying that OOP would sort it out. I was skeptical, not because I'm resistant to change, but because it was obvious to me that doing OOP right was (and is) very hard. Not that structured programming was easy.

Fast forward to today: programming is still hard, and actually it probably got a lot harder. OOP did not sort it out. Most of the code in my field is object-oriented, and bad at that. A lot of people are saying that FP will sort it out. I am skeptical, not because I'm resistant to change, but because it is obvious to me that doing FP right is (and will be) very hard. Not that object oriented programming is easy.

Am I alone in thinking that if we fast-forward a few years, once there is enough rotten FP code written, we will be reading about people ditching FP because it's the root of all evil?

The fact is that programming is hard. Working with legacy code is hard. Learning a paradigm well enough that the code you write in it is not total crap is very hard, and requires years of practical experience even if you are proficient in another paradigm, let alone if you simply skimmed a paradigm and moved on because it was too hard...

It's great that people want to move on from single-platform, single-paradigm monocultures, with one caveat: breadth without depth is shallowness.

I'd like to read people treating languages and platforms as tools and not as cargo cults. You don't read carpenters writing they'll ditch hammers for screwdrivers because the old cupboard they are fixing uses nails. You read carpenters debating the pros and cons of using hammers versus screwdrivers. And you read better carpenters that debate how cupboards are designed, because it is ultimately more important than whether they are glued, nailed or screwed.


Some changes are real progress:

  - automatic memory management by default
  - testing by default
  - distributed version control by default
All of these make you more productive in a typical project. (And they should be possible to turn off when they don't; that's why I say "by default".) I suggest that the following are also unalloyed goods, and will make their way into more and more languages over time (a short sketch of the last three follows below):

  - functions as first class values
  - immutability by default / copy-on-write semantics by default
  - side-effects only possible when declared (e.g. the IO monad)
  - powerful static typing by default
(We already see in e.g. git and some file systems that even when the implementation is imperative, immutability makes concepts simpler. Immutability by default virtually requires automatic memory management.)
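To make the last three concrete, here is a minimal Haskell sketch; the names are made up for illustration, and it's only meant to show where each property shows up in the code:

  doubleAll :: [Int] -> [Int]
  doubleAll = map (* 2)          -- (* 2) is an ordinary first-class value;
                                 -- the input list is never mutated
  
  -- A side-effecting action declares that fact in its type:
  -- the IO in the signature is the declaration.
  reportLength :: [Int] -> IO ()
  reportLength xs = putStrLn ("length: " ++ show (length xs))
  
  main :: IO ()
  main = reportLength (doubleAll [1, 2, 3])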


I agree that FP will not be the solution to all problems, just like OO wasn't, but what FP does is force the developer to think hard in every situation. There are fewer shortcuts that can lead to broken abstractions and hard-to-find errors.

On the other hand, with OO you are always just a few more lines of code away from shipping. It's popular because it provides easy abstractions over data (rather than operations), but as we all know it causes unnecessary coupling and broken abstractions if you aren't careful. Imperative programming (not just OO) presents a lot of problems for concurrency, which is perhaps the biggest problem in the coming decade.

OO can be done right, but when it is, it's just as "hard" as FP. In fact, a lot of good OO code is functional in nature.

What is tempting about FP is that it will make programming hard enough that bad programs won't ship. Proponents of FP think that programming should be hard: that it should require a lot of thought up front, for every line of code.

In a good language and framework, a complete program is a good program. This idea is likely not very welcome in an industry where deadlines are always more important than quality.


What I 'love' about object-oriented programming is that every other year there is a new pattern or paradigm that is shoved down our throats by opinion leaders and self-proclaimed OO-gurus. These architectural patterns all make sense on the surface but by the time people try to apply them in real life and realize that they are mostly a load of horse shit, the instigators have already moved on to the next "big thing". And this pattern is repeated ad infinitum.


The problem is that the patterns being discussed (or patterns in general; this is far from limited to OO) are applied by teams of dedicated developers combing the code meticulously, generally with a near-limitless budget.

You read the stories about how LinkedIn, Facebook, or LMAX did it, and you dream of applying that.

That won't work. Those companies decided to go with their current infrastructure after their previous one failed miserably and threatened their billion-dollar core business.

Real life for most developers is a lot duller. Very often you will have barely enough time to do what needs to be done. Consistency and code gardening are difficult to justify a budget for until the house is on fire. No matter how good the pattern you use, the code will be shit if you don't have time to maintain it properly.


Any specific examples?

The patterns I know of, e.g. the Gamma/Beck/etc. "Design Patterns", are pragmatic and useful solutions to common architectural problems.

And no, they are not "only for languages without first class support for some features", as some think. Or, actually, some of them are, but others are useful regardless of language. Heck, a lot of them came from Smalltalk, a language which is, expressivity-wise, miles ahead of "modern" languages like Go or Java.

That some people abuse them is not an inherent problem with them. Other people abuse macros, or gotos, or functions (the 100,000-line function monstrosity), etc.


This can be called "The Professor Harold Hill problem", named for the character from "The Music Man".


> I'd like to read people treating languages and platforms as tools and not as cargo cults. You don't read carpenters writing they'll ditch hammers for screwdrivers because the old cupboard they are fixing uses nails. You read carpenters debating the pros and cons of using hammers versus screwdrivers. And you read better carpenters that debate how cupboards are designed, because it is ultimately more important than whether they are glued, nailed or screwed.

The problem with this carpenter analogy is that it simply doesn't scale - a carpenter is not going to build a skyscraper. To build skyscrapers we need engineering - the practical application of science - something which is completely missing from our field. We call ourselves software engineers, but we're really software masons - we can do good work in small quantities, but we're terrible at building big structures - which is where many of our software problems lie.

You can't build a house on some land, and later turn it into a skyscraper either - because the foundations are perhaps the most vital part of the structure - they need to be designed with some knowledge of the size, shape and mass of the structure they intend to support. The approach taken in software development is the tacking on of new systems - building structures equivalent to say, the Kowloon walled city (unsightly and unstable).

To do engineering and science, we need math - and we have no sound way of modeling imperative languages/programs in ways that makes them useful as mathematical concepts. FP is math - so building up concepts in these languages is providing a richer set of math objects and abstractions which we can later use to build our large structures. Using FP doesn't mean we need to abandon imperative coding - it just means we should clearly mark the effects of such imperative chunks of code, so we can treat them mathematically.

Carpenters are still relevant in the construction of skyscrapers - but their responsibilities are only a small part of it - they're given clearly defined boundaries of when and where they should be working. This is really how we should be doing software - we need engineers and architects to build structures, using science and math, then assigning isolated environments for the "masons" (e.g, junior programmers) to work in - in such a way that a mistake by a junior programmer cannot bring the entire skyscraper crashing down. (Turns out this was understood in the 70s, because Unix pipes and processes are still the best approach we have to this day).

Of course, this doesn't mean it should be "my Coq is better than your Twelf" - we need to separate the math from the textual representations and even the execution models.


I think your post mostly reinforces the OP's claim that FP won't save the day. To quote the OP, "it is obvious to me that doing FP right is (and will be) very hard."

Insofar as your claims are true and germane to FP, they merely reinforce this critique. After all, doing math right is very hard.

> and we have no sound way of modeling imperative languages/programs in ways that makes them useful as mathematical concepts.

This simply isn't true. There exist sound logics of imperative programs (Hoare logic and separation logic, for example), which can be used to explore mathematical concepts.

This also presupposes that imperative programs themselves (and machine models more generally) are not an interesting mathematical concept.

> FP is math

It's entirely unclear what this means, if anything. You probably mean to say that certain functional languages correspond to certain logics. But that's not terribly meaningful; you can establish similar correspondences with imperative languages.

And even then, logic is math does not imply that math is (just) logic. So even if you're granted this point, the rest of your argument doesn't follow.

From a more empirical perspective, plenty of mathematicians do great math without knowing anything about FP, and plenty of FP programmers write a ton of code without ever doing interesting math.

Finally, the vast majority of very mathematically informed programming is still done in languages like C and Python and Java. So I'm highly suspicious of the claim that we need programming languages which are close to foundations in order to do mathematically informed programming.


It's an analogy, not an isomorphism.


>> I'd like to read people treating languages and platforms as tools and not as cargo cults.

I really like how you put that, it basically summarizes my own opinions on programming languages and development methodologies, in more or less the most concise way I can imagine ;-)

I'm totally oblivious to FP languages, but from CS theory I remember they are not always fun and joy to work with at all, at least not for a sizable class of practical (real-world) problems, and often require you to build these crazy hard-to-follow mathematical abstractions/contortions to be able to do things that are downright trivial in other languages. Sometimes you actually want to have mutable state and the problem you are modelling does require you to allow side-effects or explicit synchronization.

From my years of experience with programming languages my conclusion is that whatever paradigm you can come up with, some problems will be hard, and some will be easy, but no matter what, you will still need to know what you are doing and tread carefully. IMO the best language is not one that is 'safe' or 'strictly [insert programming paradigm here]', but one that allows you to do whatever you like but at least provides you with the tools to 'do it right (tm)'. From there it's all up to the developer to actually use the available tools correctly.

This may be a little unsympathetic to inexperienced developers, but I don't believe in programming languages that are supposed to make programming 'easier' or 'more accessible'. Allowing you to write safer code is invaluable, but IMO it should be up to the developer to ensure he/she uses the tool correctly.


> I am skeptical, not because I'm resistant to change, but because it is obvious to me that doing FP right is (and will be) very hard.

Doing FP right is hard, but not equally hard for everyone. Library writers have the hardest job, because FP (especially pure, statically typed FP) forces you to plan ahead of time instead of cobbling things together. The reward of doing this is the ability to make very robust, stable, easy-to-use domain-specific languages that make it hard even for novice programmers to screw up.


Quite right; for many computing tasks OO is pointless and adds extra overhead and complexity.

Today a lot of IT is still: take a set of inputs, perform some operation on them, and output the result.


The title "I'm sick of object-oriented programming" is misleading. The author is sick of their work with Ruby and the fact that it is OO is one amongst several complaints.

I use a mixture of OOP and FP day to day (mostly with the Java/Scala/Clojure family) and have to say that both have their place. In large projects I appreciate OO design patterns for clarity and flexibility (though maybe that is just because it is what I'm used to), and FP's mandate on immutability for the same.

Finally, I have grown to wholeheartedly share the author's dislike of dynamic typing. I find Scala, rather than its more "pure" FP cousins Clojure and Haskell, to provide the most productive balance of the above.

Anyone else like me: tried both and ended up walking the middle road?


I used to code a lot of Python earlier. After some time I started learning Haskell, Scala and Clojure. Finally I ended up sticking with Haskell because of its strong type system and the ability to reason about the code just by looking at the types. And now, when I sometimes write code in a dynamic language, I must say that I write it more neatly.


Ok, I get it. A mature Ruby codebase sucks. Integer division changes are surprising. But what does it have to do with object-orientedness?

> break functionality into lots of small objects
> use immutable objects as much as possible (e.g. using thin veneers over primitives or adamantium)

are the guidelines that I'm using in C#.

> separate business logic into collections of functions that act on said objects (service objects)
> minimize mutation and side effects to as few places as possible

How does separating out functions minimize mutation?

> thoroughly document expected type arguments for object instantiation and method invocation with unit tests (mimicry of a static type system)

Yet another argument for static typing...


The point is that OOP was introduced so people could continue coding with globals (i.e. state), where the globals now live inside different namespaced containers instead of the global container.

The effect is still the same though, your code is just the same old Rube Goldberg machine.


Very few of their arguments seem to relate to OOP. One of the nastiest gotchas there is that importing a library can change how division works - or, in the general case, that which code has run before my code can affect how my code executes.

Plenty of Object-Oriented Languages do not allow you to rewrite existing code in other modules like this. In fact, most of them don't. Encapsulation is a hallmark of OOP, and it's typically non-OOP languages that allow you to do things like this. Just because Ruby says to you "I'm OOP", and you don't like a feature of Ruby, does not mean you do not like OOP!

Rust has a nice solution to this problem: you can add methods to an existing type, such as integers, but those additions only apply within the scope of code where you bring them in. They can't bleed out into all the code that runs after the patching.


A lot of people are confusing usage of immutable types and pure functions with pure functional programming.


The problems exposed by languages like Ruby are not necessarily problems with dynamic types. For example, you can do reasoning about purity, side effects and whatnot in languages of the LISP family as well. Clojure, for example, is much, much saner than Ruby on all the points listed by TFA. Static typing, for example, is by definition anti-modularity and anti-adaptability - and note that I prefer static typing over dynamic typing, and Scala over Clojure. On the issue of static versus dynamic, one has to view this as a different school of thought and apply one or the other depending on the needs of the project.

Uncontrolled side-effects are the real issue behind most of the accidental complexity that we are seeing. We all badly need to adopt more abstractions and techniques from functional programming.

Also, changing languages or idioms doesn't necessarily help with the exposed problems. We also need a change of mentality in how we are doing software development. Let's face it, when we need to do something right now, urgent, that should have been done yesterday - no matter the language, no matter the abstractions or idioms involved, we are bound to do stupid shit - because there's accidental complexity and then there's inherent complexity, and nothing saves you from inherent complexity other than thinking really well about the problem at hand and splitting it into simpler, more manageable parts.

This is also why TDD is a failure and complete bullshit in how it is advertised. Tests don't save you from doing stupid shit. Tests don't tell you whether your architecture is any good, they only tell you if your architecture is testable. Tests don't prove the absence of bugs, they only prove their presence. Tests only tell if you reached a desired target, not what that target should be. And perhaps most importantly since this is touching the core of their purpose, when uncontrolled side-effects are happening in your system, tests are a poor safety net - anybody that had to deal with concurrency issues can attest to that.

Agile methodologies are also trying to paint a turd. Yes, we should deploy or publish as soon as we've got something to publish. We should pivot a lot. We should communicate more with the end-users or within the team. And so on and so forth. But it's an indisputable fact that some problems are hard enough that they can't be solved by puking code and tests in a matter of hours or days, or by adding more people to the team.


> static typing is for example by definition anti-modularity and anti-adaptability

This is so painfully wrong. Static types are not in the way of modularity, and type-correct programs are not hard to adapt into other type-correct programs.

I would say it's the exact opposite. Types are perfectly modular and compose beautifully. Adapting your program's design in a type safe environment is a breeze, because your compiler/types more often than not tell you exactly what you can and cannot do.


No, I'm right, because people smarter than me have said it. Here's the quintessence of this argument: https://en.wikipedia.org/wiki/Expression_problem

And also, imagine trying to build a system that works like our body does, particularly fascinating is the process of wound healing: https://en.wikipedia.org/wiki/Wound_healing

You know, there's a reason for why Akka's actors, a library built for a fairly static and expressive language (Scala), are dynamically typed. Try finding out why that is.


The existence of the Expression Problem is orthogonal to whether or not types promote modularity. In fact it has a type-safe solution. I recommend reading through the original post Wadler wrote to get a better sense of what it is saying: http://homepages.inf.ed.ac.uk/wadler/papers/expression/expre...

(There is also a link at the bottom of the Wikipedia article you linked to.)
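As a compressed, hypothetical sketch of the type-class side of that solution (a toy expression language, not Wadler's exact code), both new cases and new operations can be added in later modules without touching what already exists:

  -- Each case of the expression language is its own type...
  newtype Lit = Lit Int
  data Add a b = Add a b
  
  -- ...and each operation is its own type class.
  class Eval e where
    eval :: e -> Int
  
  instance Eval Lit where
    eval (Lit n) = n
  
  instance (Eval a, Eval b) => Eval (Add a b) where
    eval (Add x y) = eval x + eval y
  
  -- A later module adds a new case (Neg) and a new operation (render)
  -- without modifying or recompiling anything above.
  newtype Neg a = Neg a
  
  instance Eval a => Eval (Neg a) where
    eval (Neg x) = negate (eval x)
  
  class Render e where
    render :: e -> String
  
  instance Render Lit where
    render (Lit n) = show n
  
  instance (Render a, Render b) => Render (Add a b) where
    render (Add x y) = render x ++ " + " ++ render y
  
  example :: Int
  example = eval (Add (Lit 1) (Neg (Lit 2)))   -- -1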


> No, I'm right, because people smarter than me have said it.

http://en.wikipedia.org/wiki/Argument_from_authority


One can imagine the immune system being a kind of type checker for the body. Is it "static" or "dynamic?" Both! The "innate" immune system is basically fixed at DNA compile-time, while the evolutionarily later "adaptive" immune system is updated at runtime by exposure to pathogens.

I think it's unfortunate that we conflate strong explicit typing with static compilation. Common Lisp, for example, can be seen as statically typed; it can give you compile-time type errors. But you can also recompile at runtime; add new types at runtime; and so on.


Exactly - that's what I wanted to express, though I forgot to mention that it's not a black-and-white issue. I mentioned the healing process as fascinating because the cells act as independent actors: when the wound happens, they first start arguing against each other, but then they start cooperating to reduce the wound, to provide cover for the new tissue and to form the new tissue. We are far from building systems as smart as the ones the process of evolution could build.

Ruby is more dynamic than other languages, and that's in a bad sense. I always get a kick out of thinking about the purpose of the "class" keyword, which is to open the class context or create the class as a side effect if it doesn't exist. Ruby is built for runtime mutation of types - which is good in certain contexts, but unfortunately you cannot scope those mutations, leading to the ultimate side-effecting hairball if you're not careful about both your code and other people's libraries. It would be useful to be able to, say, modify the String type or import a library, but only for this block of code.


Notice how the very definition you give of the Expression Problem has the constraint of "retaining static type safety"?

It doesn't care about dynamic solutions, and they don't capture as much information anyway. Of course you can express anything in a dynamic fashion, in the end assembly is untyped too.

So, if those indeed were "people smarter than you", you didn't understand what they were saying.


I don't get the misunderstanding we are having here. I was claiming that static typing has problems with modularity and adaptability, one reason being the expression problem.

You're then saying that the expression problem wouldn't exist if dynamic typing is allowed. Well, that was my whole point.


Akka will have typed actor support in future versions. There are obvious reasons why a direct port of code that was originally written for an untyped language (Erlang) would be written in an untyped way.


Akka has typed channels right now, as a compromise for people that do want static typing. But go ask its designers about it on its mailing list.


> No, I'm right, because people smarter than me have said it.

Wow. Just... wow. Seriously? No! The expression problem can be solved in Haskell in a straightforward manner using type classes[0].

> You know, there's a reason for why Akka's actors, a library built for a fairly static and expressive language (Scala), are dynamically typed. Try finding out why that is.

Because Scala is a crappy, broken language built out of compromises with the JVM? Take a look at Haskell's distributed-process[1] library. It solves the problem that Akka punted on (namely type-safe serialization of function closures).

[0] http://paulkoerbitz.de/posts/Solving-the-Expression-Problem-...

[1] http://hackage.haskell.org/package/distributed-process


Akka has to expose the actor registry, and it has to handle network connectivity and scalability issues, which means messages have to go through actors posing as proxies and routers, and actors may be dynamically created. Also, because of the async model, actors can and must change/adapt their interface asynchronously - like when doing back-pressure and waiting for acknowledgement that a message was received on the other end, during which there can be no progress. That is what "context.become" is for, and it is fundamental to the actor model - which actually renders static typing useless.

Saying that Haskell's distributed-process solves the problems that Akka is solving is short-sighted and wrong. But then again, the main reason for why Haskell hasn't attracted me has nothing to do with technicalities and everything to do with a vocal minority of Haskell users that I find totally repulsive.


> But then again, the main reason for why Haskell hasn't attracted me has nothing to do with technicalities and everything to do with a vocal minority of Haskell users that I find totally repulsive.

This.

> The expression problem can be solved in Haskell in a straightforward manner using type classes.

Lol, WUT? No.

Btw, I think the fact that you are actually trying to lecture Scala people about typeclasses just shows how clueless you are.


> This is also why TDD is a failure and complete bullshit in how it is advertised. Tests don't save you from doing stupid shit.

Why do people think tests are for that? Tests are basically the same as a free climber's safety lines. They won't protect you from everything, and sure, they are tedious to place, but once they save your tush, you'll be glad they were there and you weren't splattered across the floor.

I've seen the lack of tests in practice and it's not pretty. Nope. Not pretty at all. Basically lots of entangled, undocumented, untested systems that you can't refactor because your refactor just broke some code somewhere.

To give an example of the bugs: if you accidentally typed your username wrong, the whole server crashed and reset. It was not pretty.


> Tests are basically same as free climbers safety lines.

I never claimed that they aren't useful.

> once they save your tush, you'll be glad they were there

Of course, I'm glad when I work on well-tested codebases. But I was speaking about the advertisement that it received. And people really do think that, because that's what TDD enthusiasts claimed - and note that I'm making a distinction between testing and TDD.

Particularly funny is this story on Sudoku solvers: http://ravimohan.blogspot.ro/2007/04/learning-from-sudoku-so...

The introduction is genius and I quote: "Ron Jeffries attempts to create a sudoku solver - here, here, here, here and here. (You really ought to read these articles. They are ummm...{cough} ...err.... enlightening.) ... Peter Norvig creates a Sudoku Solver."


I don't honestly know what people claim TDD does. But where the requirements are clearly specified (albeit changing) and the problem space isn't well understood, it has its uses.

The example given is kinda one of the worst-case scenarios for TDD. If, for example, the goal was to write a novel game that is popular with some audience, Norvig's analytical approach would falter. TDD wouldn't fare much better, but I believe it would be better than a purely analytical approach. I however have little doubt Norvig would adapt to the challenge.

Do note that by TDD I mean only the red-green-refactor methodology on clearly specified parts, plus adding tests for encountered bugs. The rest I consider fluff.

Summary: Tests and test first design is a tool. It can't be used for everything.


> Tests are basically same as free climbers safety lines.

Absolutely. So many people seem to aggressively argue for 'test all the things!' or 'test nothing!', where I imagine most jobbing programmers are practical enough to test the key things first and then expand from there. If I'm greenfielding something, I'll do TDD-first, but on new features on an old code-base or legacy apps it's initially way more just a catch in case I come off the mountain unexpectedly. I don't want to spend a day writing a feature and then find out I've broken something fundamental, I want to find that out after a couple of hours.


"you can't refactor because your refactor just broke some code somewhere" - picking a language that is not matched with first-class automated refactoring (ReSharper, IDEA) was their first mistake (rather than lack of unit testing).


You couldn't rename things because they used AspectJ to bind certain method names to some validation methods. Of course, this was well documented on the dark side of a planet orbiting Andromeda.

Also, the JavaScript used #id and suffixes to call JSP fragments. You change the name of the div in that JSP fragment and you have to change every mention of it in any JS or Java file.


Renaming stuff and pushing variables around is not refactoring. It's just some editing work.


IDE refactoring is glorified grep/sed.

On the other hand, an expressive static type system (with emphasis on expressive, because Java's type system doesn't qualify) can save you from countless accidental bugs - and a single bug that's caught by the compiler is a bug that won't reach production.

Plus, static typing approaches the problem from a different perspective than unit testing. Through testing we try to show that a piece of code conforms to the business logic being solved, and to guard ourselves against regressions. A static type system, on the other hand, actually proves that your code has certain characteristics. For example, depending on the language we are talking about, it can prove that you won't get any null pointer exceptions, or it can prove that the interface of this component is still the one expected by this other component, and so on. For example, I have a component modeled as a fairly complex FSM, and I fixed a difficult non-deterministic bug by eliminating the possibility of it happening through the type system - it was quite the eye opener.

Static typing doesn't negate the need for testing of course since they serve a different purpose, but if you find yourself writing tests for things that a compiler could prove, then you probably picked the wrong language.
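As a minimal sketch of the general idea (not my actual component - the names here are invented), phantom types can make an invalid state transition a compile error rather than a runtime surprise:

  {-# LANGUAGE EmptyDataDecls #-}
  
  -- The phantom type parameter s tracks the state of the "FSM".
  data Open
  data Closed
  
  newtype Conn s = Conn String   -- the String stands in for a real handle
  
  openConn :: String -> Conn Open
  openConn addr = Conn addr
  
  send :: Conn Open -> String -> IO ()
  send (Conn addr) msg = putStrLn ("sending to " ++ addr ++ ": " ++ msg)
  
  close :: Conn Open -> Conn Closed
  close (Conn addr) = Conn addr
  
  -- send (close (openConn "db")) "hello"
  --   is rejected at compile time: Couldn't match type 'Closed' with 'Open'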


You'd be surprised what transformations modern IDEs can do that ARE refactoring, not just renaming and pushing variables around.

Plus, you're wrong: a lot of refactoring is also about renaming stuff and pushing variables around. You don't always have to rewrite everything in new hierarchies and patterns in order to do a refactor.


Renaming, pushing variables, changing their types, stuff like that. It's still editing work. Can any IDE split the class into two according to methods' responsibilities? Can it abstract a set of functions to a single generic function? Can it get rid of unnecessary boilerplate code scattered around in various classes? Can it change data structures used to store data?

Because refactoring is about making the code simpler and more flexible. Rearranging that IDEs do is only a method (and not the only one) to achieve that.


Anyone who doubts this should watch Jim Weirich's talk "Adventures in Functional Programming" - the last (and most mind-blowing) quarter of the talk is almost entirely made up of automatic refactoring.

[1] http://vimeo.com/45140590



I think a large part of good software design involves making the implicit into the explicit (although there are always tradeoffs).

Instead of writing six different very similar functions that implicitly do the same thing but on different types, we write a single function with generic type parameters. Instead of writing a bunch of nested for loops working on a bunch of mutable state in an imperative way, we use map, filter, reduce, fold, etc to explicitly describe the transformations we are trying to accomplish. Instead of allowing our code to crash when bad inputs cause an error, we use asserts to declare pre- and post-conditions. And so forth.
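A small Haskell sketch of what that looks like (the names are invented for illustration):

  import Data.List (foldl')
  
  -- One generic function instead of several near-identical ones: the
  -- type says it works for any list whose elements can be ordered.
  largest :: Ord a => [a] -> Maybe a
  largest [] = Nothing
  largest xs = Just (maximum xs)
  
  -- Instead of a nested loop mutating an accumulator, the intent is
  -- spelled out as named transformations: keep the even numbers,
  -- square them, sum the results.
  sumOfEvenSquares :: [Int] -> Int
  sumOfEvenSquares = foldl' (+) 0 . map (\x -> x * x) . filter even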

TDD style tests have their place, but they don't do all that much to make the implicit into the explicit. If you use tests as a source of truth or a form of documentation, you are essentially saying your specification doesn't actually exist except as some weird emergent phenomenon - and then you have to take the time to disentangle the assumptions inherent in the tests from the assumptions inherent in the choices of test inputs and outputs. Tests can, by their nature, only exercise an infinitesimally small fraction of the possible combinations of inputs and state transitions that your code can go through. So tests, which serve to validate the code, need themselves to be validated. How can we even arrive at the intuition that our tests are covering some useful sample of this gigantic state-space? Metrics like code coverage and the size of a test suite are only meaningful if you assume the tests themselves are sound in the first place.


Part of the problem I see after 11 years in the business is that now I can see problems like you describe. I try to push as much work to the database as possible, not for performance (though that is a consideration) but because it is declarative, so you get fewer side effects. It's not as natural to think in sets as it is to think imperatively. But do people want to pay me more for this knowledge? Not here (in Spain). They want to pay extra for people using the latest fad technology, not people who know how to avoid problems at the design stage.


I can feel your pain :-)

On the other hand, people pay first of all for working solutions. If you keep building robust and simple solutions, your managers will see it eventually - even though many managers don't recognize design talent when they see it, for fear of having to increase your salary.

Or the incompetent ones aren't seeing it because their metric is the lines of code written, whereas a good developer always avoids writing unneeded code. This can be extended to other areas as well - people really good at solving concurrency issues, for example, avoid concurrency issues like the plague they are. If that's the case, then it's time to search for something better. I live in Romania and our situation is similar, but I discovered that I have no problems finding interesting work remotely - I stopped doing that because it's boring for me not to have colleagues nearby, and I found a small local company that's pretty cool. But yeah, you don't have to stay within the local market if you can't find something you like.


How do you go about getting remote clients? I am keen to work remotely for a few reasons. Seems that you need a portfolio, but all my work is in house (though I am working on a small app to show I can code to a decent standard).


It's not natural to think imperatively, otherwise people would have no trouble with learning programming. It's just the way we were taught to write programs.


It's generally easier than thinking in sets. Certainly my not-so-talented team leader always describes database queries in terms of "if it has this then do this". I translate these into "give me the set where".


No, it's not generally easier. You can't generalize just because the people around you, who were taught to think imperatively, are thinking imperatively. You don't know how it would be if they had been taught to think in terms of set theory. It might be the other way around, or it might not. That's the point: we don't have the data to generalize about which way of thinking is easier.


I was taught both. Who is generalising now?


And now you support your general claim by anecdotal evidence? Note that I don't make any claim apart from "it can't be generalized for now".


Ok, have you got any non-anecdotal evidence that proves it is "being taught imperatively" that makes imperative programming the "mainstream"? In 11 years I have met very few people who program functionally, and usually those people are good enough to know multiple paradigms. As far as I know most universities teach both.


Most of the universities I have seen or heard about teach C, C++, Java, Python or the like in their courses. Only a subset of them also teach functional languages (which is still quite far from declarativeness, but let's ignore this for the time being), but even those don't require functional programming throughout, and people fall back to imperative languages.

Let's go further. Most of the materials on the internet are about programming in imperative languages. Those are the most popular, including among newcomers.

I have never heard of anybody who started learning programming by learning Lisp, Haskell, OCaml, SML or any other functional language. Similarly, I haven't heard of somebody who learned one of those as his second language and got any proficiency in it.

Please provide any data that contradicts the common sense conclusion (at least common sense for me) that vast majority of people who were taught programming, were taught programming imperatively.


I inherited a 6 y/o Rails codebase and I worked on it for 6 months straight. I am usually against rewriting things, but this is like going into one of those abandoned houses and pulling off layer after layer of crap to find more termites, cockroaches, etc. Every time I had it 'working', I found all kinds of weirdness, usually to do with gems forked to the old dev's personal GitHub, which were not only old/ancient but had stuff bolted onto them so they could not be upgraded in any way.

So rewriting now... in F# (and C# where handy). I like Ruby, but for projects like this I have seen it fall over too often; most dedicated Rails companies I know deliver great projects, but they don't have to do long-term support. I would like to see how that would work out, as I see companies in the wild struggling to find devs willing to support their codebases. This is not a gripe with Ruby/Rails per se, but (in my experience!) Python programmers who do large Django projects (or other frameworks, but I encounter mostly Django) are more disciplined, so less goes haywire, and there are so many people who can do JS that you'll always find people willing to work on your crap. Ruby is in a niche but popular spot: hard to find coders, some coders are not so good, and yet enormous projects are created in it.


I get this periodically. I have to walk away and go and do something else for a bit before I go nuts. This is usually after debugging some patternitis rat's nest that is completely over-engineered.

However occasionally amongst the muck, a beautiful and elegant thing pops out and makes it all OK. This event is getting rarer for me as our product evolves though.


The older I get, and the more I watch and help programmers (including programming myself), the more I believe this is an OO/mutability problem.

We created OO, at least the C++ version, in part as a way to create boxes of code -- a reusable module system. But what happened was that we created a huge stinking pile of mutability and hidden dependencies. If I'm looking at a method in an object that takes one parameter, I literally have no freaking idea what the current state of the object is, what the current state of the parameter is or might be (and let's assume the parameter is itself an object or graph of objects). In fact, it's impossible for me to reason about what the hell I'm looking at. That's why we are forced to use the debugger so much.

Pure FP takes that all away. I have data going in, I do a transform, I have data going out. If my data is clean and my transforms are broken down enough to be understandable? It just works.

We keep trying to bolt on solutions like TDD to a fundamentally flawed model of development. Damn, I hate to say that, because I love OOA/D/P. I'm not giving it up, but my current programming practice consists of using OCaml/F# and pure functions to begin with, then "scaling up" to objects as systems get more mature. If I've got a big closure, I'm probably looking at an object. So far I've found that scaling up is not necessary. I get more mileage from composing my functions into command-line executables, a la Unix, than I do sticking everything together. But that could change.

It's right to be discouraged. There's something deeply wrong here. A big change is coming to software development.


I am simply not convinced that static typing resolves as many bugs as its proponents claim. The bugs I have are very rarely related with the type of a variable. They are incomplete/incorrect implementations of business rules. The type system doesn't solve that.

The other point of the article is that dynamic typing makes code rot. I am not convinced that static languages do any better. Code rot is not a problem of the typing system, it's a problem with the programmer writing the code, or the environment he's in. Let's see what happens when he inherits a 5-year-old Haskell codebase.

That said, I think the OP will do good in learning other languages. That always helps.


> The bugs I have are very rarely related with the type of a variable. They are incomplete/incorrect implementations of business rules. The type system doesn't solve that.

It doesn't until you encode the business rules in the type system. But in a well-designed language that's actually fairly easy.
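As a minimal sketch of what "encode the business rules in the type system" can mean - assuming a toy rule that quantities are positive and an order has at least one line item (all names invented):

  import Data.List.NonEmpty (NonEmpty)
  
  -- Keep the constructor private (don't export it), so the only way
  -- to obtain a Quantity is the smart constructor enforcing the rule.
  newtype Quantity = Quantity Int deriving Show
  
  mkQuantity :: Int -> Maybe Quantity
  mkQuantity n
    | n > 0     = Just (Quantity n)
    | otherwise = Nothing
  
  -- "An order has at least one line" lives in the type itself:
  -- a NonEmpty list cannot represent zero lines.
  newtype Order = Order (NonEmpty Quantity) deriving Show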

> The other point of the article is that dynamic typing makes code rot. I am not convinced that static languages do any better. Code rot is not a problem of the typing system, it's a problem with the programmer writing the code, or the environment he's in. Let's see what happens when he inherits a 5-year-old Haskell codebase.

Having done both, it's far easier to port an old statically-typed codebase to newer versions of its dependencies than to do the same for an old dynamically-typed codebase. Compare a Wicket upgrade (seamlessly smooth, and the compiler will catch anything you've missed) with a Rails/Django upgrade (batten down the hatches, hope you've got high test coverage, and even then you'll likely have something break).


The problems I experienced with TDD in Rails applications (it's about the only way I'm using Ruby) didn't derive from object orientation or dynamic typing. They came from time and money constraints, especially in those one-hour or one-day rushes to get some small new functionality into the code base and deliver it into production. If a company has an internal development staff, the technical debt can be repaid later on. If a company works with consultants paid by the hour, there might never be enough money to do it. An application with reasonable test coverage can quickly turn into an application with 100% broken tests and weeks of work to fix them all. But this isn't about Ruby or OO; it's about customers and the amount of money they want to spend, no matter how good you are at explaining to them what's happening to their software.

I have a Rails 3.2/Ruby 1.9.3 application that will be hard to port to Rails 4/Ruby 2.1 because of that. The point of the original post could be that writing that application in Haskell would require less work on tests so we could have less technical debt by now. Maybe. But when integration tests break (originally run with Selenium) because the UI changed so much and there are many new functionalities to test, I don't think the language can help. It's back to having enough budget.


I think it's fair to say that 'everything is an object' was always a bad idea. I'm a little surprised that it took Ruby devs that long to realise that (Java devs still seem fine though).

My favourite example is the 'Utility' module that almost every project ends up with at some point. Why would that ever be an object? In C++ you'd open up a namespace and throw a bunch of free-standing functions at it. It doesn't have to be more complicated than that. Classes are supposed to be one method of abstraction (among many others) that programming languages offer us to structure our code.

The real problem with OOP isn't objects, though. It's inheritance. Complex inheritance graphs are probably the best way to couple supposedly independent parts of your code as tightly as possible. And they're notoriously hard to wrap your head around. I guess a good example is component-based scene graphs (again, in C++). Whenever you're implementing some sort of graph, chances are you start by writing a class called 'Node'. That's fantastic as long as you stop the chain there. Each individual object in your scene should be a subclass of Node that has any number of independent components (mesh component, audio component, AI component, what have you) attached to it. Favoring composition over inheritance is always a good idea as far as I'm concerned, and I'm happy to see languages like Rust adopting it.
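My example is C++, but roughly the same shape sketched in Haskell, with invented component names - a node has components, and nothing deepens the inheritance chain:

  -- Hypothetical component types, for illustration only.
  data Mesh  = Mesh  { meshFile  :: FilePath }
  data Audio = Audio { audioClip :: FilePath }
  data AI    = AI    { behaviour :: String }
  
  -- A node *has* components rather than being one of many subclasses.
  data Node = Node
    { name     :: String
    , mesh     :: Maybe Mesh
    , audio    :: Maybe Audio
    , ai       :: Maybe AI
    , children :: [Node]
    }
  
  crate :: Node
  crate = Node { name = "crate"
               , mesh = Just (Mesh "crate.obj")
               , audio = Nothing
               , ai = Nothing
               , children = [] }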

I don't want to get into the whole TDD thing right now. All I'm gonna say is that assuming your code didn't break anything because all the tests passed is a risky business, and testing JavaScript UIs might not be as useful as you think. Having said that, Cinder (http://libcinder.org) had a bug in its matrix multiplication code not so long ago that could easily have been detected with unit tests.


He's inherited several Rails codebases and has to maintain them.

Sounds like the 'other peoples code' problem to me.

Just like all non-trivial abstractions are leaky (Joel), I guess all non-trivial applications are 'hairy'.


No, I have seen tons of times when one developer on a single code base built a rat's nest for themselves just by doing OOP.

Sure, the objects or classes were probably not perfectly designed, or the correct objects/classes weren't chosen to begin with, but that is the problem with OOP: you don't know if the main building blocks you are using are the correct ones.


So you are saying it's impossible to build a rat's nest with FP? It's a problem with the programmer, not with the tool.


Uh? Where did I write that? Why are you making up what I wrote?

I guess your assumptions are in line with the OOP programming style... ;-)

You can make a rat's nest out of any coding paradigm; however, I would argue it is easier to do with OOP.


I was sick of Object Oriented programming the first moment I had to deal with an API built with OO that replaced a previous API without it.

It was the WordPerfect 6 something language, replacing the WP5.1 macro language.

I had built a perfectly reasonable autocorrection engine; it handled typos, misplaced commas and periods, adding accents, and a few things more.

The Object Oriented macro language in WP6 removed any possibility to make my macros work, in exchange for adding silly graphic buttons.

Now I like C++ and D, but they are not 'pure OO'. Therefore I hate Java.


Most of the points he makes are a byproduct of expecting too much from other people (e.g. gem authors), or of misunderstanding why something exists (e.g. the Ruby standard library, which is a motley crew of modules meant to let people write hacky scripts a la Perl). This is why I'm getting away from magic/cleverness as much as possible. Clever solutions usually rely on the dark corners of a language/ecosystem, which might break in the future.


When Edsger Dijkstra pronounced "Program testing can be used to show the presence of bugs, but never to show their absence!", had your Pa met your Ma?


I suspect Rails the framework is as much to blame here as Ruby, OOP or dynamic typing.

The siren song of static typing is loud today. Robert Smallshire has given convincing -- and even evidence-based -- arguments that it increases development time while catching only a very small percentage of bugs:

http://vimeo.com/74354480


Possibly, but... having had to dig into the Chef client codebase way too often, I can say large Ruby codebases can be a special kind of hell.


Just use Scala, and you'll get to love object orientation combined with immutability and functional programming.


I almost had to stop reading when I hit the quote from @HackerNewsOnion. Does the author realize that's a parody account?


Of course OP knows. Read it again with that in mind and it makes [more] sense.


I guess we all agree that one way to get around the burden of reasoning around mutable state is restricting the language.

I find that an equally reasonable way is to be able to add new language constructs to make this reasoning far more (humanly) tractable. You can do this with functions and objects and so on, many a time, but without proper macros you may lose far too much in terms of performance (in a dynamic language).


I've never written tests for my projects. I hate writing tests because, almost certainly, as I'm creating new software, things change. I'm focused on the changing part all the time; I hate being bogged down by having to write a script that tells me everything is fine.

On a large scale project, I understand the need.


I agree with this. You (I) waste as much time trying to work out whether the test is wrong or the code is wrong. Fiddle until the bar goes green. It doesn't help you focus on the main problem. Show it to the users, and the requirements instantly change anyway.....

In some cases automated tests are helpful, but not everywhere.


Where do you draw the line between "creating new software" and a "large scale project"? Every large scale project starts as a small project, so at some point you must decide to start writing tests; and then you have the burden of backfilling 'em.


But at least you don't have to write the tests that were rendered useless due to changing requirements.


The original headline is this:

"Sick of Ruby, dynamic typing, side effects, and basically object-orientied programming"

He's sick of OOP in Ruby, not of OOP in general. Ruby's dynamic typing means you have one type, "the dynamic type". The compiler leaves you entirely on your own, as static analysis is impossible. It means many more tests, assertions, and runtime error conditions. Most OOP techniques have evolved out of, and rely on, strong static typing; Ruby isn't that kind of language.

OOP in Ruby is the inevitable result of many Ruby projects growing in scope and size. When you reach that point, a better way is to move part of the project out of Ruby and into a language that can handle that scope and size.

P.S.: Any communication with the outside world is a side effect (ex. network, file, database I/O), so sick of it or not, you better get used to it. Haskell masking side effects as monads through a language loophole isn't making things significantly better.


The point of the Haskell style is it makes it easy to control side effects. If you don't care when I/O happens, you can just do everything inside the I/O monad and pretend you're writing Ruby. But if you want to know which functions do I/O and which don't (which can help e.g. test appropriately), Haskell gives you the tools to keep track of that in a lightweight way.
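A tiny sketch of that lightweight bookkeeping (names invented; the pure part needs no IO to test):

  -- The pure core: no IO in the type, so it cannot touch files, the
  -- network, or hidden state - and it is trivial to unit test.
  summarize :: [Int] -> String
  summarize xs = show (length xs) ++ " values, total " ++ show (sum xs)
  
  -- The thin impure shell: the IO in the type is the bookkeeping.
  report :: FilePath -> IO ()
  report path = do
    contents <- readFile path
    putStrLn (summarize (map read (lines contents)))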


> P.S.: Any communication with the outside world is a side effect (ex. network, file, database I/O), so sick of it or not, you better get used to it. Haskell masking side effects as monads through a language loophole isn't making things significantly better.

Eh what? Even as a Haskell non-user, this just doesn't make any sense.

a) Haskell doesn't "mask" side effects
b) The fact that monads are involved is completely irrelevant
c) It's not a language loophole


a) side effects are "masked" in Haskell in that a function that is generic over any type of monad may or may not incur side effects at run-time, depending on which actual monad it is used with.

b) see above - because monads as a type class exist, a function may be written with a type signature that does not indicate in any way that it will be eventually used over IO.

c) the fact that the Haskell denotational semantics are entirely silent about what happens at run-time with the IO monad is definitely a loophole. From a bird's-eye view, a Haskell program is a giant expression that reduces to an object of type IO, and what happens with that object is entirely outside of the language semantics.


a-b) The objects of type IO are values, just like those of any other monad. It is true that polymorphic functions can be instantiated at type IO, just as they can at any other type. How you think this supports the assertion that side effects are "masked" is beyond me.

c) The language boundary of Haskell typically ends at the production of an IO value. You can easily give an operational semantics to many IO operations, and in fact there are papers that do this for simple IO operations such as putStr. It's not possible to give a full operational semantics for IO in any language, not just Haskell. Of course you can model console I/O, references, exceptions, and concurrency features (all of which have been done). But these aren't the only kinds of side effects. Does your model account for the network, and all its complexities? Does your model account for power failures? Does your model account for cosmic rays? Most models don't, but nobody calls them loopholes, nobody implies it's a big cover-up. Every model is going to have a boundary, and Haskell's seems good enough to me. It's no more undefined than, say, ML.


a and b don't actually matter, because what matters is how the function is called. Given some `f :: Monad m => m a -> m a`, you can only call f with IO as the monad from an already impure context. For example, you can't call f with IO from inside a pure function `g :: a -> b`.

The purpose of IO in Haskell is to explicitly mark side effects, because they cannot be arbitrarily composed in the way pure functions can. IO represents a one-way boundary in that you can turn a pure computation into an impure one (a -> IO a), but there is no way of "extracting" that computation back out of IO (i.e., there exists no (IO a -> a)). That "monads" are used to do this is useful because they provide the (a -> IO a) and happen to have a convenient function for chaining computations (IO a -> (a -> IO b) -> IO b).

How IO is defined is up to the implementation, and not in the scope of the language - different implementations could use a different representation for IO - what matters is that it must be defined in a way that one cannot define an (IO a -> a).

On "a Haskell program is a giant expression that reduces to an object of type IO", this is really nothing to do with Haskell, but a consequence of how we've built our operating systems and how the defacto meaning of "program" these days is equivalent to an "executable file". Traditionally "program" was much more abstract and could refer simply to any piece of code, such as a (pure) haskell function. We can consider any Haskell function to be a "program" in itself. If we had an environment from which we launched processes which provided all the command line switches and environment variables as arguments, we could easily omit the "IO" from our "main", if the rest of our code was pure. (On a side note, this is precisely what early versions of Haskell did, before monadic IO became practical)


Sounds like you need to spend some time with Go!


now he'll have two problems!


Using a language without generics in 2014?


As if that will solve his problems? Even Java 8 is more agile than Go.


I'm surprised those shiny new languages still can't do what was considered normal in good old BASIC. And all programmers nowadays go "Meh! BASIC. Considered harmful! Will permanently damage your brain!!!"

Come on people, FreeBASIC has really great options for object-oriented programming and could use your skills.


Fun fact:

"Since the only computer language Richard [Feynman] was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine."

http://longnow.org/essays/richard-feynman-connection-machine...


That's a really cool blog post. Makes me wonder about a lot of stuff:

What were the projects and discoveries for which the connection machines were clearly essential?

Are there connection machines that are still in operation?

What happened to the old ones?

Does Danny Hillis now keep a cluster of GPUs running somewhere instead of his old hardware?

Would connection machines be more useful now than in the past?

Is it the right time to boot up Thinking Machines 2.0?


Parallella boards and the Cray XMT are probably the closest you get to Connection Machines today.



