
Copy-pasting a comment on the original article here, since I have the same comment/question:

Brian said...

Dynamic typing doesn't restore the modularity; it simply delays checking for violations. Say module X depends upon module Y having function foo; this is a statement which is independent of static vs. dynamic checking. This means that modularity is lost: you can't have a module Y which doesn't have a function foo. If module Y doesn't have a function foo, the program is going to fail; the only questions are when and how.

What I don't get is why detecting the error early is a disadvantage. It's well known that the earlier a bug is detected, the cheaper it is to fix. And yet, all the arguments in favor of dynamic typing are about delaying the detection of bugs, implicitly increasing their cost.

6/05/2011 6:49 AM



The article's just claiming that the modules of dynamic languages can be compiled independently, not that they will be correct.

Just my thoughts: There's a bigger question here about types, though. I think the argument is that the ceremony of types gets in the way more than it helps, and it requires up-front design. Many people like dynamic languages for their flexibility; and when you get bugs, you have to fix them anyway. There's an assumption in typed languages that we can design our types well enough upfront to help solve the problem.

It's interesting how types create a dependency on the interface (i.e. if you use a type, you depend on it). One of the ideas of Abstract Data Types was to reduce dependency on the internal implementation details; but you're still dependent on the external interface. If you change the interface, it creates problems (e.g. unit testing claims to give you confidence to refactor, but if you change interfaces, you also have to change the tests...). EDIT added "claims to"

This seems intrinsic to modularity, and I can't really see any solution to it, except for ways to make it easier to cope with interface change. Some insights may come from web app APIs, where the dependency is more explicit (you have to send and receive serialized messages).


Maybe you got me closer to the answer, and you certainly seem to have more experience on this subject than I do. I am still missing some part of the picture, so let me ask:

Types: You think about the problem up-front as best you can. Most likely you later change the interface. The change breaks dependent code, and the compiler forces you to fix it.

Dynamic: You are flexible, which means you can start your work without having to think up-front (Q1: How is this good?). Later, when you change the interface (which may exist only in your mind, but it exists as long as the modules are to interact with each other at all), the code still breaks, but the compiler won't tell you.

In either case, as you said, "you have to fix them anyway". So Q2: how does the compiler being unable to complain about broken code (leaving unit tests to maybe find it) help?

One comment here cited an example where one can get away without introducing errors while still changing the type ("f(g(x))"). But that is readily achieved with statically typed languages like C++ as well.
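For illustration, here is that kind of type change in a duck-typed setting (function names are made up): as long as f only uses what g's result actually provides, g's return type can change without breaking f(g(x)). C++ templates achieve the same thing statically.

```python
def g(x):
    # g originally returned a list; it now returns a tuple.
    # f below never notices, because it only iterates and sums.
    return (x, x + 1)

def f(items):
    return sum(items)

print(f(g(10)))  # 21 either way
```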


Q1: you're assuming you can design upfront; that is, that you understand the problem, you have enough information to solve it, the world won't change, and so on.

Q2: the ceremony of types itself has a cost. You have to think about it, type it, and you may need to change it. When types ripple through layers of calls, the changes do too.

Note: I don't know the answer. I notice that dynamic languages seem to be becoming more popular (are they? or are they just more publicized?). In a rapidly changing world, it's more important to adapt now than to be perfect. There's a general trend that, because computers get faster but humans don't, more and more of the work is transferred to the computer. Dynamic languages do this in the sense that they are less efficient than static languages, on the assumption that speed is the main benefit of static languages. Certainly, the coder has less work to do. Another benefit is that they make coding accessible to less skilled people (and to more skilled people who have less time to devote to a particular task).

There are trade-offs. The first thing is to note what the trade-offs are. The second thing is to note what groups of people use programming languages and for what tasks. The third thing is to ask how those people value those trade-offs, for those specific tasks. e.g. Perhaps sometimes, a crappy, hard-to-maintain, only partially correct solution now is better than a high-quality, clear, correct solution too late?


Web APIs often have explicit versioning. Perhaps that approach could be adopted for programming, where you have the option to specify the version of a function you are using. This would enable gradual migration and adoption of new features.
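As a sketch of what per-function versioning could look like in a dynamic language (the versioned/use registry below is hypothetical, not an existing API):

```python
# Hypothetical sketch: a registry that lets call sites pin the
# version of a function they were written against, so old callers
# keep working while new ones migrate.
_registry = {}

def versioned(name, version):
    """Register the decorated function under (name, version)."""
    def decorate(fn):
        _registry[(name, version)] = fn
        return fn
    return decorate

def use(name, version):
    """Fetch the pinned version of a function."""
    return _registry[(name, version)]

@versioned("greet", 1)
def greet_v1(name):
    return "Hello " + name

@versioned("greet", 2)
def greet_v2(name, greeting="Hi"):  # v2 changed the interface
    return greeting + " " + name

greet = use("greet", 1)  # old callers pin v1 and are unaffected
```

New code opts into version 2 explicitly, so the interface change never silently breaks an existing call site.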


> If you change the interface, it creates problems (e.g. unit testing gives you confidence to refactor, but if you change interfaces, you also have to change the tests...).

Could you elaborate on this point? If you change the behavior of a dynamic module A, i.e. its implicit interface, you'd have to change its unit tests to reflect these changes anyway...


Yes, that's correct. I was addressing the selling point of unit tests, that they give you a safety net to ensure your changes don't introduce new bugs. The claim fails when the interface changes.

In my experience, interface changes happen quite often. When you prevent them from changing, you end up with the backward-compatibility hell that popular platforms like x86, Windows and Java have to maintain, and that's just for external, public interfaces.


Traditionally, academic research has been funded by NASA and the Department of Defense. In those contexts (space transport, war, etc.), the cost of a runtime failure is extremely high, counted in human lives or billions of dollars. Also, throughout most of history, software was written once, shipped, then used; a hotfix was practically impossible. This is where the adage that "the earlier a bug is detected, the cheaper it is to fix" came from.

In those contexts, this is absolutely true. If you're in space, and your software decides to vent your oxygen, you're screwed. And so it makes sense to invent all kinds of static compiler checks that try to eliminate all possible bugs. Type-checking, for example, can guarantee the elimination of an entire class of problems.

However, what if your context were different? What if your context was a web startup?

Suddenly, the cost of a runtime failure is not so high. With a simple drop-in plugin like Hoptoad, I can be notified of an error, diagnose, fix, and deploy oftentimes before the user can even email me describing what problem he/she had.

In fact, it's much worse than that. As you said, "dynamic typing doesn't restore the modularity- it simply delays checking for violations". This delayed checking for violations has extreme value in the startup context.

Take, for instance, yesterday's post: Show HN: Hacker News Instant (3 hour project) http://news.ycombinator.com/item?id=2621144 If you were one of the first to try it, it wasn't long before you realized that typing a space in your search threw the app into an infinite loop, making it unusable. But I think a simple comment in the discussion thread summed it up perfectly: "I don't think the bug makes it any less noteworthy".

The fact that the author could create a prototype -- a minimum viable product -- in just 3 hours, meant that he could post it on HN and get feedback on it. He could get an idea of whether the app was worth pursuing... or better off scrapping to pursue something else.

In the startup context, you can think of writing software as sketching. You just write enough code to convey to people what your product is and how it can help them. The code may be completely broken, calling functions that don't even exist. It doesn't matter. If no one ever tries to use some specific edge case of one specific feature (or hell, your entire product), that code will never be needed. And so, the fact that it doesn't make sense, doesn't matter.

By checking for as many kinds of "violations" as possible early in the development process, you're forcing the developers to spend time making it all right from the start -- before release. In the space shuttle context, a runtime failure may have critical consequences. But in the startup context, failure to release on time may have critical consequences. Releasing before your competitor may make or break your entire business. Or maybe it's your personal project and life gets in the way and you just end up never releasing at all. Personally, a lot of the joy I get out of making software, is seeing people I know benefit from using it. But if I don't see that, I'm much more likely to give up on it entirely.

People say that when a bug is found, you have to fix it, regardless of whether you're doing static or dynamic checking. They then conclude that it's a no-brainer that you'd rather have this happen sooner than later. But no one ever talks about the cost of fixing bugs. Fixing everything up front, the way static checking forces you to do, unnecessarily increases your costs if that feature eventually gets scrapped. And in startups, this happens all the time.

In the startup context, there is often an excess of good ideas. The problem is, you don't know which one will be the jackpot and you only have enough time and money to pursue a small handful of them. One approach is to do a rough sketch of as many ideas as you can, see which ones start to get traction, and then flesh out the finer details on the ones that do. The ones that don't get traction are scrapped.

Now, if you were using a tool that said every part of your program must make sense and be free of violations before you can run it, you would have spent all that time fixing bugs in features (and possibly entire projects) that no one ever used. The opportunity cost is that you were not implementing 10 other great ideas. On the other hand, if you were using tools that allowed bugs to be present alongside working code, you would be free to polish something when it became a priority to you.

Also, in government-funded projects for the space program, you essentially have all the time and funding you could want from the start. Your goal is to use that funding to eliminate all possible runtime errors, possibly pushing back the release to do as much of this as possible. For the most part, it's okay. You'll just get more funding.

However, in the startup world, it's the exact opposite. If you're a poor developer trying to make it big, you have no money now. But if you can prove your app is valuable and can make lots of money, then and only then will investors consider giving you money for it. Before funding, you're lucky if you have one full-time developer. Only after funding, when the project has to have already proven itself, do you have the funds to pay the developers you need to stomp out all the bugs.

It's not that static is better than dynamic or dynamic is better than static. It all depends on what you're doing. You have to choose what makes sense for what you're doing in the context you're doing it in.

The idealist in me wants to eliminate all bugs in code I write before release. And even more so in the code other people write. I totally get that. But the pragmatist in me (which only developed after having to write real production code for a real startup that pays my actual bills) knows that sometimes it makes sense to choose to not fix bugs. Static checks tell me "no, fix them now". Dynamic checks empower me to choose what I need.


Static typing doesn't have to force you to fix them now; only statically enforced typing does. An ideal (and achievable) language would check types statically but still let you run programs that contain type errors.

I know virtually every static type system in use today doesn't do this, but it still bothers me when people just give up and advocate dynamic typing for this reason.
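For illustration, the proposed behavior can be approximated today by pairing a dynamic language with an external checker: Python runs the program regardless, while a tool like mypy reports the type errors statically (GHC's -fdefer-type-errors flag gives Haskell similar behavior). A sketch:

```python
def double(x: int) -> int:
    return x * 2

def not_yet_used() -> int:
    # A checker such as mypy reports a type error on the next line,
    # but the interpreter still loads and runs the whole module;
    # the mistake only bites if this function is actually called.
    return double("oops") + 1

print(double(21))  # the well-typed path runs fine
```

You get the static diagnostics up front, and you get to decide whether a reported error blocks release.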


I find this absurd. I don't even call them bugs when I get compiler errors. They are typos. They are trivial to fix, and I'd much rather fix them at 3pm before I've checked in the code than at 3am when it's live or 5 minutes before a presentation. If you really want to sketch a function, just make a stub that throws. It will only be 3 lines long.


Indeed, Haskell's standard library has a value for exactly this, undefined:

    bar z = foo z + baz z
    foo = undefined  -- stub: typechecks, crashes only if evaluated
    baz = undefined


A trivial example is always trivial to fix. And you can raise an unimplemented exception in just about every language.

But what I'm talking about is usually when the code works correctly at first. Then you come along 6 months later to this code that you didn't write and know nothing about to add some new feature. Along the way, you change a few things. And that breaks something subtle. A parameter becomes nil that used to be something. Or a key in a map is no longer created.

Static type systems would say, "if a parameter or map lookup may be nil, you must wrap its uses with a case to handle it". Which makes a lot of sense in the scope of this one function.

But if I don't care about the case when the parameter is nil (when it is, it usually means something much bigger is wrong, or I simply don't care about having that feature working anymore), why should I spend the time to track it down and handle it? It's not something trivial like a method being undefined. It's more like a method not being defined on a specific instance of a class whose methods get defined at runtime with metaprogramming, so it takes some tracking down. Sure, when it's just you and your entire code base is 300 lines, it doesn't matter. But when it's a large project and you have shit to get done in order for your company to get paid that day, whether or not your function handles the nil case that no one ever uses simply isn't so important.
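In Python terms (the function names are purely illustrative), the two styles look like this: a checker such as mypy would insist the Optional be handled at every use, while the dynamic style just uses the value and lets the rare nil path blow up at runtime.

```python
from typing import Optional

def shipping_label(address: Optional[str]) -> str:
    # "Static" style: the checker forces an explicit None branch,
    # even for a path the author may not care about.
    if address is None:
        raise ValueError("no address on file")
    return address.upper()

def shipping_label_dynamic(address):
    # "Dynamic" style: just use it; the rare None caller gets an
    # AttributeError at runtime instead of up-front ceremony.
    return address.upper()
```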




