
Ruby and Python are also mainstream, and both provide saner OOP.

C++ templates at least have a purpose and provide extra benefits (e.g. performance, compile-time computations for pre-caching). In Java, generics are erased at compile time and amount to little more than implicit type-casts.

About verbosity ... clear conventions are a lot more effective for readability and learning. In a dynamic language like Ruby / Python, if you don't know what an object's type is or what it does, the fix is as easy as ...

    obj = don_t_know_the_return_type()

    # doing lots of stuff

    import pdb
    pdb.set_trace()
That opens up a Python debugging console at the current point of execution, and you can inspect "obj" for its type / members and also modify its state. Also, if you're using "ipdb" instead of pdb, you also get auto-completion on <TAB> (i.e. like IntelliSense). You can do something similar in Ruby, and in Smalltalk the whole application is live while you're typing in the IDE.

There, problem solved.



> There, problem solved.

The problem isn't solved. You now know what the type is for that particular moment. What you don't know are the invariants of the don_t_know_the_return_type() function.

What types/subtypes it can return, whether it can return None, what exceptions it can throw, and whether those will change in the future. That information can only come from a type system and/or documentation. Simply reading the implementation only tells you the current state of the system, not the rules that will govern future iterations of it.

The more invariants that can be expressed concisely by the language itself and enforced by the compiler, the less work is left to the user of the function to review documentation/implementation.

This is one large reason why well-designed advanced type systems are so valuable -- you can express complex invariants using them, and then let the compiler enforce those invariants.
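For example, a minimal sketch (assuming Python's optional type annotations and an external checker such as mypy; parse_port and connect are made-up names): the signature itself states "this may return None", and the checker then refuses callers that forget to handle that case.

    from typing import Optional

    def parse_port(text: str) -> Optional[int]:
        """Return the port as an int, or None if 'text' is not a valid port."""
        if text.isdigit() and 0 < int(text) < 65536:
            return int(text)
        return None

    def connect(spec: str) -> None:
        port = parse_port(spec)
        # A checker such as mypy flags arithmetic on 'port' at this point,
        # because the annotation says it may still be None:
        #     print(port + 1)
        if port is not None:
            print("connecting on port", port)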


First, even in Java you don't know the return type / invariants ...

    int n = func_that_returns_positive_even_number();
What you need is to document the thing:

    def func_that_returns_positive_even_number():
        """Returns a positive even number."""
This docstring will be available when you type "help(func_that_returns_positive_even_number)" in the Python console, btw.

Or if you're paranoid and that function can totally break your code:

    n = func_that_returns_positive_even_number()
    assert isinstance(n, int) and n % 2 == 0 and n > 0
Or to make extra sure this will hold in the future (i.e. protecting from code-changes done by other people) ...

    import unittest

    class TestMyFunc(unittest.TestCase):
        def test_is_positive_and_even(self):
            n = func_that_returns_positive_even_number()
            self.assertTrue(n % 2 == 0 and n > 0)


> First, even in Java you don't know the return type / invariants ...

This is a classic type-system straw man: the language doesn't support encoding integer ranges in its type system, ergo the type system is never a significant advantage and all invariants must be documented. You fool!

> What you need is to document the thing:

Some invariants require further documentation. The more you can express concisely in code via the type system, the more time you and your API clients save in both development and maintenance.

More succinctly: By expressing them in code you let the compiler automate the work of enforcing them.
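To make that concrete with the running example (a sketch assuming Python's typing module and a checker such as mypy; all names are made up): the "positive and even" rule lives behind one checked constructor, the checker tracks which values have passed through it, and the numeric check itself still happens exactly once at runtime.

    from typing import NewType

    # A distinct static type for values that have passed the check below.
    PositiveEvenInt = NewType("PositiveEvenInt", int)

    def positive_even(n: int) -> PositiveEvenInt:
        """The only intended way to obtain a PositiveEvenInt: validate, then wrap."""
        if n <= 0 or n % 2 != 0:
            raise ValueError("expected a positive even number, got %r" % n)
        return PositiveEvenInt(n)

    def func_that_returns_positive_even_number() -> PositiveEvenInt:
        return positive_even(42)

    def half_of(n: PositiveEvenInt) -> int:
        return n // 2

    half_of(func_that_returns_positive_even_number())   # fine
    # half_of(3)  # rejected by the checker: a plain int is not a PositiveEvenInt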

> Or if you're paranoid and that function can totally break your code:

An assert doesn't "protect" your code from future changes (better phrasing would be: make your code adaptable to change, loosely coupled with its dependencies, so that your code and its dependencies can evolve independently).

An assert simply causes your code to fail in an obvious way. It's still up to you to track them down (at runtime) and figure out where you went wrong.


"""The more you can express concisely in code via the type system, the more time you and your API clients save in both development and maintenance"""

That's not necessarily true ... the more complex the type-system, the more time you lose feeding it.

"""It's still up to you to track them down (at runtime) and figure out where you went wrong"""

A language with runtime dispatching and/or where NULL is allowed will have the same problems. I mean, personally I've had more null-pointer exceptions than all the other kinds of errors combined and multiplied by a dozen.

We are talking about Python versus Java here ... Haskell's type system is smarter and can detect lots of errors for you, but then again we were also talking about beginner-friendliness.


> Ruby and Python are also mainstream, and both provide saner OOP.

For various other reasons I prefer Python to Java (or Ruby), but there is one thing that Java got right and Python totally messed up: avoiding the disaster that is multiple inheritance.

Of course a much better option is to completely do away with inheritance like Go has done (plus Go combines some of the advantages of Python's 'duck typing' with the static type checking of Java's interfaces).


Where did you get the idea that multiple inheritance (in general) is a disaster? Multiple inheritance is only a disaster when the rules aren't clear on ...

      (a) what you're inheriting
      (b) what you're overriding
      (c) what happens when you call super()
Otherwise, inheritance is really useful, and Python scores pretty well on all of the above points.
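As a small illustration of (a)-(c) (a sketch with made-up classes): Python computes a single, inspectable method resolution order via C3 linearization, and super() follows it deterministically, even across a diamond.

    class Base(object):
        def greet(self):
            return ["Base"]

    class Left(Base):
        def greet(self):
            return ["Left"] + super(Left, self).greet()

    class Right(Base):
        def greet(self):
            return ["Right"] + super(Right, self).greet()

    class Child(Left, Right):
        def greet(self):
            return ["Child"] + super(Child, self).greet()

    # The inheritance rules are explicit and inspectable:
    print([cls.__name__ for cls in Child.__mro__])
    # -> ['Child', 'Left', 'Right', 'Base', 'object']
    print(Child().greet())
    # -> ['Child', 'Left', 'Right', 'Base']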

Also, Go doesn't have "duck typing". What Go has is called "structural typing", and it is a lot more limited than dynamic typing. Go is not really object-oriented either.
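For contrast, duck typing in Python is resolved entirely at runtime and nothing is declared anywhere (a trivial sketch with made-up classes), whereas a Go interface is a declared method set that is checked statically:

    class FileSource(object):
        def read(self):
            return "from a file"

    class SocketSource(object):      # no common base class, no declared interface
        def read(self):
            return "from a socket"

    def consume(source):
        # Works for any object that happens to respond to read(),
        # decided only when this line actually runs.
        print(source.read())

    consume(FileSource())
    consume(SocketSource())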


How do you define (and enforce) those rules? My general position when writing library code is that none of my classes should be subclassed.

The very few classes that may be subclassed are documented as such, and the methods that may be overridden are explicitly documented, as well as what behavior is required from the subclass when overriding those methods.

The invariants of complex inheritance hierarchies are very hard to understand. What happens if the superclass method isn't called? What happens if one of those methods is called, but another isn't, and the object is placed into an indeterminate state? What enforces that your subclass -- and all other subclasses -- will conform to these often complex and difficult to describe invariants?

This is very similar to multi-threading with mutable vs. immutable data. By making your data immutable, you grossly simplify the understanding of your system's behavior.


Why do you need to enforce rules, other than documenting the classes/methods defined?

     What happens if the superclass method isn't called?
Shit doesn't work or it breaks, then the person who sub-classed needs to fix it. Sometimes it also means your class is leaking encapsulation.

This also happens with plain aggregation/composition btw. It also happens with data-immutability (which has nothing to do with inheritance, as your object can be immutable and extend a dozen classes).


> Why do you need to enforce rules, other than documenting the classes/methods defined?

The less repetitive work we leave to fallible humans and the more we delegate to a machine, the more time we have for human ingenuity.

> Shit doesn't work or it breaks, then the person who sub-classed needs to fix it.

It's not that simple. The more difficult it is to understand the rules of behavior before changing the code, the more difficult it is to change the behavior. It's not just a question of expressing valuable -- but simple -- kindergarten requirements (this value may not be NULL), but also higher-level requirements (this method must be called in the context of a READ-COMMITTED transaction).

The more you can express concisely, the easier it is to mutate the system over time. It's not a question of breaking code -- or noticing when it breaks -- but having the language assist in simply not breaking it at all.

> This also happens with plain aggregation/composition btw.

Composition makes invariants easier to understand. If you then design your classes so poorly as to fail to enforce correct behavior through their API insofar as it is possible to do so, that is the programmer's failure.

> It also happens with data-immutability (which has nothing to do with inheritance, as your object can be immutable and extend a dozen classes).

Data immutability is related to the avoidance of inheritance insofar as they both very significantly facilitate the full and easy comprehension of an implementation's invariants.


     this method must be called in the context of a 
     READ-COMMITTED transaction
I get what you're saying, but I like conventions and clear APIs with proper encapsulation.

Here's a sample from Python/Django ...

    from django.db import connection, transaction

    @transaction.commit_on_success
    def do_stuff_with_the_db():
        cursor = connection.cursor()
        cursor.execute("insert into tmp values (1)")
        raise Exception   # everything above gets rolled back
Or if you need to supply the DB queries yourself, you can implement your own context manager and then use it with a "with" block ...

    with stuff.inside_transaction() as obj:
        obj.execute("query")
No need to extend a class that represents a transaction or some other shit like that.
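Such a context manager can be a plain generator function. A rough sketch (hypothetical, assuming a DB-API style connection object, nothing Django-specific):

    from contextlib import contextmanager

    @contextmanager
    def inside_transaction(connection):
        cursor = connection.cursor()
        try:
            yield cursor               # the body of the 'with' block runs here
            connection.commit()        # commit only if no exception escaped
        except Exception:
            connection.rollback()
            raise
        finally:
            cursor.close()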

       having the language assist in simply not breaking it at all
You know that's a utopian goal. What I dislike most about languages that try to detect too much shit for me is that they give me a false sense of security. And the worst offender is Java: not only is its type system too weak, but because it is manifest-typed you get the false impression that it guarantees things for you, when it doesn't.



