Good point. I would expect the immutable version of scaling a side to return a new rectangle, thus allowing a scaled square to also return a rectangle. I hadn't thought about mechanisms that would allow a type to basically upcast itself in a mutable operation. Of course, for typing purposes, the signature looks the same: This operation yields a rectangle. It's really just a matter of where the return value lives.
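A minimal sketch of that immutable version (TypeScript, with hypothetical names; `scaleWidth` and the shape interfaces are illustrative, not from any real library):

```typescript
interface Rectangle { readonly width: number; readonly height: number; }
// Structurally identical; the name just records the width === height intent.
interface Square extends Rectangle {}

// Scaling one side always yields a Rectangle, even when given a Square:
// the result may no longer be square, so the return type "upcasts" for us.
function scaleWidth(r: Rectangle, factor: number): Rectangle {
  return { width: r.width * factor, height: r.height };
}

const sq: Square = { width: 2, height: 2 };
const scaled = scaleWidth(sq, 3); // { width: 6, height: 2 } -- a Rectangle
```

The return value lives in a fresh object, so the question of mutating a square into a non-square never comes up.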
> for typing proposes, the signature looks the same: This operation yields a rectangle. It's really just a matter of where the return value lives.
Well, it yields a type union of square and rectangle. A runtime, whether or not the language appears mutable, can play games with that, like resolving types lazily. If, for example, some object is only ever drawn, and draw happens to dispatch independently of squareness, say on a predicate type DistantObjectThatLooksLikeADot, then there's no need to resolve whether it was square. No one will ever know, or ever have to pay the cost of finding out. It has, say, a MightBeSquareMightBeRectangleHaventHadToCareYet dictionary. :) This can become important as types get more expressive, and more expensive to prove/test.
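The consumer side of that idea can be sketched statically too (TypeScript; `drawAsDot` and the shape names are made up for illustration). The point is that a function dispatching only on shared structure never forces the union to be resolved:

```typescript
interface Rectangle { readonly width: number; readonly height: number; }
// Structurally the same as Rectangle; the name records the squareness intent.
interface Square extends Rectangle {}

type Shape = Square | Rectangle;

// Dispatches only on the bounding box. Squareness is never inspected,
// so a lazy runtime would never have to pay to prove or refute it.
function drawAsDot(s: Shape): string {
  return s.width < 1 && s.height < 1 ? "." : "#";
}

drawAsDot({ width: 0.5, height: 0.5 }); // "." -- was it square? no one asked
```
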
Sum types! I don't get to use that kind of stuff very much, so I tend not to think about them off the top of my head :)
How do languages with sum types handle scenarios where the type is A+B, but B is a subtype of A? So you're really guaranteed to have an A, but you may or may not have a B? Do they allow transparent reference as type A, or must you disambiguate first?
That is, given a function that takes an A, can you pass it a type A+B, given that B is a subtype of A?