Hacker News
An Experimental New Type Inference Scheme for Rust (smallcultfollowing.com)
185 points by pohl on July 9, 2014 | 11 comments


My colleague zwarich aptly said that "this changes our type inference algorithm from a master's thesis to a blog post". As we approach 1.0, language simplifications like this are really valuable; the fact that it doesn't break existing code is even better.


I would say the fact that it doesn't break existing code is more like the minimum requirement ;-) Even better would be if it worked in every case the existing system does. But yeah, simplicity is very desirable.


The Rust language is still unstable. Breaking changes are expected up until the 1.0 release, because they want to make sure they get things right and don't have to live forever with bad syntax or semantic decisions made in 0.0.1. After 1.0 it will be more stable :)


I really appreciate the effort Niko puts into blogging about this sort of thing. It's really neat to watch Rust evolve in a fairly public manner. I'm not sure any other language has played out these sorts of relatively early design decisions so publicly.

I also found this article really interesting: http://smallcultfollowing.com/babysteps/blog/2014/05/13/focu....


I always felt Slava Pestov's (now abandoned) blog on Factor's development was a good view into language design: http://factor-language.blogspot.com/

He and Andy Wingo (http://wingolog.org) both write very well on language issues though they may skew towards implementation concerns over design concerns.


As someone currently working on a type inference scheme myself (although for a completely different purpose), and as a Rust programmer, I found this a really interesting article! The type theorist in me says that losing any amount of expressiveness is a bad thing, but in this case my pragmatism wins out: this simpler but weaker scheme is probably good enough :)


What would this break, theoretically? It's clearly not as rigorous as something based on HM, but I'm curious what the specific disadvantages are.


The article gives an example of a program that fails to typecheck, based on function variance.

Note that this algorithm is based on HM. You can't do plain old HM if you have subtyping; you have to use some algorithm derived from it. This is just one of many possible such algorithms.
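
For a rough sense of where subtyping shows up in Rust inference, here's a minimal toy sketch of my own (not the example from the article): lifetimes induce a subtyping relation, and function types are contravariant in their argument types, so the checker has to solve subtype constraints rather than plain unification equalities.

    fn takes_static(_: &'static str) {}

    // `f` must accept any `&'a str` the caller chooses.
    fn call_with<'a>(f: fn(&'a str), s: &'a str) {
        f(s);
    }

    fn main() {
        // Accepted: inference picks 'a = 'static and the types line up.
        let s: &'static str = "ok";
        call_with(takes_static, s);

        // Rejected: for the shorter lifetime of `local` we would need
        // fn(&'static str) <: fn(&'a str), i.e. &'a str <: &'static str
        // (argument position is contravariant), which doesn't hold.
        let local = String::from("hi");
        // call_with(takes_static, &local); // error: lifetime mismatch
    }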


Unidirectional inference in the presence of subtyping is undecidable in the general case. Languages like Haskell and OCaml don't have this problem because they don't include subtyping, precisely for this reason.
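
The undecidability itself is hard to demonstrate in a few lines, but here's a tiny sketch of my own (not from the article) of why subtyping complicates inference in Rust: a lifetime variable no longer has a single unifying solution, just a set of subtype constraints the checker must satisfy.

    fn pick<'a>(a: &'a str, _b: &'a str) -> &'a str { a }

    fn main() {
        let s: &'static str = "static";
        let owned = String::from("local");
        // 'a has no unique solution: it must be outlived by both
        // 'static and the borrow of `owned`. Subtyping lets the
        // checker shrink &'static str to the shorter &'a str.
        let r = pick(s, &owned);
        println!("{}", r);
    }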


OCaml has subtyping: it addresses the undecidability issue by requiring "cast" annotations for some use cases.


Semi-unification is always a pain; I really hope this is technically sound and tractable.



