Hacker News

From a language design perspective it makes a lot of sense to add linear types to the language itself instead of using an encoding. Every encoding that I know of (such as region types encoded as monads, which I think is what the article is getting at) leads to excessive sequentialization of code. That in turn causes a lot of boilerplate (or hairy type inference problems) at compile time, as well as suboptimal run-time performance.
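To make the sequentialization point concrete, here is a minimal sketch (not from the comment; the `Region`/`alloc` names are hypothetical) of a monadic-style encoding in Rust, where every operation consumes a region token and returns a new one. Even logically independent allocations are forced into one sequential chain:

```rust
// Hypothetical region-as-token encoding: each operation consumes the
// region token and hands back a fresh one, so the program order is
// fixed by the token threading, not by actual data dependencies.

struct Region { next_id: u32 }
struct Handle { id: u32 }

// Allocate in the region; consumes the old token, returns a new one.
fn alloc(r: Region) -> (Region, Handle) {
    let h = Handle { id: r.next_id };
    (Region { next_id: r.next_id + 1 }, h)
}

fn demo() -> (u32, u32) {
    let r = Region { next_id: 0 };
    // `a` and `b` are independent, yet the encoding imposes an order
    // and a fresh token name (r1, r2, ...) at every step.
    let (r1, a) = alloc(r);
    let (r2, b) = alloc(r1);
    let _ = r2;
    (a.id, b.id)
}

fn main() {
    let (a, b) = demo();
    println!("{} {}", a, b);
}
```

With linear types in the core language, the compiler could track the region directly and would not need this token plumbing in user code.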

Linear types are the perfect example of a feature that belongs in the core language, or at the very least in a core intermediate language. They are expressive, in that you can encode a lot of high-level design ideas into linear types, and a number of complicated front-end language features can be compiled into a linearly typed intermediate language. Linear types have clear semantics and can even be used to improve code generation. If we set aside the messy question of how best to expose linear types in a high-level language, this is just an all-around win-win...
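As one example of encoding a high-level design idea into (affine) types, here is a sketch using Rust's move semantics, with a hypothetical `Conn` API: the protocol "a connection must not be used after close" becomes a compile-time guarantee because `close` consumes the value.

```rust
// Sketch: move semantics as affine types. `close` takes `Conn` by
// value, so any later use of the connection is a compile error.

struct Conn { sent: u32 }

// Each operation consumes the connection and returns it, keeping
// exactly one live handle to it at all times.
fn send(mut c: Conn, _msg: &str) -> Conn { c.sent += 1; c }

// Consumes the connection for good; returns how many messages went out.
fn close(c: Conn) -> u32 { c.sent }

fn demo() -> u32 {
    let c = Conn { sent: 0 };
    let c = send(c, "hello");
    let c = send(c, "world");
    close(c)
    // send(c, "again"); // would not compile: `c` was moved into `close`
}

fn main() { println!("{}", demo()); }
```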



Have you taken a look at Clean, the programming language? It has had uniqueness types (used for resource management, but less restrictive than linear types) for decades, and guess what? They invented special syntax (the "#-notation") which introduces shadowable names, much like regular monad syntax does. And code written with this syntax is basically sequential most of the time, although uniqueness types do allow for something like a "where" clause. You just easily get lost among these handle, handle1, handle2... names.
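The naming problem looks roughly like this sketch (in Rust rather than Clean, with a made-up `write_line` API): each operation consumes the unique handle and returns an updated one, so the programmer threads handle, handle1, handle2... by hand, and shadowing (as with Clean's #-notation) only hides the renaming rather than removing it.

```rust
// Clean-style explicit state threading: the unique file handle is
// consumed and re-returned by every operation.

struct FileH { written: Vec<String> }

fn write_line(mut h: FileH, line: &str) -> FileH {
    h.written.push(line.to_string());
    h
}

fn demo() -> Vec<String> {
    let handle = FileH { written: vec![] };
    let handle1 = write_line(handle, "first");
    let handle2 = write_line(handle1, "second");
    // Reusing `handle` or `handle1` here would be rejected: both moved.
    handle2.written
}

fn main() { println!("{:?}", demo()); }
```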

I do not oppose inferring linear use in a core and/or intermediate representation (GRIN allowed for that and more). I just do not see the utility of linear types at the high level, in the language that is visible to the user.




