Personally, I'm nursing a thesis that the study of concurrency is fertile ground for a formalization of modular design. Where parallelism is the optimization of a software system by running parts of it simultaneously, concurrency has much more to do with the assumptions held by individual parts of the program, and how knowledge is communicated between them. Parallelism requires understanding these facets insofar as the assumptions need to be protected from foreign action -- or insofar as we try to reduce the need for those assumptions in the first place -- but I expect that concurrency goes much further.
Concurrent constraint programming is a nifty approach in this vein -- it builds on a logic programming foundation where knowledge only increases monotonically, and replaces get/set on registers with ask/tell on lattice-valued cells. LVars is a related (but much more recent) approach.
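As a rough sketch of the ask/tell idea (toy names of my own, not any real CCP or LVars API): a cell holds a set that only ever grows, `tell` joins in new information, and `ask` blocks until a threshold predicate over the accumulated knowledge holds:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

// Toy ask/tell cell over the set-union lattice: `tell` can only add
// information (a monotone join), and `ask` blocks until a threshold
// predicate on the accumulated contents becomes true.
class SetCell<T> {
    private final Set<T> contents = new HashSet<>();

    // tell: merge in new knowledge; the set only ever grows.
    public synchronized void tell(T value) {
        contents.add(value);
        notifyAll();
    }

    // ask: wait until we know enough, then return a snapshot.
    public synchronized Set<T> ask(Predicate<Set<T>> threshold) {
        try {
            while (!threshold.test(contents)) wait();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new HashSet<>(contents);
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        SetCell<String> cell = new SetCell<>();
        Thread writer = new Thread(() -> { cell.tell("a"); cell.tell("b"); });
        writer.start();
        // Blocks until both facts have been told -- and gives the same
        // answer no matter how the writer's updates interleave with us.
        Set<String> known = cell.ask(s -> s.contains("a") && s.contains("b"));
        writer.join();
        System.out.println(known.size()); // prints 2
    }
}
```

Because a `tell` can never retract anything, the reader's threshold query is deterministic: any interleaving that satisfies it stays satisfied.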
A different approach, "session types", works at the type system level. Both ends of a half-duplex (i.e. turn-taking) channel have compatible (dual) signatures, such that one side may send when the other side may receive. Not everything can be modeled with half-duplex communications, but the ideas are pretty useful to keep in mind.
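For illustration, here's one (loose) way the duality idea can be encoded in a mainstream type system: each protocol step is its own type, and send/receive each return the next step, so using the channel out of order simply doesn't compile. All the names here (`Session`, `SendQuestion`, `RecvAnswer`) are my own invention, not from any session-types library, and the queues merely simulate the half-duplex wire:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Toy session encoding: the client's protocol is "send, then receive,
// then done"; the server's is the dual, "receive, then send". Each
// step is a type, and each operation returns the next step's type.
class Session {
    static class SendQuestion {
        private final BlockingQueue<String> toServer;
        private final BlockingQueue<String> toClient;
        SendQuestion(BlockingQueue<String> toServer, BlockingQueue<String> toClient) {
            this.toServer = toServer;
            this.toClient = toClient;
        }
        // Sending consumes this step and yields the receive step.
        RecvAnswer send(String question) {
            try { toServer.put(question); } catch (InterruptedException e) { throw new RuntimeException(e); }
            return new RecvAnswer(toClient);
        }
    }
    static class RecvAnswer {
        private final BlockingQueue<String> toClient;
        RecvAnswer(BlockingQueue<String> toClient) { this.toClient = toClient; }
        String receive() {
            try { return toClient.take(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        }
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> toServer = new SynchronousQueue<>();
        BlockingQueue<String> toClient = new SynchronousQueue<>();
        // The dual endpoint: receive first, then send -- exactly mirrored.
        Thread server = new Thread(() -> {
            try {
                toClient.put(toServer.take().toUpperCase());
            } catch (InterruptedException e) { throw new RuntimeException(e); }
        });
        server.start();
        // send(...).receive() is the only ordering that typechecks.
        String answer = new Session.SendQuestion(toServer, toClient).send("ping").receive();
        server.join();
        System.out.println(answer); // prints PING
    }
}
```

A real session-type system checks the duality of the two endpoints for you; here it's only enforced by convention, but the turn-taking structure is the same.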
I try to keep my software systems as functional as possible (where "functional" here means "no explicit state"). But there are always places where it makes sense to think in terms of state, and so I try to model that state monotonically whenever possible. At least subjectively, it's usually a lot simpler (and easier to follow) than unrestricted state.
(Note, of course, that local variables are local in the truest sense: other programmatic agents cannot make assumptions about them or change them. Short-lived, local state is as good as functional non-state in most cases.)
> I try to keep my software systems as functional as possible (where "functional" here means "no explicit state"). But there are always places where it makes sense to think in terms of state, and so I try to model that state monotonically whenever possible. At least subjectively, it's usually a lot simpler (and easier to follow) than unrestricted state.
Agreed. You mention LVars, so I'm curious what you think about MVars and STM in general. I've always been fond of STM because relational databases and their transactions are a familiar, well-understood concept the industry has historically used to keep state sane and maintain data integrity. SQLite is great, but having something even closer to the core language or standard library is better still.
It's part of why I like using SQL to do the heavy lifting when possible. I like that SQL is a purely functional language that naturally structures state mutations as transactions, backed by a write-ahead log. My flavor of choice (Postgres) offers different trade-offs between read/write efficiency and consistency through its transaction isolation levels, up to full serializability, without my having to reinvent the wheel with my own read and write semantics. If I structure my data model's keys, relations, and constraints properly, I get a production-strength implementation with a lot of the nice properties you talk about -- and that holds regardless of which service layer or language I stand up on top of it.
There's one exception in particular that I've seen begin to gain steam in the industry which I think is interesting, and that's Elixir. Because Elixir wraps Erlang's venerable OTP (and its distributed database, mnesia), users can build on top of something that has already solved a lot of the hard distributed-systems problems in the wild, in a very challenging use case (telecom switches). Of course, mnesia has its own issues, so most of the folks I know using Elixir are using it with Phoenix + SQL. They seem to like it, but I worry about ecosystem-collapse risk with any transpiled language -- no one wants to see another CoffeeScript.
I'm not especially familiar with either MVars or STM, so you'll have to make do with my first impressions...
MVars seem most useful for a token-passing / half-duplex form of communication between modules. I've implemented something very similar, in Java, when using threads for coroutines. (Alas, Project Loom has not landed yet.) They don't seem to add a whole lot over a mutable cell paired with a binary semaphore. Probably the most valuable aspect is that you're forced to think about how you want your modules to coordinate, rather than starting with uncontrolled state and adding concurrency control after the fact.
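To make that comparison concrete, here's a toy MVar built from exactly those parts -- one mutable slot plus two semaphores (illustrative names, not Haskell's actual Control.Concurrent.MVar API):

```java
import java.util.concurrent.Semaphore;

// Minimal MVar sketch: a one-slot cell where `take` empties the slot
// and `put` fills it, each blocking while the slot is in the wrong
// state. Literally a mutable cell guarded by a pair of semaphores.
class MVar<T> {
    private T slot;                                   // the single cell
    private final Semaphore empty = new Semaphore(1); // permits a put
    private final Semaphore full = new Semaphore(0);  // permits a take

    public void put(T value) {
        try { empty.acquire(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        slot = value;
        full.release();
    }

    public T take() {
        try { full.acquire(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        T value = slot;
        slot = null;
        empty.release();
        return value;
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        MVar<Integer> mv = new MVar<>();
        // Token-passing: the consumer blocks until the producer puts.
        Thread producer = new Thread(() -> mv.put(42));
        producer.start();
        System.out.println(mv.take()); // prints 42
        producer.join();
    }
}
```

The semaphore pair enforces the strict alternation: `put` then `take` then `put`, which is the half-duplex hand-off described above.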
STM seems very ambitious, but I struggle to imagine how to build systems using STM as a primary tool. Despite its advantages, it still feels like a low-level primitive. Once I leave a transaction, if I read from the database, there's no guarantee that what I knew before is true anymore. I still have to think about what the scope of a transaction ought to be.
Moreover, I get the impression that STM transactions are meant to be linearizable [1], which is a very strong consistency requirement. In particular, there are questions about determinism: if I have two simultaneous transactions, one of them must commit "first", before the other, and that choice is not only arbitrary, but the program can evolve totally differently depending on it.
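A toy way to see that nondeterminism: model a one-cell "STM" as an optimistic read-compute-compare-and-set loop (purely illustrative -- real STM implementations are far more involved), and run two non-commuting transactions against it:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Sketch of an optimistic transaction over a single cell: read a
// snapshot, compute, and retry if another commit slipped in first.
class Cell {
    private final AtomicReference<Integer> ref = new AtomicReference<>(1);

    public void atomically(UnaryOperator<Integer> txn) {
        while (true) {
            Integer snapshot = ref.get();
            Integer result = txn.apply(snapshot);
            if (ref.compareAndSet(snapshot, result)) return; // commit
            // otherwise: someone committed first; retry on the new state
        }
    }

    public int read() { return ref.get(); }
}

public class Main {
    public static void main(String[] args) throws Exception {
        Cell cell = new Cell();
        // Two non-commuting transactions: one doubles, one adds ten.
        Thread a = new Thread(() -> cell.atomically(x -> x * 2));
        Thread b = new Thread(() -> cell.atomically(x -> x + 10));
        a.start(); b.start();
        a.join(); b.join();
        // Starting from 1: either (1*2)+10 = 12 or (1+10)*2 = 22.
        // Each commit is atomic, but the order -- and hence the final
        // state -- is decided by the scheduler.
        System.out.println(cell.read()); // 12 or 22
    }
}
```

Both runs are "correct" in the linearizability sense; the point is that the program's future depends on an arbitrary scheduling choice.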
There are some situations where this "competitive concurrency" is desirable, but I think most of the time, we want concurrency for the sake of modularity and efficiency, not as a source of nondeterminism. When using any concurrency primitive that allows nondeterminism, if you don't want that behavior, you have to very carefully avoid it. As such, I'm most (and mostly) interested in models of concurrency that guarantee deterministic behavior.
Both LVars and logic programming are founded on monotonic updates to a database. Monotonicity guarantees that if you "knew" something before, you "know" it forever -- there's nothing that can be done to invalidate knowledge you've obtained. This aspect isn't present in most other approaches to concurrency, be it STM or locks.
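A tiny illustration of that guarantee (illustrative code, not from any particular library): a cell whose updates can only move up a lattice -- here, max over integers -- ends in the same state no matter how concurrent tells interleave:

```java
// Monotone max-cell: `tell` joins the new value in with max, so
// updates commute. Once you know the value is at least n, nothing
// can ever invalidate that knowledge.
class MaxCell {
    private int value = Integer.MIN_VALUE;

    public synchronized void tell(int v) { value = Math.max(value, v); }
    public synchronized int read() { return value; }
}

public class Main {
    public static void main(String[] args) {
        // Both orders of the same updates yield the same final state.
        MaxCell ab = new MaxCell();
        ab.tell(3); ab.tell(5);
        MaxCell ba = new MaxCell();
        ba.tell(5); ba.tell(3);
        System.out.println(ab.read() == ba.read()); // prints true
    }
}
```

Contrast with get/set on a register, where `set(3); set(5)` and `set(5); set(3)` leave different values behind -- that order-sensitivity is exactly what monotonicity removes.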
The CALM theorem [2] is a beautiful, relatively recent result identifying consistency of distributed systems with logical monotonicity, and I think the most significant fruits of CALM are yet to come. Here's hoping for a resurgence in logic programming research!
> There's one exception in particular that I've seen begin to gain steam in the industry which I think is interesting, and that's Elixir.
I've not used Elixir, but I very badly want to. It (and Erlang) has a very pleasant "functional core, imperative shell" flavor to it, and its "imperative shell" is like none other I've seen before.