Comment by lucasdicioccio

1 year ago

IMHO, in the context of the Tar-Pit paper (programming for services and applications rather than electronics), mutation is merely a means to an end. The end being: some piece of memory holds the value I expect (e.g., to transmit, store, or show information). Thus I disagree that mutable state is essential complexity. For the rest of the post, I don't understand what you mean by "walking the line between avoiding/ignoring mutable state", and I wish you had elaborated on what you mean by "play very well with Turing neighbors", because I cannot really connect with that.

Regarding functional programming and mutation: in a typed "pure-FP" language like Haskell,

- you need to explicitly declare what constitutes a valid value,
- which drives what valid values are observable between program steps,
- which informs how corrupted a data structure can be in your program.

For instance, take a tree-shaped structure like `Tree1 = Leaf Int32 | Node Int32 Tree1 Tree1` and suppose you have some `MutableRef Tree1`. You know exactly that from the ref you can read a whole Tree1; you can change the content of the MutableRef, but you need to give a whole new Tree1. In particular, you are sure to never read "a half-initialized tree" or "a tree that was changed while I was reading it", because such things simply do not exist in the realm of valid Tree1s. Such simple guarantees are really important to many programs, yet most programming languages do not provide them.
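A minimal sketch of this guarantee in Haskell, assuming `IORef` plays the role of `MutableRef` (and `Int` stands in for `Int32` for brevity):

```haskell
import Data.IORef

-- Tree1 as a Haskell ADT: every value of this type is a whole,
-- fully-initialized tree; a "partial" Tree1 is not expressible.
data Tree1 = Leaf Int | Node Int Tree1 Tree1
  deriving Show

main :: IO ()
main = do
  ref <- newIORef (Leaf 1)
  -- Mutating the ref means supplying a whole new valid Tree1;
  -- any read in between observes one complete tree or the other.
  writeIORef ref (Node 2 (Leaf 1) (Leaf 3))
  readIORef ref >>= print
```

The point is that the type system confines mutation to the single ref cell: the trees themselves stay immutable, so a reader can never observe a tree mid-construction.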

Of course, if for some reason (e.g., performance) you need something more flexible and are willing to pay the extra complexity, you can do it. As an illustration, you can augment your Tree1 with some MutableRef as well: `Tree2 = Leaf Int32 | Node Int32 Tree2 Tree2 | Cell (MutableRef Tree1)`. Here Tree2 contains mutable references to valid Tree1s, so Tree2 is not fractally complicated, though you could make it so (by having Cell hold a Tree2). With Tree2, you can interleave reads, writes, and other operations while working on the tree. These extra effects may be required (for performance reasons, for instance) at the expense of inviting surprising behaviors (bugs). In pure-FP it is clear that with Tree2 we have lost the possibility to "read/write a whole tree in a single program step". Thus, if the control flow of the code is not linear (e.g., multi-threaded or callback-heavy), you may have fun things occurring, like printing one half of the tree at a given time and the second half after a mutation, resulting in printing a tree that never existed. Enforcing global properties like a heap invariant becomes inconvenient, because it is actually hard once we let mutation in. I'd go as far as saying that a Tree2 doesn't exist as a tree: given a Tree2, we are merely capable of enumerating tree-shaped chunks piecewise.
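A sketch of that "tree that never existed" scenario, again using `IORef` for `MutableRef`; the `showTree2` helper and the manual interleaving below are illustrative names of mine, not from the original:

```haskell
import Data.IORef

data Tree1 = Leaf1 Int | Node1 Int Tree1 Tree1
  deriving Show

-- Tree2 interleaves mutable cells into the structure, so reading it
-- whole is no longer a single program step.
data Tree2
  = Leaf Int
  | Node Int Tree2 Tree2
  | Cell (IORef Tree1)

-- Rendering a Tree2 is an IO action: between visits to two Cells,
-- other code may have mutated one of them.
showTree2 :: Tree2 -> IO String
showTree2 (Leaf n)     = pure (show n)
showTree2 (Node n l r) = do
  ls <- showTree2 l
  rs <- showTree2 r
  pure ("(" ++ show n ++ " " ++ ls ++ " " ++ rs ++ ")")
showTree2 (Cell ref)   = show <$> readIORef ref

main :: IO ()
main = do
  ref <- newIORef (Leaf1 0)
  -- Simulate non-linear control flow: a mutation lands between
  -- reading the left half and the right half of the same node.
  left  <- showTree2 (Cell ref)
  writeIORef ref (Leaf1 9)
  right <- showTree2 (Cell ref)
  -- Prints "(1 Leaf1 0 Leaf1 9)": a snapshot that never existed
  -- at any single moment, since both cells are the same ref.
  putStrLn ("(1 " ++ left ++ " " ++ right ++ ")")
```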

This reply is long already, so I won't go into how Haskell has various flavors of mutable refs in IO, ST, and STM, and recently even got linear types to allow some fine-grained composition of atomic mutations, interleaving, and prevention of effects. But overall, I find that pure-FP takes mutability much more seriously than other, more mainstream languages: pure-FP just keeps us honest about where non-determinism bites. The collective failure to realize this is one of the key reasons why we are in The Tar-Pit: violated in-memory invariants can compound even outside programs, by forcing outside/future users to deal with things like misleading information being shown to users, urgently upgrading your fleet of servers, days of one-off data cleanup, and so on and so forth.
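For a taste of the STM flavor mentioned above (from the standard `stm` package): two mutations composed into one atomic transaction, so no other thread can ever observe the halfway state. The `transfer` function is a hypothetical example of mine:

```haskell
import Control.Concurrent.STM

-- Two TVar updates composed into a single STM transaction.
-- Other threads see either both updates or neither.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)

main :: IO ()
main = do
  a <- newTVarIO 10
  b <- newTVarIO 0
  atomically (transfer a b 3)
  readTVarIO a >>= print  -- 7
  readTVarIO b >>= print  -- 3
```

This is the "composition of atomic mutations" trade-off in miniature: inside `atomically` you regain whole-value reasoning, at the cost of restricting what effects the transaction may perform.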