
Comment by enlyth

1 year ago

> There is no essential mutable state in computer science.

Yes, theoretically. Now imagine your mutable state is 2 GB in size; have fun creating a copy of it on every change.

You are not getting the point of my comment or the original article. It's clear to me that the kind of complexity the article is talking about is cognitive complexity, not computational complexity. It's not about choosing between O(n) copying of data and O(1) in-place mutation; it's about choosing code that's easy to comprehend and maintain.
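
Concretely, the tradeoff being set aside here, as a toy Rust sketch (an illustration, nothing more):

```rust
fn main() {
    // Stand-in for a large mutable state (pretend it's the 2 GB above).
    let mut state = vec![0u8; 1 << 20];

    // O(1): in-place mutation touches a single byte.
    state[0] = 1;

    // O(n): a naive immutable update clones the whole buffer first.
    let mut next = state.clone();
    next[0] = 2;

    println!("{} {}", state[0], next[0]);
}
```

Persistent data structures are the usual way to split that difference, which is where the git analogy further down comes in.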

  • Their point is that you can't make the cognitively simpler choice because of real-world design constraints, such as not consuming all available RAM, i.e. because of essential complexity.

    • That's complexity introduced by your approach to the solution (using a computer that isn't large enough) rather than complexity essential to the problem you're trying to solve.

Btrfs does things like instantaneous snapshots of a multi-terabyte filesystem. I have no idea how they do it, but apparently it's 100% possible.
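
For a sense of how that can work: with copy-on-write structural sharing, a snapshot is just one more reference to the current root, and a write copies only the path it touches. A toy Rust sketch of the idea (hypothetical names, not Btrfs's actual design):

```rust
use std::rc::Rc;

// A tree of shared nodes; a "filesystem" is just a pointer to a root.
enum Node {
    Leaf(Vec<u8>),
    Branch(Rc<Node>, Rc<Node>),
}

// Replace the left subtree, sharing the untouched right subtree
// with every existing snapshot.
fn write_left(root: &Rc<Node>, data: Vec<u8>) -> Rc<Node> {
    match &**root {
        Node::Branch(_, right) => Rc::new(Node::Branch(
            Rc::new(Node::Leaf(data)),
            Rc::clone(right),
        )),
        Node::Leaf(_) => Rc::new(Node::Leaf(data)),
    }
}

fn main() {
    let fs = Rc::new(Node::Branch(
        Rc::new(Node::Leaf(vec![1])),
        Rc::new(Node::Leaf(vec![2])),
    ));
    let snapshot = Rc::clone(&fs); // O(1): only a refcount bump
    let modified = write_left(&fs, vec![9]); // copies one path, shares the rest
    let _ = (snapshot, modified); // the old snapshot still sees the old data
}
```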

  • Well, since it's claiming not to mutate state, it must be simple! Go read the code and tell us what you learn.

    In the event you find the source code TL;DR: indeed, it seems implementing an immutable API is complex.

To see immutability on huge state in action, see git.

That is essentially what must be going on under the hood to model mutable state. Every mutation made to a variable is a commit. The only commits that are kept are the ones that still have live references.
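
A minimal sketch of that idea (hypothetical names, with Rust's Rc in place of a real reference-counting scheme):

```rust
use std::rc::Rc;

// Every "mutation" of a variable allocates a new commit that points at
// its parent; nothing is overwritten, and every version that is still
// referenced remains readable.
struct Commit {
    value: i64,
    parent: Option<Rc<Commit>>,
}

fn set(parent: &Rc<Commit>, value: i64) -> Rc<Commit> {
    Rc::new(Commit { value, parent: Some(Rc::clone(parent)) })
}

fn main() {
    let root = Rc::new(Commit { value: 0, parent: None });
    let v1 = set(&root, 1); // the first mutation is a commit
    let v2 = set(&v1, 2);   // and so is the next
    // Walking history is just following parent pointers.
    println!("{} <- {} <- {}", root.value, v1.value, v2.value);
    println!("v2's parent holds {}", v2.parent.as_ref().unwrap().value);
}
```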

Now you're thinking this could be a big memory leak. But if you pair it with reference counting, you can eliminate the leak.

Essentially, when the last reference to the earliest commit goes out of scope, the next-earliest commit is checked out as the new root commit and the earliest commit is freed.

The memory model is then like a moving window over a git-style linked list: mutations are committed at the tail while memory is freed at the head.
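
A minimal sketch of that window (hypothetical names again; advancing the head is just severing the oldest parent link so its count hits zero):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Each commit keeps its parent alive through an Rc; freeing the head of
// the window means cutting the oldest link in the chain.
struct Commit {
    id: u32,
    parent: RefCell<Option<Rc<Commit>>>,
}

impl Drop for Commit {
    fn drop(&mut self) {
        println!("freed commit {}", self.id); // refcount reached zero
    }
}

fn commit(parent: &Rc<Commit>, id: u32) -> Rc<Commit> {
    Rc::new(Commit { id, parent: RefCell::new(Some(Rc::clone(parent))) })
}

fn main() {
    let c0 = Rc::new(Commit { id: 0, parent: RefCell::new(None) });
    let c1 = commit(&c0, 1);
    let c2 = commit(&c1, 2); // mutations arrive at the tail
    drop(c0); // our handle is gone; c1's parent link still keeps 0 alive

    // Advance the head: check out commit 1 as the new root, freeing 0.
    *c1.parent.borrow_mut() = None; // prints "freed commit 0"

    println!("window is now commits {}..={}", c1.id, c2.id);
}
```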