Comment by kragen

2 years ago

i think that's a purely semantic or terminological debate, so it isn't important which part we declare 'essential' and which part we declare 'accidental', and arguments that one definition or the other is better are pointless

what is important is that we understand which definitions people were using in particular utterances, that our definitions have adequate ontological coherence (avoiding "eargrayish" reasoning errors), and that other people can understand which definitions we are using

in brooks's 01986 paper he clearly considers questions like copying 2 gigabytes of data 'accidental': http://worrydream.com/refs/Brooks-NoSilverBullet.pdf

> All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. ... The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed. ...

so it isn't important what mike acton thinks the terms should mean unless we're trying to understand how he uses them; brooks, moseley, and marks lump space and speed constraints into the category of 'accidental' rather than 'essential', and their arguments need to be interpreted using the definitions they intended to use, not conflicting definitions invented by a youtuber decades later

similarly, people commenting on moseley and marks's paper should be assumed to be using the same definitions as moseley and marks, not conflicting definitions, unless they explicitly state otherwise

• Totally fair. Sorry, I'm walking into this conversation halfway through with a different axe to grind.

  • there's an interesting aspect to recursive decomposition there; an 'essential problem' at one level of abstraction may merely be an accident introduced one level higher up

    like, lots of programs are specified to do one or another thing with the filesystem, and have to include extra complexity to do it, but the filesystem is something we introduced and could do without; it doesn't exist in objective reality outside the computer. is that complexity accidental or essential? at the level of the program it's essential (especially if the program is something like find(1) or cp(1), whose job can't be defined at all without presupposing a filesystem) but at the level of the system it's accidental
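
    as a toy illustration of how the same 'essence' survives a change of representation (all names here are hypothetical, just a sketch): suppose the conceptual construct is a mapping from names to byte strings; whether a filesystem is involved is a property of the representation, not of the construct

        # hypothetical sketch: one abstract construct, two representations
        from pathlib import Path

        class DictStore:
            """the name->bytes construct with no filesystem at all"""
            def __init__(self):
                self._data = {}

            def put(self, name: str, payload: bytes) -> None:
                self._data[name] = payload

            def get(self, name: str) -> bytes:
                return self._data[name]

        class FileStore:
            """the same construct, represented on a filesystem"""
            def __init__(self, root: str):
                self._root = Path(root)
                self._root.mkdir(parents=True, exist_ok=True)

            def put(self, name: str, payload: bytes) -> None:
                (self._root / name).write_bytes(payload)

            def get(self, name: str) -> bytes:
                return (self._root / name).read_bytes()

    a program written against put/get carries only the essential complexity of the mapping; a program like cp(1), whose spec is the filesystem itself, can't shed FileStore's extra machinery, because at that level the filesystem is part of the problem statement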

    • 100%

      Division of labor: making accidental complexity essential since 1945.

      Then again, perhaps we need to account for opportunity cost. Even if persistent storage can take many forms, it's hard to imagine that equivalent features could be 10x simpler. Maybe Chuck Moore would disagree, but his modus operandi is usually to insist you don't need something... (also totally fair)