
Comment by donatj

5 hours ago

Can you explain how that's helpful? I'm not being obtuse, I just don't follow.

One thought is that it takes a whole number of bits (3) to bit-address within an 8-bit byte, whereas it takes log_2(10) ≈ 3.32 bits, rounded up to 4, to bit-address a 10-bit byte. Sorta just works out nicer in general to have powers of 2 when you're working in base 2.
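A quick way to see that arithmetic (a throwaway Python sketch; the byte widths are just examples):

```python
import math

# Address bits needed to select one bit inside a byte of the given width.
for width in (8, 10, 16):
    exact = math.log2(width)      # fractional "ideal" address width
    needed = math.ceil(exact)     # what you actually have to wire up
    print(f"{width:>2}-bit byte: log2 = {exact:.2f}, address bits needed = {needed}")

# 8-bit byte : log2 = 3.00 -> 3 address bits, every address used
# 10-bit byte: log2 = 3.32 -> 4 address bits, 6 of the 16 addresses wasted
# 16-bit byte: log2 = 4.00 -> 4 address bits, every address used
```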

  • This is basically the reason.

    Another part of it is that it's a lot easier to represent stuff in hex when the bytes line up with the hex digits.

    I can represent 255 as 0xFF, which fits nice and neat in 1 byte. However, if a byte were 10 bits, that hex no longer really works: you have 1024 values to represent, and the max value would be 0x3FF, which just looks funky.

    Coming up with an alphanumeric system that represents 2^10 values cleanly just ends up weird and unintuitive.

    • We probably wouldn't have chosen hex in a theoretical world where bytes were 10 bits, right? It would probably be two groups of 5, like 02:21 == 85 (like an IP address), or five groups of 2, like 01111 == 85 in base 4. The digit width just has to be a divisor of the byte size (see the sketch below).
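A small sketch of those two groupings for a hypothetical 10-bit byte (plain Python; `group_digits` is a made-up helper, not anything standard):

```python
def group_digits(value, bits_per_digit, total_bits=10):
    """Split value into fixed-width digit groups, most significant first."""
    assert total_bits % bits_per_digit == 0, "digit width must divide the byte size"
    mask = (1 << bits_per_digit) - 1
    n = total_bits // bits_per_digit
    return [(value >> (bits_per_digit * (n - 1 - i))) & mask for i in range(n)]

# 85 inside a hypothetical 10-bit byte:
print(group_digits(85, 5))  # [2, 21]          -> "02:21", two base-32 digits
print(group_digits(85, 2))  # [0, 1, 1, 1, 1]  -> "01111", five base-4 digits
# Hex (4 bits per digit) is the one that trips the assert: 4 does not divide 10,
# which is exactly why 0x3FF "looks funky" for a 10-bit byte.
```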

Many circuits have ceil(log_2(N_bits)) scaling with respect to propagation delay and other dimensions, so you're just leaving efficiency on the table if you aren't using a power of 2 for your bit size.

It's easier to go from a bit number to (byte, bit) if you don't have to divide by 10.
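A minimal sketch of that bit-number-to-(byte, bit) point, assuming 8-bit versus hypothetical 10-bit bytes (the function names are invented for illustration):

```python
def locate_8(bit_index):
    # Power-of-two byte width: a shift and a mask, which in hardware is just wiring.
    return bit_index >> 3, bit_index & 0b111

def locate_10(bit_index):
    # Non-power-of-two byte width: a real divide and modulo on every lookup.
    return bit_index // 10, bit_index % 10

print(locate_8(85))   # (10, 5): bit 85 lives at bit 5 of byte 10
print(locate_10(85))  # (8, 5):  bit 85 lives at bit 5 of byte 8
```

The shift/mask version also lines up with the ceil(log_2(N_bits)) comment above: the low 3 address bits select the bit and the rest select the byte, with no arithmetic at all.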

Because modern computing has settled on Boolean (binary) logic (0/1, or true/false) in chip design, which has given us 8-bit bytes (a power of two). It is the easiest and most reliable to design and implement in hardware.

On the other hand, if computing had settled on a three-valued logic (e.g. 0/1/«something», where «something» has been proposed as -1, «undefined»/«unknown»/«undecided», or a «shade of grey»), we would have had 9-bit bytes (a power of three).

10 was tried numerous times at the dawn of computing and… it was found too unwieldy in circuit design.

  • > On the other hand, if computing had settled on a three-valued logic (e.g. 0/1/«something», where «something» has been proposed as -1, «undefined»/«unknown»/«undecided», or a «shade of grey»), we would have had 9-bit bytes (a power of three).

    Is this true? 4 ternary digits give you really convenient base 12, which has a lot of desirable properties for things like multiplication and fixed point. Though I have no idea what ternary building blocks would look like, so it's hard to visualize potential hardware.

    • It is hard to say whether it would have been 9 or 12, now that people have stopped experimenting with alternative hardware designs. 9-bit-byte designs certainly did exist (and maybe even 12-bit designs too), although they were still based on Boolean logic.

      I have certainly heard the argument that ternary logic would have been the better choice had it won out, but that is history now, and we are left with the vestiges of ternary logic in SQL (NULL values, which are semantically «no value» / «undefined»).