Comment by pjdesno
6 hours ago
During an internship in 1986 I wrote C code for a machine with 10-bit bytes, the BBN C/70. It was a horrible experience, and the existence of the machine in the first place was due to a cosmic accident of the negative kind.
I wrote code on a DECSYSTEM-20; the C compiler was not officially supported. It had a 36-bit word and a 7-bit byte. Yep, when you packed bytes into a word there were bits left over.
And I was tasked with reading a tape with binary data in 8-bit format. Hilarity ensued.
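For anyone wondering what the pain looks like: 36 is not a multiple of 8, so 8-bit tape bytes straddle word boundaries (9 tape bytes fill exactly two 36-bit words). A minimal sketch of the repacking, in plain C with a 64-bit integer standing in for the 36-bit word; the names are illustrative, not from any real tape utility:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pack 8-bit tape bytes into 36-bit words, MSB first.
 * Every fifth byte straddles a word boundary. */
#define WORD_BITS 36
#define WORD_MASK ((1ULL << WORD_BITS) - 1)

/* nbytes 8-bit bytes -> ceil(nbytes*8 / 36) words; returns word count */
size_t pack36(const uint8_t *in, size_t nbytes, uint64_t *out)
{
    size_t nwords = 0;
    uint64_t acc = 0;   /* bit accumulator */
    int bits = 0;       /* number of valid bits currently in acc */

    for (size_t i = 0; i < nbytes; i++) {
        acc = (acc << 8) | in[i];
        bits += 8;
        if (bits >= WORD_BITS) {
            bits -= WORD_BITS;
            out[nwords++] = (acc >> bits) & WORD_MASK;
            acc &= (1ULL << bits) - 1;  /* keep only the leftover bits */
        }
    }
    if (bits > 0)  /* left-justify a ragged tail in the final word */
        out[nwords++] = (acc << (WORD_BITS - bits)) & WORD_MASK;
    return nwords;
}
```

Going the other way (unpacking 8-bit bytes back out of 36-bit words) is the same loop with the shifts reversed, which is roughly what the original task amounted to.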
Hah. Why did they do that?
Somehow this machine found its way onto The Heart of Gold in a highly improbable chain of events.
I programmed the Intellivision CPU (General Instrument's CP1610, not an Intel part), which had a 10-bit "decle". A wacky machine. It wasn't powerful enough for C.
I've worked on a machine with 9-bit bytes (and 81-bit instructions) and others with 6-bit ones; neither had a C compiler.
The Nintendo 64 had 9-bit RAM, but C viewed it as 8-bit. The 9th bit was only there for the RSP (the GPU).
I think the PDP-10 could have 9-bit bytes, depending on decisions made in the compiler. I notice it's hard to Google information about this, though; people say lots of confusing, conflicting things. When I search for the PDP-10 byte size, it says one C++ compiler chose to represent char as 36 bits.
10-bit arithmetic is actually not uncommon on FPGAs these days and is used in production in relatively modern applications.
10-bit C, however, ..........
How so? Arithmetic on an FPGA usually uses the minimum width that works, because anything wider consumes more resources than needed.
9-bit bytes are pretty common in block RAM, though, with the extra bit used for either ECC or user storage.
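You can model what such a minimum-width datapath does in C by masking after every operation; here's a sketch of 10-bit unsigned wraparound arithmetic (the helper names are made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Model a 10-bit unsigned datapath: keep only the low 10 bits,
 * the way a 10-bit FPGA adder or multiplier wraps on overflow. */
#define W10(x) ((uint16_t)((x) & 0x3FF))

static uint16_t add10(uint16_t a, uint16_t b) { return W10(a + b); }
static uint16_t mul10(uint16_t a, uint16_t b) { return W10(a * b); }
```

So 1023 + 1 wraps to 0, just as a 10-bit counter would. This is also roughly what a "10-bit C" would have to do for every `char` operation, which hints at why nobody wants to write that compiler.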
C's direct ancestor B was developed on the PDP-7, a machine with 18-bit words; C itself arrived with the 16-bit PDP-11.