
Comment by FooBarBizBazz

3 hours ago

Whether doubles can silently get 80-bit accumulators is a controversial thing. Numerical analysis people like it. Computer science types seem not to, because it's unpredictable. I lean towards "we should have it, but it should be explicit", but this is not the most considered opinion. I think there's a legitimate reason why Intel included it in the x87, and why DSPs include it.

Numerical analysis people do not like it. Having _explicitly controlled_ wider accumulation available is great. Having compilers decide to do it for you, or not, in unpredictable ways is anathema.
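To make the distinction concrete, here is a minimal sketch of explicitly controlled wider accumulation (the function name and setup are illustrative, not from any particular library): the accumulator type is written in the source, so the extra precision is there on every compiler and at every optimization level, rather than depending on which values happen to stay in 80-bit registers.

```cpp
#include <vector>

// Explicit wider accumulation: sum float data in a double accumulator.
// Each addition rounds to double, deterministically, regardless of
// target ISA or optimization flags.
double sum_wide(const std::vector<float>& xs) {
    double acc = 0.0;        // deliberately wider than the element type
    for (float x : xs)
        acc += x;            // one predictable rounding per element
    return acc;
}
```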

  • It isn’t harmful, right? Just like getting a little extra accuracy from a fused multiply-add. It just isn’t useful if you can’t depend on it.

    • It can be harmful. With GCC, when compiling a 32-bit executable, using a std::map<float, T> can cause infinite loops or crashes in your program.

      This is because when you insert a value into the map, it is held at 80-bit precision, and that full precision is used when comparing the value being inserted during the traversal of the tree.

      After the float is stored in the tree, it is rounded to 32 bits.

      This can cause the element to be inserted at the wrong position in the tree, which breaks the invariants of the algorithm, leading to the crash or infinite loop.

      Compiling for 64 bits or explicitly disabling x87 float math (e.g. with GCC's -mfpmath=sse) makes this problem go away.

      I have actually had this bug in production and it was very hard to track down.
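
      Here is a minimal sketch of the failure pattern (a hypothetical reduction, assuming 32-bit x87 code generation such as g++ -m32 -mfpmath=387; the loop and constants are illustrative). The key computed at the call site can be held in an 80-bit register while the keys already stored in the tree have been rounded to 32 bits, so the comparator can order the same logical value inconsistently:

      ```cpp
      #include <cstdio>
      #include <map>

      int main() {
          std::map<float, int> m;
          // volatile keeps the division from being constant-folded, so
          // the quotient is really produced by the FPU at run time.
          volatile float denom = 3.0f;
          for (int i = 0; i < 100000; ++i) {
              // Under -m32 -mfpmath=387 this quotient may sit in an
              // 80-bit x87 register while std::map traverses the tree,
              // comparing it against node keys that were already
              // rounded to 32 bits.
              float key = (i + 1) / denom;
              m[key] = i;  // inconsistent comparisons can corrupt the tree
          }
          std::printf("%zu\n", m.size());
          return 0;
      }
      ```

      Whether this actually misplaces a node depends on how the compiler spills values; the same source is well defined under -mfpmath=sse or in a 64-bit build, which is why the bug disappears there.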


    • I suppose it could be harmful if you write code that unknowingly depends on it, and then something changes so it stops doing that.

    • If not done properly, double rounding (rounding to extended precision, then rounding again to working precision) can actually introduce a larger approximation error than rounding to nearest working precision directly. So it can actually make some numerical algorithms perform worse.
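
      A concrete instance, assuming long double is the x87 80-bit format (true on x86 Linux; MSVC maps long double to double, so the effect will not show there): take r = 1 + 2^-24 + 2^-60. Rounded directly to float, r is just above the midpoint between 1 and 1 + 2^-23, so it rounds up; rounded first to double, it lands exactly on the midpoint 1 + 2^-24, and the second rounding's ties-to-even rule then drops it to 1.

      ```cpp
      #include <cstdio>

      int main() {
          // r = 1 + 2^-24 + 2^-60, exactly representable in 80-bit extended.
          long double r = 1.0L + 0x1p-24L + 0x1p-60L;

          float direct = (float)r;    // single rounding: above the float
                                      // midpoint, rounds up to 1 + 2^-23
          double once  = (double)r;   // first rounding: 2^-60 is under half
                                      // a double ulp, gives exactly 1 + 2^-24
          float twice  = (float)once; // second rounding: exact tie, rounds
                                      // to even, i.e. down to 1.0f

          std::printf("direct: %a\n", (double)direct); // prints 0x1.000002p+0
          std::printf("twice:  %a\n", (double)twice);  // prints 0x1p+0
          return 0;
      }
      ```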