There’s a past thread about floating-point arithmetic as it affected multiplication.

I’ve run into a similarly curious result simply totaling a field of numbers with two decimal places. The field is set to float and has no output pattern that would hide more digits after the decimal.

601.49+198.44 = 799.9300000000001?

And the grand total works out to a dozen places after the decimal.

It’s easy enough to work around, but surprising to me.
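For what it’s worth, this isn’t specific to Panorama X — the same sum in any environment that uses IEEE 754 64-bit doubles (Python shown here as a stand-in) reproduces the result:

```python
# The same addition, performed with IEEE 754 64-bit doubles.
total = 601.49 + 198.44

print(total)                        # 799.9300000000001
print(total == 799.93)              # False: not the exact decimal value
print(abs(total - 799.93) < 1e-9)   # True: off by far less than a billionth
```

So the total is wrong only in the very last representable digit, which is why rounding the display (or the value) to two places makes the issue invisible.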

I think you have an old copy of Panorama X there. Recent versions show 15 significant figures, not 16. 16 is right at the limit of the precision you can expect from a 64-bit floating point number. In some numeric ranges, 64 bits is more than enough; in others it’s not, and you can expect an error in that 16th digit.
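Those two limits — 53 binary digits, 15 decimal digits always safe — are exactly what a language built on 64-bit doubles reports about itself. In Python, for example:

```python
import sys

# The IEEE 754 double limits mentioned above, as Python reports them:
print(sys.float_info.mant_dig)  # 53 binary digits in the mantissa
print(sys.float_info.dig)       # 15 decimal digits always survive a round trip
```

A 16-digit decimal will often survive the trip through binary and back, but not always — hence displaying 15.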

My favorite analogy stems from my sophomore year in high school. They were adding a wing onto the building, and one wall, and all the lockers on that wall, had been demolished. This left us with fewer lockers than there were students. Some students had to share a locker.

Two students, who share a locker, have their geometry books in that locker. One of them needs a geometry book, and grabs one from the locker. She might get her own, or she might get her locker mate’s.

Floating point numbers have 53 binary significant figures. In some ranges of numbers, there are more 16-digit decimal numbers than there are 53-digit binaries, so some of those decimals have to share a binary locker. The result you get out may be the number you want, or it might be its locker mate.
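A minimal illustration of the locker sharing, using Python doubles: near 10, the 53-bit binaries are spaced about 1.78 × 10⁻¹⁵ apart, which is wider than the 10⁻¹⁵ spacing of 16-significant-digit decimals, so these two distinct decimals land in the same locker.

```python
# Two different 16-significant-digit decimals that round to the
# same 64-bit binary float -- they share a locker:
a = float("9.999999999999998")
b = float("9.999999999999999")

print(a == b)   # True: one binary value stands in for both decimals
print(a)        # 9.999999999999998 -- the locker mate that answers for both
```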

The original thread that you linked to links to a wealth of information about the mysteries of floating point arithmetic.

The problem is that neither 601.49 nor 198.44 can be exactly represented in IEEE 64-bit floating point format. So approximations are used, and you get an approximate result. But it’s very close.
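You can see the approximation directly by converting the stored doubles to exact decimals — Python’s `decimal` module shows the true binary values, which straddle the intended ones (one lands a hair high, the other a hair low):

```python
from decimal import Decimal

# Decimal(float) expands the exact value the double actually stores:
print(Decimal(601.49))   # a few quadrillionths above 601.49
print(Decimal(198.44))   # a few quadrillionths below 198.44

# Neither stored value equals its decimal namesake:
print(Decimal(601.49) == Decimal("601.49"))   # False
```

Add two approximations and you get an approximate sum — which is all the 799.9300000000001 result is.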