Hacker News

Personally I think one of the hardest things to wrap your head around is how it's power-of-two fractions that can be represented exactly, not power-of-10 fractions. In other words, 0.5 can be represented exactly as IEEE floating point, but 0.1 cannot.

This is not inherent to floating point; a floating point representation that was based on decimal digits instead of binary ones would not have this issue.

It's not too hard to understand limited precision and rounding; we deal with both all the time. The part that's hard to understand is that you have to perform rounding simply to put a number like "0.1" into a floating-point value.

Specifically, 0.1 as an IEEE float (single precision) is represented as:

  val = 0x3dcccccd
  sign = 0
  significand = 0x4ccccd (1.6000000)
  exponent = 0x7b (-4)
So 0.1 is internally represented as 1.6000000 * 2^-4, while 0.5 has the much more understandable representation 1 * 2^-1.
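If it helps to see the bit fields extracted programmatically, here's a small Python sketch (the function name dump_float is invented here; it packs the value as a single-precision float with the standard struct module and pulls the fields apart):

```python
import struct

def dump_float(x):
    """Decompose x, stored as an IEEE 754 single-precision float,
    into its sign, exponent, and significand fields."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    biased_exp = (bits >> 23) & 0xff    # 8 exponent bits, bias 127
    frac = bits & 0x7fffff              # 23 fraction bits
    significand = 1 + frac / 2**23      # implicit leading 1 (normal numbers)
    print(f"val         = 0x{bits:08x}")
    print(f"sign        = {sign}")
    print(f"significand = 0x{frac:06x} ({significand})")
    print(f"exponent    = 0x{biased_exp:02x} ({biased_exp - 127})")
    return significand * 2.0 ** (biased_exp - 127)

dump_float(0.1)
```

Running it on 0.1 shows val = 0x3dcccccd and exponent -4, matching the dump above -- but the significand comes out as 1.600000023841858, not exactly 1.6, because only the nearest representable value can be stored.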

This is made even more confusing by the fact that rounding performed on output will make it appear that 0.1 has been exactly represented in many cases.

So we have a data type that can't exactly represent some numbers that look very simple in decimal, that appears to represent them exactly when you print them out, and that then compares not-equal when you actually test for equality. No wonder people get confused!
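A quick Python illustration of that trap (Python's repr prints the shortest decimal string that rounds back to the same double, so 0.1 looks exact until you force more digits):

```python
x = 0.1 + 0.2
print(x)              # 0.30000000000000004
print(x == 0.3)       # False -- the rounding errors don't line up
print(0.1)            # 0.1 -- looks exact...
print(f"{0.1:.20f}")  # ...but forcing 20 digits shows 0.10000000000000000555
```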



It's easier to explain that decimal 0.1 in binary is 0.000110011001100110011001100110011001100110011001100110011001100110011001100... It's impossible to represent exactly with any combination of significand * 2^exponent in a finite-memory binary system. Just like 1/3 can't be written down exactly in the decimal system.
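You can generate that repeating expansion yourself; a small sketch using Python's Fraction type, so the arithmetic stays exact (the helper binary_digits is made up for illustration):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary fraction digits of a rational 0 <= x < 1."""
    out = []
    for _ in range(n):
        x *= 2
        bit = 1 if x >= 1 else 0   # next digit is the integer part
        out.append(str(bit))
        x -= bit
    return ''.join(out)

print('0.' + binary_digits(Fraction(1, 10), 48))
```

The digits settle into the repeating block 0011 and never terminate, which is exactly why a finite significand has to round.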


Sure, but it's not obvious which numbers can be represented exactly and which can't, and since the numbers get rounded on output to look exact, you can easily be misled into thinking that 0.1 can be represented.


Any rational number can be represented as a fraction p/q, where p and q are integers with no common factors (reduced fraction).

If that fraction is representable as an exact "decimal" number in base b that means that p/q = m/b^n = m * b^(-n), where m and n are integers too. For example, 3/4 = 75/10^2 or 0.75 in decimal, 3/2^2 or 0.11 in binary.

That means p * b^n = m * q. We said p and q have no common factors, so all of q's factors go into b^n. That means that q divides b^n, or in other words:

p/q is representable as an exact "decimal" number in base b if and only if q divides some power of b.

For example, 0.1 is 1/10. But there is no power of 2 which is divisible by 10, so 0.1 is not exactly representable in binary.

As another example, 1/3=0.33333.... because there is no power of 10 divisible by 3.
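That criterion is easy to turn into code; here's a sketch (the function name is invented) that reduces p/q and then checks whether every prime factor of the denominator divides the base:

```python
from math import gcd

def exactly_representable(p, q, base):
    """True iff p/q has a finite expansion in the given base,
    i.e. the reduced denominator divides some power of the base."""
    q //= gcd(p, q)          # reduce the fraction
    g = gcd(q, base)
    while g > 1:             # strip out factors shared with the base
        while q % g == 0:
            q //= g
        g = gcd(q, base)
    return q == 1

print(exactly_representable(1, 10, 10))  # True:  0.1 in decimal
print(exactly_representable(1, 10, 2))   # False: 0.1 in binary
print(exactly_representable(3, 4, 2))    # True:  0.11 in binary
print(exactly_representable(1, 3, 10))   # False: 0.333...
```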


> Just like 1/3 in decimal system can't be written down accurately.

Or like integers in base pi! ;)


You mean, some integers? 0 is perfectly representable. And so might be 1.


Am I missing something, or isn't 1.6 * 2^-4 = 1.6 / 16 = 0.1? Seems exact to me.


Yes, 1.6 * 2^-4 would be exactly 0.1. But 1.6 itself can't be stored exactly: the significand actually stored is between 1.6 and 1.6000000000000001 (in the double case) -- you have to round it to display it as decimal.

I'm working on a program right now to give a detailed dump of a double/float's value, to make this as clear as possible (and I'm learning a lot in the process).

EDIT: got an initial revision out there; check out: https://github.com/haberman/dumpfp



