
The thing that strikes the most fear into me is seeing floating point used for real-world currency. Dear god. So many things can go wrong. I always use unsigned integers counting the number of cents. And if I gotta handle multiple currencies, then I'll use or make a wrapper class.
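A minimal sketch of the kind of wrapper I mean (Python, whole-cent precision; the names are just for illustration):

    class Money:
        """Amount in a single currency, stored as whole cents."""
        def __init__(self, cents, currency="USD"):
            self.cents = cents          # unsigned cents, per the comment above;
            self.currency = currency    # negatives would need extra care in __str__

        def __add__(self, other):
            assert self.currency == other.currency, "no silent cross-currency math"
            return Money(self.cents + other.cents, self.currency)

        def __str__(self):
            return f"{self.cents // 100}.{self.cents % 100:02d} {self.currency}"

    print(Money(1001) + Money(1))       # 10.02 USD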


Floating point math shouldn't be that scary. The rules are well defined in standards, and for many domains are the only realistic option for performance reasons.

I've spent most of my career writing trading systems that have executed 100's of billions of dollars worth of trades, and have never had any floating point related bugs.

Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.


You can certainly make trading systems that work using floating point, but there are just so many fewer edge cases to consider when using fixed point.

With fixed point and at least 2 decimal places, 10.01 + 0.01 is always exactly equal to 10.02. But with FP you may end up with something like 10.0199999999, and then you have to be extra careful anywhere you convert that to a string that it doesn't get truncated to 10.01. That could be logging (not great but maybe not the end of the world if that goes wrong), or you could be generating an order message and then it is a real problem. And either way, you have to take care every time you do that, as opposed to solving the problem once at the source, in the way the value is represented.
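A small Python illustration of that truncation hazard (10.0199999999 here is just a stand-in for the kind of float value described above):

    bid_cents, tick_cents = 1001, 1
    assert bid_cents + tick_cents == 1002   # integer cents: always exact

    price = 10.0199999999                   # a float that "should" be 10.02
    print(f"{price:.2f}")                   # 10.02 -- rounding while formatting is safe
    print(int(price * 100) / 100)           # 10.01 -- truncating silently loses a cent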

> Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.

In the case of HFT, this would have to depend very greatly on the particulars. I know the systems I write are almost never limited by arithmetical operations, either FP or integer.


I work on game engines, and the problem with floats isn't with small values like 10.01 but with large ones like 400,010.01; that's where the precision varies wildly.


The issue with floats is the mental model. The best way to think about them is like a ruler with many points clustered around 0 and exponentially fewer as the magnitude grows. Don't think of a float as a real value - assume that hardly any values are represented with perfect precision. Even "normalish" numbers like 10.1 are not actually in the set. When values are converted to strings, even in debuggers sometimes, they are often rounded, which throws people off further ("hey, the value is exactly 10.1 - it is right there in the debugger"). What you can count on, however, is that integers are represented with perfect precision up to a point (e.g. 2^53 - 1 for f64).
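For example, in Python you can see that integer limit directly:

    # integers are exact in an f64 up to 2**53; beyond that, gaps appear
    assert float(2**53 - 1) + 1 == float(2**53)   # still exact at the boundary
    assert float(2**53) == float(2**53 + 1)       # 2**53 + 1 rounds back down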

The other "metal model" issue is that associative operations in math. Adding a + (b + c) != (a + b) + c due to rounding. This is where fp-precise vs fp-fast comes in. Let's not talk about 80 bit registers (though that used to be another thing to think about).


Lua is telling me 0.1 + 0.1 == 0.2, but 0.1 + 0.2 != 0.3. That's 64-bit precision. The issue is not with precision, but with 1/10 being a repeating fraction in binary.


Not an issue in Scheme, Common Lisp, or even Forth when operating directly on rationals with custom words.


Not only that, but the precision loss accumulates. Multiply too many numbers with small inaccuracies and you wind up with numbers with large inaccuracies.


It depends on what you're doing. If your system is a linear regression on 30 features, you should probably use floating point. My recollection is fixed is prohibitively slower and with far less FOSS support.


I'm wondering if trading systems would run into the same issues as a bank or scientific calculation. You might not be making as many repeated calculations, and might not care if things are "off" by a tiny amount, because you're trading between money and securities, and the "loss" is part of your overhead. If a bank lost $0.01 after every 1 million transactions it would be a minor scandal.


Personally, I would be more concerned about something like determining whether the spread is more than a penny. Something like:

    if (ask - bid > 0.01) {
        // etc
    }
With floating point, I have to think about the following questions:

* What if the constant 0.01 is actually slightly greater than mathematical 0.01?
* What if the constant 0.01 is actually slightly less than mathematical 0.01?
* What if ask - bid is actually slightly greater than the mathematical result?
* What if ask - bid is actually slightly less than the mathematical result?

With floating point, that seemingly obvious code is anything but. With fixed point, you have none of those problems.

Granted, this only works for things that are priced in specific denominations (typically hundredths, thousandths, or ten thousandths), which is most securities.
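For comparison, here's a minimal sketch of the same check with integer ticks (assuming prices already arrive scaled to whole cents):

    bid_cents, ask_cents = 1001, 1003
    if ask_cents - bid_cents > 1:   # "more than a penny" is exact integer arithmetic
        pass                        # etc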


So the spread is 0.0099999 instead of 0.01. When will that difference matter?


It matters if the strategy is designed to do very different things depending on whether or not the offers are locked (when bid == ask, or spread is less than 0.01).

In this example, I’m talking about securities that are priced in whole cents. If you represent prices as floats, then it’s possible that the spread appears to be less (or greater) than 0.01 when it’s actually not, due to the inability of floats to exactly represent most real numbers.


But I'm still not understanding the real-world consequences. What will those be, exactly? Any good examples or case studies to look at?


Many trading strategies operate on very thin margins. Most of the time it's less than one cent per share, often as little as a tenth of a cent per share or less.

A different example: let's say that you're trying to buy some security, and you've determined that the maximum price you can pay and still be profitable is 10.01. If you mistakenly use an order price of 10.00, you'll probably get fewer shares than you wanted, possibly none. If you mistakenly use a price of 10.02, you may end up paying too much and then that trade ends up not being profitable. If you use a price of 10.0199999 (assuming it's even possible to represent such a price via whatever protocol you're using), either your broker or the exchange will likely reject the order for having an invalid price.


I can imagine something like: if (bid ask blah blah) { send order to buy 10 million of AAPL; }


All your price field messages are sent to the exchange and back via fixed point, so you are using fixed point for at least some of the process (unless you're targeting those few crypto exchanges that use fp prices).

If you need to be extremely fast (like fpga fast), you don't waste compute transforming their fixed point representation into floating.


Sure, string encodings are used for most APIs and ultra HFT may pattern match on the raw bytes, but for regular HFT if you're doing much math, it's going to be floating point math.


We might have different definitions of "HFT"


> Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.

May I ask why? (generally curious)


For starters, it's giving up a lot of performance, since fixed-point isn't accelerated by hardware like floating-point is.


Isn't fixed point just integer?


Yes, but you're not going to have efficient transcendental functions implemented in hardware.


Ah okay, fair enough. But what sort of transcendental functions would you use for HFT?

I guess I understood GGGGP's comment about using fixed point for interacting with currency to be about accounting. I'd expect floating point to be used for trading algorithms, but that's mostly statistics and I presume you'd switch back to fixed point before making trades etc.


Yes, integer combined with bit-shifts.


The problem with fixed point is in its, well, fixed point. You assign a fixed number of bits to the fractional part of the number. This gives you the same absolute precision everywhere, but the relative precision (distance to the next highest or lowest number) is worse for small numbers - which is a problem, because those tend to be pretty important. It's just overall a less efficient use of the bit encoding space (not just performance-wise, but also in the accuracy of the results you get back). Remember that fixed point does not mean absence of rounding errors, and if you use binary fixed point, you still cannot represent many decimal fractions such as 0.1.
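A toy Python illustration of that relative-precision point, assuming two decimal places (a fixed scale of 1/100):

    SCALE = 100                        # two decimal places everywhere
    big = round(123456.78 * SCALE)     # 12345678 -> negligible relative error
    small = round(0.014 * SCALE)       # 1 -> stored as 0.01, ~29% relative error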


With fixed point you either scale it up or use rationals.


Fundamentally there is uncertainty associated with any physical measurement, which is usually proportional to the magnitude being measured. As long as the floating-point error is << this uncertainty, the results are equally predictive. Floating-point numbers bake these assumptions in.


It's the front of house/back of house distinction. Front of house should use fixed point, back of house should use floating point. Unless you're doing trading, you want really strict rules with regards to rounding and such, which are going to be easier to achieve with fixed point.


I don't think it is that clear. The split, I think, is between calculating settlement amounts, which lead to real transfers of money and so should be fixed point, and risk, pricing (thus trading) and valuation, which use models that need many calculations and so need to be floating point.


How do you handle the lack of commutativity? I've always wondered about the practical implications.


I asked an ex-Bloomberg coder this question once after he told me he used floating points to represent currency all the time, and his response was along the lines of “unless you have blindingly-obvious problems like doing operations on near-zero numbers against very large numbers, these calculations are off by small amounts on their least-significant digits. Why would you waste the time or the electricity dealing with a discrepancy that’s not even worth the money to fix?”


Floating-point is completely commutative (ignoring NaN payloads).

It's the associativity law that it fails to uphold.


Nitpick: FP arithmetic is commutative. It's not associative.


I inherited systems that trade real world money using f64. They work surprisingly well, and the errors and bugs are almost never due to rounding. Those that are also have easy fixes. So I'm always baffled by this "expert opinion" of using integers for cents. It is pretty much up there with "never use python pickle it is unsafe" and "never use http, even if the program will never leave the subnet".


you can't accurately represent 10 cents with floats, 0.1 is not directly representable. Same with 1 cent, 0.01. Seems like if you do any significant math on prices you should run into rounding issues pretty quickly?


no. Float64 has 16 digits of precision. Therefore even if you're dealing with trillions of dollars, you have accuracy down to the thousandth of a cent.


You might want to re-study this topic.

The decimal number 0.1 has an infinitely repeating binary fraction.

Consider how 1/3 in decimal is 0.33333… If you truncate that to some finite prefix, you no longer have 1/3. Now let’s suppose we know, in some context, that we’ll only ever have a finite number of digits — let’s say 5 digits after the decimal point. Then, if someone asks “what fraction is equivalent to 0.33333?”, then it is reasonable to reply with “1/3”. That might sound like we’re lying, but remember that we agreed that, in this context of discussion, we have a finite number of digits — so the value 1/3 outside of this context has no way of being represented faithfully inside this context, so we can only assume that the person is asking about the nearest approximation of “1/3 as it means outside this context”. If the person asking feels lied to, that’s on them for not keeping the base assumptions straight.

So back to floating point, and the case of 0.1 represented as 64 bit floating point number. In base 2, the decimal number 0.1 looks like 0.0001100110011… (the 0011 being repeated infinitely). But we don’t have an infinite number of digits. The finite truncation of that is the closest we can get to the decimal number 0.1, and by the same rationale as earlier (where I said that equating 1/3 with 0.33333 is reasonable), your programming language will likely parse “0.1” as a f64 and print it back out as such. However, if you try something like (a=0.1; a+a+a) you’ll likely be surprised at what you find.


> you’ll likely be surprised at what you find.

I very much doubt it. My day job is writing symbolic-numeric code. The result of 0.1+0.1+0.1 != 0.3, but for rounding to bring it up to 0.31 (i.e. rounding causing an error of 1 cent), you would need to accumulate at least .005 error, which will not happen unless you lose 13 out of your 16 digits of precision, which will not happen unless you do something incredibly stupid.


At the end of a long chain of calculations you're going to round to the nearest 0.01. It will be a LONG time before errors caused by double-precision floats cause you to gain or lose a penny.


I'm curious where you got this idea from because it is trivially disprovable by typing 0.1 or 0.01 into any python or JS REPL?


Do you believe that the way the REPL prints a number is the way it's stored internally? If so, explaining this will be a fun exercise:

    $ python3
    Python 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = 0.1
    >>> a + a + a
    0.30000000000000004
By way of explanation, the algorithm used to render a floating point number to text used in most languages these days is to find the shortest string representation that will parse back to an identical bit pattern. This has the direct effect of causing a REPL to print what you typed in. (Well, within certain ranges of "reasonable" inputs.) But this doesn't mean that the language stores what you typed in - just an approximation of it.


Oddly, tcl prints 0.30000000000000004 while jimtcl prints 0.3, while with 1/7 both crap out and round it to a simple 0.

Edit: Now it does it fine after inputting floats:

    puts [ expr { 1.0/7.0 } ]

Eforth on top of Subleq, a very small and dumb virtual machine:

     1 f 7 f f/ f.
     0.143 ok

Still, using rationals where possible (and mod operations otherwise) gives a great 'precision', except for irrationals.


:facepalm: my bad, I completely missed the more rational interpretation of OP's comment...

I interpreted "directly representable" as "uniquely representable", all < 15 digit decimals are uniquely represented in fp64 so it is always safe to roundtrip between those decimals <-> f64, though indeed this guarantee is lost once you perform any math.


https://docs.python.org/3/tutorial/floatingpoint.html

Stop at any finite number of bits, and you get an approximation. On most machines today, floats are approximated using a binary fraction with the numerator using the first 53 bits starting with the most significant bit and with the denominator as a power of two. In the case of 1/10, the binary fraction is 3602879701896397 / 2**55 which is close to but not exactly equal to the true value of 1/10.

Many users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display:

  >>> 0.1
  0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:

  >>> 1 / 10
  0.1
That being said, double should be fine unless you're aggregating trillions of low cost transactions. (API calls?)


For anyone curious about testing it themselves and/or wanting to try other numbers:

  >>> from decimal import Decimal
  >>> Decimal(0.1)
  Decimal('0.1000000000000000055511151231257827021181583404541015625')


You can make money modeling buy/sell decisions in floats and then having the bank execute them, but if the bank models your account as a float and loses a cent here and there, it will be sued into bankruptcy.


A double-precision float has ~16 decimal digits of precision. Which means as long as your bank account is less than a quadrillion dollars, it can accurately store the balance to the nearest cent.


You will not lose a cent here and there just by using float64, for the range of values that banks deal with. For added assurance, just round to the nearest cent after each operation.


You can round to the nearest .0078125 (1/128) but not to the nearest .01 (1/100) because that number cannot be represented in float64.

To round to the nearest cent, you would need to make cents your units (i.e. the quantity "1 dollar" would be represented as 100 instead of 1.0).


Within the general context of this discussion, "cannot be represented" is a red herring.

You don't need to have a representation of the exact number 0.1 if you can tolerate errors after the 7th decimal (and it turns out you can). And 0.1+0.1+0.1 does not have to be comparable with 0.3 using operator==. You have an is_close function for that. And accumulation is not an issue because you have rounding and fma for that.
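For instance, in Python, with the standard math.isclose:

    import math
    print(0.1 + 0.1 + 0.1 == 0.3)               # False
    print(math.isclose(0.1 + 0.1 + 0.1, 0.3))   # True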


Ok, but in the context of this thread, it is important. Remember that I'm replying to the statement "For added assurance, just round to the nearest cent after each operation." That is misleading advice and the behavior of floats is just context for why.

First of all, a lot of languages don't include arbitrary rounding in their math libraries at all, only having rounding to integers. Second, in the docs of Python, which does have arbitrary rounding, it specifically says:

    Note: The behavior of round() for floats can be surprising: for example, round(2.675, 2) gives 2.67 instead of the expected 2.68. [...]
Thus I think what I said stands: you cannot round to nearest cent reliably all the time, assuming cent means 0.01. The only rounding you can sort of trust is display rounding because it actually happens after converting to base 10. It's why 2.675 will print as 2.675 in Python even though it won't round as you'd expect. But you'd only do that once at the end of a chain of operations.

In a lot of cases, errors like these don't matter, but the key point is that if the errors don't matter, then they don't need to be "assured" away by dubious rounding either.


> First of all, a lot of languages don't include arbitrary rounding in their math libraries at all, only having rounding to integers.

You do the simple, obvious and correct thing: multiply by 100, round to int, convert to double, divide by 100. It does not matter whether the final division by 100 results in an exact value. (You might argue this is inefficient but it's not a correctness problem.)
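That suggestion as a short Python sketch (note that Python's round() uses round-half-to-even, so confirm it matches whatever rounding rule you're actually required to follow):

    def round_to_cent(x):
        # multiply by 100, round to int, divide by 100, as described above
        return round(x * 100) / 100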

> for example, round(2.675, 2) gives 2.67 instead of the expected 2.68.

You are not going to execute round(2.675, 2) if you follow my advice of rounding after every operation. Because the error will never reach 0.005. Your argument is moot.


You can certainly encounter 2.675 as a multiplier, even if you wouldn't have it as a balance.

It doesn't matter that some error starts off way less than 0.005 if rounding then amplifies it. We can find two two-decimal-place numbers that multiply to exactly 2.675 in the reals, but whose floating-point product differs from the float closest to 2.675 by enough to affect rounding:

    abs(      2.14 * 1.25     -       2.675    ) < 0.005
    abs(round(2.14 * 1.25, 2) - round(2.675, 2)) > 0.01
And regarding integer vs fractional rounding, we can see different results for what is nominally the same computation, depending on where the decimal point is:

    abs(round(1.0  * 1.5, 0) - 2.0 ) < 0.1
    abs(round(0.01 * 1.5, 2) - 0.02) > 0.001
Now, I never said that floats were bad. I am only saying that rounding them doesn't work the way one might expect, and shouldn't be done any more than necessary; in many cases, it's not necessary at all.


floats are also a red herring. Maybe you can continue with someone else.


I've been having an interesting challenge relating to this recently. I'm trying to calculate costs for LLM usage, but the amounts of money involved are so tiny. Gemini 1.5 Flash 8B is $0.0375 per million tokens!

Should I be running my accounting system on units of 10 billionths of a dollar?


Fixed point Decimal is your friend here. I’m guessing you buy tokens in increments of 1,000,000 so it isn’t too much of an issue to account for. You can then normalise in your accounting so 1,000,000 is just “1 unit,” or you can just account in increments of 1,000,000 but that does start looking weird (but might be necessary!)


No, billing happens per-token. It’s entirely necessary to use billionths of a dollar here, if you don’t use floating point.


In which case, I’d look at this thread https://news.ycombinator.com/item?id=44145263


Accounting happens on the units people pay, not the ones that generate expenses.

But you probably should run your billing in fixed point or floating decimals with a billionth of a dollar precision, yes. Either that or you should consolidate the expenses into larger bunches.
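A small sketch of that using Python's decimal module (the names and the choice to accumulate per-token costs at full precision are just illustrative assumptions):

    from decimal import Decimal

    PRICE_PER_MTOK = Decimal("0.0375")                # dollars per million tokens
    PER_TOKEN = PRICE_PER_MTOK / Decimal(1_000_000)   # exact: Decimal('3.75E-8')

    cost = PER_TOKEN * 123_456                        # accumulate at full precision
    invoice = cost.quantize(Decimal("0.01"))          # round to cents only when billing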


You're better off representing values as rationals; a ratio between two different numbers. For example, 0.0375 would be represented as 375 over 10000, or 3 over 80
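In Python, for instance, the standard fractions module does this directly:

    from fractions import Fraction

    rate = Fraction(375, 10000)       # $0.0375 per million tokens, stored exactly
    print(rate)                       # 3/80
    print(rate / 1_000_000 * 1234)    # exact cost of 1,234 tokens, as a fraction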


From Forth, here's how I'd set the rationals:

    : gcd begin dup while tuck mod repeat drop ;
    : lcm 2dup * abs -rot gcd / ;
    : reduce 2dup gcd tuck / >r / r> ;
    : q+ rot 2dup * >r rot * -rot * + r> reduce ;
    : q- swap negate swap q+ ;
    : q* rot * >r  * r> reduce ;
    : q/ >r * swap r> * swap reduce ;
Example: to compute 70 * 0.25 = 35/2

    70 1 1 4 q* reduce .s 35 2 ok

On stack-managing words like 2dup, rot and such: these are easily looked up via Google/DDG or in any Forth with the words "see" and/or "help".

As a hint, q- swaps the top two numbers on the stack (which compose a rational), negates the numerator that is now on top, and swaps them back. Then it calls q+.

So, 2/5 - 3/2 = 2/5 + -3/2.


Sounds hard to model in SQLite.


Two columns?


Convert to money as late as possible


This is surely the right answer: simply count the number of tokens used, and do the billing reconciliation as a separate step.

As an added benefit, it makes it much easier to deal with price changes.



I've used Aurora Units to do this. You can define the dollars dimension, and then all the nano/micro/whatever scaling comes with it.


For far too many years I maintained an inherited billing system that used floats for all calculations and then rounded up or down. It also did some calculations in JS and mirrored them on the Python backend, so "just switch to Decimal" wasn't an easy change to make…


I've found fear of the use of floating-point in finance to be a good litmus test for how knowledgeable people are about floating-point. Because as far as I can tell, finance people almost exclusively use (binary) floating-point [1], whereas a lot of floating-point FUD focuses on how disastrous it is for finance. And honestly, it's a bit baffling to me why so many people seem to think that floating-point is disastrous.

My best guess for the latter proposition is that people are reacting to the default float printing logic of languages like Java, which display a float as the shortest base-10 number that would correctly round to that value, which extremely exaggerates the effect of being off by a few ULPs. By contrast, C-style printf specifies the number of decimal digits to round to, so all the numbers that are off by a few ULPs are still correct.
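For instance, in Python:

    x = 0.1 + 0.2
    print(f"{x:.2f}")   # printf-style, fixed digits: 0.30
    print(repr(x))      # shortest round-trip: 0.30000000000000004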

[1] I'm not entirely sure about the COBOL mainframe applications, given that COBOL itself predates binary floating-point. I know that modern COBOL does have some support for IEEE 754, but that tells me very little about what the applications running around in COBOL do with it.


The answer is accounting. In accounting you want predictability and reproducibility more than anything, and you are prepared to throw away precision on that altar.

If you're summing up the cost of items in a webshop, then you're in the domain of accounting. If the result appears to be off by a single cent because of a rounding subtlety, then you're in trouble, because even though no one should care about that single cent, it will give the appearance that you don't know what you're doing. Not to mention the trouble you could get in for computing taxes wrong.

If, on the other hand, you're doing financial forecasting or computing stock price targets, then you're not in the domain of accounting, and using floating point for money is just fine.

I'm guessing from your post that your finance people are more like the latter. I could be wrong though - accountants do tend to use Excel.


To get the right answers for accounting, all you have to do is pay attention to how you're doing rounding, which is no harder for floating-point than it is for fixed-point. Actually, it might be slightly easier for floating-point, since you're probably not as likely to skip over the part of the contract that tells you what the rounding rules you have to follow are.


Agreed. To do accounting, you need to employ some kind of discipline to ensure that you get rounding right. So many people erroneously believe that such a discipline has to be based on fixed point or decimal floating point numbers. But binary floating point can work just fine.


I agree overall but my take is that it shows more ignorance about the domain of finance (or a particular subdomain) than it does about floating-point ignorance.

It’s really more of a concern in accounting, when monetary amounts are concrete and represent real money movement between distinct parties. A ton of financial software systems (HFT, trading in general) deal with money in a more abstract way in most of their code, and the particular kinds of imprecision that FP introduces doesn’t result in bad business outcomes that outweigh its convenience and other benefits.


FP does not introduce imprecision. Quite the contrary: The continuous rounding (or truncation) triggered by using scaled integers is what introduces imprecision. Whereas exponent scaling in floating point ensures that all the bits in the mantissa are put to good use.

It's a trade-off between precision and predictability. Floating point provides the former. Scaled integers provide the latter.


I was using imprecision in a more general and less mathematical sense than the way you’re interpreting it, but yes this is a good point about why FP is useful in many financial contexts, when the monetary amount is derived from some model.


Wait until you learn that Excel calculates everything using floating-point, and doesn't even fully observe IEEE 754.

https://learn.microsoft.com/en-us/office/troubleshoot/excel/...

(It nevertheless happens to work just fine for most of what Excel is used for.)


Wouldn't it be better to use a decimal type?


This is what’s called a fixed point decimal type. If you need variable precision, then a decimal type might be a good idea, but fixed point removes a lot of potential foot guns if the constraints work for you.


I meant a 128-bit fixed point decimal type (like C#'s). I don't understand why the parent commenter (top voted comment?) used unsigned integers to track individual cents. Why roll your own decimal type?

Using arbitrary precision doesn't make sense if the data needs to be stored in a database (for most situations at least). Regardless, infinite precision is magical thinking anyway: try adding Pi to your bank account without loss of precision.


the C# decimal type is not fixed point, it's a floating-point implementation, but one that uses a base-10 exponent instead of a base-2 one like IEEE 754 floats.

Fixed point is a general technique that is commonly done with machine integers when the necessary precision is known at compile time. It is frequently used on embedded devices that don't have a floating point unit to avoid slow software based floating point implementations. Limiting the precision to $0.01 makes sense if you only do addition or subtraction. Precision of $0.001 (Tenths of a cent also called mils) may be necessary when calculating taxes or applying other percentages although this is typically called out in the relevant laws or regulations.


Good to know. I'm in a scientific domain, so I haven't used it previously.


Fun fact: there is a decimal type on some hardware. I believe PowerPC, and presumably mainframes. You can actually use it from C, although it's a software implementation on most hardware. IEEE 754-2008, if you are curious.


IEEE754 defines a floating point decimal type. What are your opinions on that?


It’s very cool, but not present on most hardware. Fixed point is a lot simpler though if you are dealing with something with inherent granularity like currency


Wrappers are good even when not dealing with multiple currencies, since in many places some transactions are in fractions of cents, so depending on the use case you may need to push that decimal a few places out.

I always have a wrapper class to put the logic of converting to whole currency units when and if needed, as well as when requirements change and now you need 4 digits past the decimal instead of 2, etc.


One of the things I always appreciate about the crypto community is that you do not have to ask what numeric type is being used for money, it is always 8-digit fixed-point. No floating-point rounding errors to be found anywhere.


Correction: Bitcoin is 8-digit fixed-point. But Lightning is 10, IIRC. Other currencies have different conventions. Still, it's fixed within a given system and always fixed-point. As far as I'm aware, there are no floating-point cryptocurrencies at all, because it would be an obvious exploit vector - keep withdrawing 0.000000001 units from your account that has 1.0 units.


How does this avoid rounding error? Division and multiplication and still result in nonrepresentable numbers, right?


It is not hard to remember what int division is about, when your types are ints in code. It also comes up almost never, and isn't what floating-point rounding error means. You aren't multiplying money 99% of the time, and when you are, you don't care about exacting precision (e.g. 20% discount). Floating-point rounding error, on the other hand, is about how 0.1 + 0.2 != 0.3.


How do you store negative numbers?


Maybe as in accounting, one column for credits, one for debits?


You use a signed integer type, so you just store a negative number.

You can think of fixed point as equivalent to IEEE 754 floats with a fixed exponent and a two's complement mantissa instead of a sign bit.
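A trivial sketch, assuming integer cents:

    balance_cents = -1_250     # -$12.50, a plain signed integer
    balance_cents += 2_000     # deposit $20.00
    print(balance_cents)       # 750, i.e. $7.50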



