Inaccurate Results From Floating Point Arithmetic (JavaScript)

This post was originally supposed to be a quick note about rounding in JavaScript. Further research into the scenario I describe below led me deep into the land of floating point arithmetic and loss of precision.

While running tests for a financial application, I realized some automatically calculated balance totals did not match. The application allows users to post transactions and requires that the sum of all line items match the amount entered for the transaction. Some debugging (console.log) showed that the automatically calculated sum differed from the entered amount by a fraction.

Rounding error.

No problem! ECMAScript standardizes a Math#round function. Good!

One problem, however: Math#round only rounds to the nearest integer, i.e. whole number. This won't do for most financial applications because fractions do exist in that space. Luckily, some googling around quickly revealed various methods for rounding to the desired precision. The one I preferred was the following:

Math.round(x * Math.pow(10,n)) / Math.pow(10, n)

where x is the number and n is the number of decimal places we want to retain. So for example, rounding 100.2079 to two decimal places would be:

Math.round(100.2079 * Math.pow(10,2)) / Math.pow(10, 2) 

or 100.21.

This appears to work just fine on first try, and indeed the answer is correct, but there is more to take into consideration here than one might think.

Let's define the formula above as a function called naiveRound (in TypeScript):

const naiveRound = (x: number, n: number) =>
  Math.round(x * Math.pow(10, n)) / Math.pow(10, n);


Let's apply this function to a few values in the Node REPL:

naiveRound(1,2) === 1 - Good!
naiveRound(100.005,2) === 100.01 - Great!
naiveRound(10.005,2) === 10.01 - Fantastic!
naiveRound(1.005,2) === 1 - What?

For all intents and purposes, rounding 1.005 to two decimal places should give 1.01; however, that is not the case here. Why? To understand that, we need to examine our naiveRound formula.

The expression x * Math.pow(10, n) is where the peculiarity begins. If you were to substitute 1.005 for x and 2 for n on your calculator, you would most likely get 100.5.

In JavaScript, the result is 100.49999999999999. That's an astounding 0.00000000000001 missing!
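
You can see this for yourself in the Node REPL; Number#toPrecision exposes more digits of the underlying double than the default formatting does:

(1.005).toPrecision(20); // '1.0049999999999998934'
1.005 * 100;             // 100.49999999999999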

That's because all numbers in JavaScript are double precision floating-point numbers as defined by the IEEE 754 standard. This format allows for the representation of numbers in the approximate range of 5 x 10^-324 to 1.79 x 10^308, fractions included. It has the side effect, however, of losing precision when carrying out math operations on numbers with fractional parts, the details of which are beyond the scope of this post.
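
Those limits are exposed as constants on the Number object:

Number.MIN_VALUE;        // 5e-324, the smallest positive representable value
Number.MAX_VALUE;        // 1.7976931348623157e+308
Number.MAX_SAFE_INTEGER; // 9007199254740991, more on this below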

Loss of precision is nothing new to programming languages that support floating point numbers. Python actually gave me the same result; interestingly however, Perl gave me 100.5.

This loss of 0.00000000000001 means that when Math#round rounds, it sees a .4 instead of a .5, thus rounding to 100 instead of 101.

The next part of our function, the division by Math.pow(10, n), then becomes 100/100, hence our result of 1.

It took me some time to wrap my head around this, as I initially thought the imprecise result was somehow due to how Math.round works, but that's not the case. In fact, I went on to add a round function to the @quenk/noni library that forces -0.5 to always round away from zero, but this only coincidentally addresses the problem.
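
For the curious, a minimal sketch of that idea (not the actual @quenk/noni implementation) looks like this:

const roundAwayFromZero = (x: number, n: number = 0) => {
  const factor = Math.pow(10, n);
  // Math.round rounds -0.5 up to -0; rounding the absolute value
  // and restoring the sign forces halves away from zero instead.
  return (Math.sign(x) * Math.round(Math.abs(x) * factor)) / factor;
};

roundAwayFromZero(-0.5); // -1, whereas Math.round(-0.5) gives -0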

The problem here is loss of precision. It turns out it's actually impossible to represent some numbers exactly in base two, much like 1/3 cannot be represented exactly in base 10. You can read more about that in this paper.
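
The classic demonstration of this is adding 0.1 and 0.2, neither of which has an exact base two representation:

0.1 + 0.2;         // 0.30000000000000004
0.1 + 0.2 === 0.3; // false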

So what's the solution?

There are a few:

Scaling

This is where you increase your numbers by a power of 10 equal to the number of places after the decimal point you wish to retain (your desired precision). So for example, 300.50 becomes 30050 and 1200 becomes 120000 if you wish your calculations to be precise to 2 digits.

In this scenario, a 15% interest on $1200.00 would be calculated as (15 x 120000) / 100, giving 18000 in scaled units; dividing out the scale factor of 100 yields the familiar 180.

Scaling like this ensures your numbers are all integers, eliminating the precision loss caused by arithmetic on fractions. Scaling, however, makes it more likely that you exceed Number.MAX_SAFE_INTEGER. When that happens, your number values may silently change, giving the wrong results.
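
Here is a rough sketch of the interest calculation using scaled integers, with a guard against leaving the safe range (the variable names are just illustrative):

// Work in hundredths: $1200.00 becomes 120000.
const principal = 120000;
const ratePercent = 15;

const product = ratePercent * principal; // 1800000

// Guard against silently exceeding the safe integer range.
if (!Number.isSafeInteger(product)) {
  throw new Error('Calculation exceeds Number.MAX_SAFE_INTEGER!');
}

const interest = product / 100; // 18000 scaled units, i.e. $180.00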

Tolerate The Loss

Alternatively, you can simply tolerate some level of precision loss. What's a few 1 cent pieces? They have been phased out anyway, right? Tolerating some level of loss is probably the least complicated solution here; however, it means equality operators can no longer be used on results. Instead, one has to check whether the difference between the values being compared is less than or equal to a tolerance threshold.

Example:


const tolerance = 0.30;
const expected = 20.30;

let actual = 10.10 + 10.20; // 20.299999999999997

if (Math.abs(expected - actual) <= tolerance) {
  // we good, proceed
} else {
  // too much failure, abort transaction
}

In the example above, we are willing to lose as much as 30 cents in our calculations, so the calculation will proceed. To be honest though, 30 cents is a lot to let disappear, and it will add up quickly when dealing with thousands of transactions. To allow for a lower tolerance amount, the actual amount could first be rounded off to about 4 places after the decimal point.

This makes the difference between the two numbers smaller (after applying Math#abs). Another point to note with this method is that it is probably easier to underflow past the smallest representable number than it is to exceed the largest safe integer when scaling. When that happens, your values may again silently change.
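
For example, rounding the computed sum to four decimal places first (reusing naiveRound from earlier) allows a much tighter tolerance:

const tolerance = 0.0001;
const expected = 20.30;

// 20.299999999999997 becomes 20.3 after rounding.
const actual = naiveRound(10.10 + 10.20, 4);

Math.abs(expected - actual) <= tolerance; // true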

Use Big Number

This is probably the most accurate solution listed here but can be burdensome for its own reasons.

Many modern programming languages have some notion of a big number class that allows the use of numbers outside the natively supported range. Python has decimal, Java has BigDecimal, and JavaScript has a few libraries such as bignumber.js. JavaScript also has the recently added BigInt type, but because it only handles integers and not floating point, it's not directly relevant here (except when scaling).
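
That said, if you do choose scaling, BigInt removes the safe integer ceiling entirely, at the cost of truncating division:

// The interest example again, with arbitrarily large scaled values.
const principal = 120000n; // $1200.00 in hundredths
const interest = (15n * principal) / 100n; // 18000n, i.e. $180.00
// Note: BigInt division discards any remainder.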

Using the bignumber.js module, one can do the following:


import BigNumber from 'bignumber.js';

new BigNumber(0.1).plus(0.2).toNumber();

which gives 0.3.

This approach is probably the safest, but because JavaScript does not support operator overloading, you have to use method calls to do your arithmetic. This can get awkward, but it's the price you pay for accuracy.
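
As a fuller sketch, here is how the line item check from the start of this post might look with bignumber.js (the item values are made up):

import BigNumber from 'bignumber.js';

const lineItems = ['10.10', '10.20'];

// Passing strings avoids baking double imprecision into the inputs.
const total = lineItems.reduce(
  (sum, item) => sum.plus(item),
  new BigNumber(0)
);

total.isEqualTo('20.30'); // true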

So which method to use? For the application mentioned earlier, I decided to round values and tolerate some level of loss. This should not be too much of a problem as the application does not do much arithmetic.

For scientific applications or applications dealing with trillions of dollars, a combination of scaling and a big number library may be more feasible. Then again, tolerating some level of loss may be the best option if it reduces the complexity of the application.

Floating point arithmetic is probably no stranger to any Computer Science syllabus. When you spend most of your development time routing data between clients and servers, however, it can be easy to forget some of these subtle details.

Here are some useful links related to arithmetic on floating point:

What Every Programmer Should Know About Floating Point
A Comparison Of Big Number Libraries In JavaScript
Stack Overflow thread discussing floating point