Monday, March 16th, 2009

## Crock on Floating Points in JavaScript

Over on the YUI blog, Douglas Crockford wrote a piece discussing why JavaScript’s current numeric type is problematic:

JavaScript has a single number type: IEEE 754 Double Precision floating point… Unfortunately, a binary floating point type has some significant disadvantages. The worst is that it cannot accurately represent decimal fractions, which is a big problem because humanity has been doing commerce in decimals for a long, long time. As a consequence, 0.1 + 0.2 === 0.3 is false, which is the source of a lot of confusion.
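The claim is easy to check in any JavaScript console:

```javascript
// 0.1 and 0.2 have no exact binary representation, so their sum
// picks up a tiny rounding error and strict equality fails.
var sum = 0.1 + 0.2;
console.log(sum);         // 0.30000000000000004
console.log(sum === 0.3); // false
```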

While you might think it would therefore be a good idea to add a new numeric type to JavaScript that can accurately represent decimal fractions, Doug claims there’s significant advantage in just having one type:

Having a single number type is one of JavaScript’s best features. Multiple number types can be a source of complexity, confusion, and error. A single type is simplifying and stabilizing.

So what’s the solution?

There is work on a decimal flavor of IEEE 754, and we looked at incorporating it into the next edition of ECMAScript. Unfortunately, adding a second number type to a language having only one can do a lot of violence to the language, so we deferred consideration of the decimal type to a future edition. Also, the proposed decimal type is extremely slow in execution, and to my eye is much too complicated in its specification.

In languages with richer numeric types, you can always tell when a developer is working on their first e-commerce application: they use floating-point values instead of integers (or accurate decimal helpers, like BigDecimal in Java) to perform currency-related math.

What do you think the right solution is for JavaScript?

what about `Number((0.1 + 0.2).toFixed(1)) === 0.3;` :D

I recently did a large financial app in JavaScript, and it was a real pain. I had to think about each operation and decide how to deal with it.

While I love BCD, I’ve rarely had it available to me (Atari BASIC did BCD). I hate the idea of adding a new type.

Why *anyone* would design a modern, interpreted language without an integer type is beyond me. What was the original rationale?

I’ll probably get peppered for this, but I am 100% certain that if you do money maths in JS you’ve got *WAAAAYYYYY* more serious problems than floating-point rounding errors…!

Man, if only we had better getter/setter support. We’re almost there. It wouldn’t work as a universal solution, but you could save yourself a lot of headache in some object oriented code by adding a setter that would automatically round to the sane number you meant to use (if you know, for instance, that you’re dealing with currencies).

Something like (I hope this formats right):

```javascript
var sanePrice = function() {
    var rawPrice = 0;

    this.__defineGetter__("price", function() {
        return Math.round(rawPrice * 1000000) / 1000000;
    });

    this.__defineSetter__("price", function(fpPrice) {
        if (typeof fpPrice == "number") {
            rawPrice = fpPrice;
        } else {
            throw new Error("Price must be a number");
        }
    });

    this.__defineGetter__("fpPrice", function() { return rawPrice; });
};

var p = new sanePrice();
p.price = (0.1 * 0.2);
console.log(p.price);   // returns 0.02
console.log(p.fpPrice); // returns 0.020000000000000004
```

IEEE 754 can represent integers just fine. It breaks down when representing values that can’t be expressed as a finite sum of powers of two (including negative powers). 1/10 is one of those values, just as 1/3 can’t be represented exactly as a decimal.
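That distinction is easy to demonstrate: whole numbers stay exact up to the double’s 53-bit significand.

```javascript
// Doubles represent every whole number exactly up to 2^53, so
// integer arithmetic in that range has no rounding error at all.
var maxExact = Math.pow(2, 53); // 9007199254740992
console.log(1234 + 5678 === 6912);      // true: integers are exact
console.log(maxExact + 1 === maxExact); // true: precision runs out past 2^53
```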

Is it insane to try to evaluate mathematical expressions as late as possible? 1/10 + 2/10 === 3/10, 1/3 + 1/5 === 5/15 + 3/15 === 8/15, etc. The parser could figure out that 0.1 should be represented as 1/10 and then 0.1 + 0.2 === 0.3.
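That late-evaluation idea can be sketched as a tiny exact-fraction type (all names here are illustrative, not a real proposal for the language):

```javascript
// Minimal exact-rational arithmetic: keep numerator/denominator as
// integers, reduce with gcd, and compare by cross-multiplying.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

function frac(n, d) {
  var g = gcd(n, d);
  return { n: n / g, d: d / g };
}

function addFrac(a, b) {
  return frac(a.n * b.d + b.n * a.d, a.d * b.d);
}

function eqFrac(a, b) {
  return a.n * b.d === b.n * a.d;
}

// 1/10 + 2/10 is exactly 3/10 -- no binary rounding ever happens.
var total = addFrac(frac(1, 10), frac(2, 10));
console.log(eqFrac(total, frac(3, 10))); // true
```

Integer products stay exact only while they fit in 53 bits, so a sketch like this works for small denominators, not as a general replacement for a decimal type.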

JavaScript solves this like every other floating-point programming language in the world solves it: you introduce an error tolerance. You don’t ask if 0.1 + 0.2 === 0.3. All computer language manuals teach you never to use equality with floating points.

You ask if Math.abs(0.1 + 0.2 - 0.3) < 0.00001 (or some other suitably small number). This works for all real-world cases. It isn’t that big a deal. Please do not flame me with complaints that it increases download size.
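A minimal sketch of that tolerance test as a helper (the name and default epsilon are mine):

```javascript
// Approximate equality for floats. Math.abs matters: the rounding
// error can land on either side of zero.
function nearlyEqual(a, b, epsilon) {
  return Math.abs(a - b) < (epsilon || 0.00001);
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```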

While waiting for a solution, I’m using this class that fixes the problem: http://jsfromhell.com/classes/bignumber

I recently encountered this issue on a project where we were building a calculator that computed Capital Gains Tax scenarios on the fly. Since we were using GWT, we found the best solution was to employ the GWT Math library (http://code.google.com/p/gwt-math/), which contains a partial implementation of Java’s BigDecimal class. That was pretty neat.

If you don’t have the luxury of such an implementation, you have a couple of options. The first is to operate in cents: do all your calculations in cents, treating monetary amounts as integers. You must be careful, however, to avoid calculations that could produce a fractional result, since every JavaScript number is a double and a fractional value reintroduces the binary rounding problem.

The other option is to write a basic BigDecimal implementation yourself. Create a wrapper type that stores both sides of the decimal point as integers. As with treating everything as cents, provide a set of methods (an API) that let you operate on the number without having to think about manipulating its separate parts.
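The integer-cents option can be sketched like this (helper names are illustrative):

```javascript
// Keep money as integer cents so addition stays exact; round once
// on the way in, and only format back to dollars at the edges.
function toCents(dollars) {
  return Math.round(dollars * 100);
}

function formatDollars(cents) {
  return (cents / 100).toFixed(2);
}

var subtotal = toCents(0.10) + toCents(0.20); // 10 + 20 = 30 cents
console.log(subtotal === toCents(0.30));      // true
console.log(formatDollars(subtotal));         // "0.30"
```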