# Scientific Calculator Reliability

So, imagine that your scientific calculator's code is just one piece of a large app that computes all kinds of cool things; i.e., imagine it's Meta Calculator. To your knowledge the code works great, but then someone emails you about a bug. In this case, someone reported that typing "—5" (an em dash instead of a minus sign) into the Scientific Calc didn't produce the expected value of -5.

Ok, first off, if you're wondering who would ever actually type "—5" into a calculator, you're asking an understandable, but flawed, question. The point is not "why focus on weird, unrealistic use cases". Rather, it's that a flaw existed in the scientific calculator's parser. There was a good chance, after all, that no one would ever go to our site and type those characters on purpose! In fact, the person who reported the bug was deliberately hunting for bugs (and it was the only one he could find).
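One common way to handle this kind of flaw is to normalize dash look-alike characters to an ASCII minus before the parser ever sees them. Here's a minimal sketch of that idea; `normalize_expression` is a hypothetical helper for illustration, not Meta Calculator's actual code:

```python
# Hypothetical sketch: normalize typographic dashes before tokenizing.
# Not the actual Meta Calculator code, just an illustration of the idea.

DASH_CHARS = {
    "\u2014": "-",  # em dash (what the "—5" bug report contained)
    "\u2013": "-",  # en dash
    "\u2212": "-",  # true Unicode minus sign
}

def normalize_expression(expr: str) -> str:
    """Replace dash look-alikes with the ASCII hyphen-minus."""
    for dash, minus in DASH_CHARS.items():
        expr = expr.replace(dash, minus)
    return expr

print(normalize_expression("\u20145"))  # em-dash five -> prints "-5"
```

With a fix like this in place, "—5" reaches the parser as "-5" and evaluates normally.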

So now you're in the situation of knowing that your code works for countless math expressions, but your parser misses one situation. And it's not just the scientific calc that uses the parser; so does our client-side graphing calculator, the one that we license out to several companies.

Ok, so what do you do? You get that bug fixed!

Darnit, here's the dilemma:

How do you know that your bug fix does not introduce new errors in some other part of the calculator? This is called a regression bug: you fix one bug, only to introduce a new one somewhere else.

And here's the solution: enter *unit testing*, a battery of tests that any new algorithm for the scientific or graphing calcs must pass before we push it out to the site. Every time you change anything under the hood, you run the new algorithm through hundreds and hundreds of test calculations to make sure the app still arrives at the correct answers. In the case of Meta Calculator, there are currently over 1,100 unit tests that we run before pushing a change to the live site.

Let's look at one unit test. Consider, for instance, the expression "3 * sin(30)". We use Excel or some other reliable tool to determine an expected value of 1.5 (assuming we're in degrees). Then we see whether the calculator gets the same result (taking rounding error into consideration, of course). Each unit test works this way: create a test expression like 3 * sin(30), pair it with the true answer like 1.5, and check that the calculator agrees.
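A unit test of this shape can be sketched like so. This is a simplified illustration in Python; the `evaluate` function is a hypothetical stand-in for the calculator's real expression engine, with `sin()` taking degrees:

```python
import math

def evaluate(expression: str) -> float:
    """Hypothetical stand-in for the calculator's expression engine.
    Supports sin() in degrees; uses eval() only for this sketch."""
    allowed = {"sin": lambda d: math.sin(math.radians(d))}
    return eval(expression, {"__builtins__": {}}, allowed)

def test_three_times_sin_thirty():
    expected = 1.5  # value worked out independently (e.g., in Excel)
    result = evaluate("3 * sin(30)")
    # Compare within a tolerance to allow for floating-point rounding error,
    # since 3 * sin(30 degrees) comes back as 1.4999999999999998.
    assert math.isclose(result, expected, abs_tol=1e-9)

test_three_times_sin_thirty()
print("1 assertion passed")
```

The key design point is the tolerance check: comparing floating-point results for exact equality would make tests fail spuriously, so each expected value is matched within a small epsilon instead.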

Here's what a successful set of tests looks like. As you can see in the screenshot below, 1,126 assertions, or "tests", passed and there were no errors. When this happens, we know that the changes made to the calculator did not introduce any new bugs.