Loss of Significance
The loss of numerical accuracy in computer calculation

The term loss of significance refers to an undesirable effect in calculations using floating-point arithmetic.

Computers and calculators employ floating-point arithmetic to express and work with fractional numbers. Numbers in this arithmetic consist of a fixed number of digits, with a decimal point that “floats” among them. In calculations, only that fixed number of digits is maintained, so that the results of calculations are only approximate — which is often quite adequate.

Loss of significance occurs in the subtraction of two nearly equal numbers, which produces a result much smaller than either of the original numbers. The effect can be an unacceptable reduction in the number of accurate (significant) digits in the result.

An algorithm for calculating solutions of a mathematical problem is called stable if a small change to its input does not result in a large change to its result.

A floating-point representation of a fractional number can be thought of as a small change in the number. So, loss of significance will always result if an unstable algorithm is used with floating-point arithmetic.

One of the main goals in the mathematical field of numerical analysis is to find stable algorithms for mathematical problems.

Here are a couple of examples.

Floating-point subtraction

Consider the fractional number

.1234567891234567890 .

A floating-point representation of this number on a machine that keeps 10 floating-point digits would be

.1234567891 ,

which seems pretty close—the difference is very small in comparison with either of the two numbers.

Now perform the calculation

.1234567891234567890 − .1234567890 .

The real answer, accurate to 10 digits, is

.0000000001234567890

But on the 10-digit floating-point machine, the calculation yields

.1234567891 − .1234567890 = .0000000001 .

Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is only accurate in its first digit. This amounts to loss of information.

Another way of looking at it is that, when the input to the calculation is changed by a small amount, the relative change in the result is drastic. Try changing the input to the calculation very slightly, say, from .1234567891 to .1234567892. Once again, in 10-digit floating-point arithmetic,

.1234567892 − .1234567890 = .0000000002 ,

which is twice the size of the previous result of .0000000001. An algorithm that shows this kind of behavior is called unstable.
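
This behavior is easy to reproduce. Here is a little Python sketch, using the decimal module to mimic a 10-digit machine with the numbers from the example above:

    from decimal import Decimal, getcontext

    getcontext().prec = 10                    # mimic a 10-digit machine

    x = +Decimal(".1234567891234567890")      # unary + rounds the value to .1234567891
    y = +Decimal(".1234567890")
    print(x - y)                              # 1E-10: only one significant digit survives

    x = +Decimal(".1234567892")               # perturb the input very slightly
    print(x - y)                              # 2E-10: the result doubles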

It is possible to do all rational arithmetic keeping all significant digits, but doing so is often prohibitively slow compared with floating-point arithmetic. Furthermore, it usually only postpones the problem: what if the input to the algorithm is empirical data that is itself accurate to only 10 digits? The same effect will occur.
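
Exact rational arithmetic is, in fact, readily available; Python's fractions module, for instance, reproduces the subtraction above with no loss at all:

    from fractions import Fraction

    x = Fraction("0.1234567891234567890")
    y = Fraction("0.1234567890")
    print(x - y)                  # exactly 123456789/1000000000000000000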

The best solution is to avoid such circumstances altogether. Where they are not completely avoidable, one must be on the watch for them.

The root of the problem always lies in floating-point subtraction. A familiar practical algorithm will serve as an example.

Instability of the quadratic formula

Consider the venerable quadratic formula for solving a quadratic equation

a x^2 + b x + c = 0 .

The quadratic formula gives the two solutions as

x = ( −b ± ( b^2 − 4 a c )^(1/2) ) / ( 2 a ) .

The case a = 1, b = 200, c = -0.000015 will illustrate the problem:

x^2 + 200 x − 0.000015 = 0 .

Then

( b^2 − 4 a c )^(1/2) = ( 200^2 + 4 * 1 * 0.000015 )^(1/2) = 200.00000015…

In real arithmetic, the roots are

( −200 − 200.00000015 ) / 2 = −200.000000075 ,
( −200 + 200.00000015 ) / 2 = .000000075 .

In 10-digit floating-point arithmetic,

( −200 − 200.0000001 ) / 2 = −200.00000005 ,
( −200 + 200.0000001 ) / 2 = .00000005 .

Although the bigger solution is accurate to ten digits, the first nonzero digit of the smaller solution is wrong.

The quadratic formula does not constitute a stable algorithm to calculate the two roots of a quadratic equation. The instability is due to the subtraction of two numbers of about the same size.
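
The effect is easy to see on a machine. Here is a small Python sketch, again using the decimal module at 10-digit precision, applying the quadratic formula naively to this example:

    from decimal import Decimal, getcontext

    getcontext().prec = 10                     # 10-digit arithmetic, as above

    a, b, c = Decimal(1), Decimal(200), Decimal("-0.000015")
    r = (b*b - 4*a*c).sqrt()                   # 200.0000001 at this precision
    print((-b + r) / (2*a))                    # 5E-8, but the true root is 7.5E-8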

A better algorithm

A better algorithm for solving quadratic equations is based on these observations:

If

x1 = ( −b + ( b^2 − 4 a c )^(1/2) ) / ( 2 a )

and

x2 = ( −b − ( b^2 − 4 a c )^(1/2) ) / ( 2 a ) ,

then multiplying the two expressions gives the identity

x1 x2 = ( b^2 − ( b^2 − 4 a c ) ) / ( 4 a^2 ) = c / a ,

which expresses the value of one root in terms of the other.

The algorithm is: use the quadratic formula to find the root of larger magnitude (the one for which the two like quantities are added rather than subtracted), then use this identity to calculate the other root. Since no subtraction is involved in this second calculation, no loss of precision occurs.

Applying this algorithm to the example problem, and using 10-digit floating-point arithmetic, the larger of the two solutions, as before, is x1 = −200.00000005. The other solution is then

x2 = ( c / a ) / x1 = −0.000015 / ( −200.00000005 ) = 0.000000075 ,

which is accurate.
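
In the same 10-digit Python sketch as before, the better algorithm looks like this: compute the root where like quantities are added, then recover the other one from the product identity.

    from decimal import Decimal, getcontext

    getcontext().prec = 10

    a, b, c = Decimal(1), Decimal(200), Decimal("-0.000015")
    r = (b*b - 4*a*c).sqrt()
    x1 = (-b - r) / (2*a)      # -b and -r have the same sign: no cancellation here
    x2 = (c / a) / x1          # the identity x1*x2 = c/a gives the other root
    print(x1, x2)              # -200.0000000 and 7.5E-8: the small root is now correct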

Discussion

The example of the quadratic formula is quite typical of numerical problems that arise in computer calculations.

The problem is always due to subtraction of two like quantities. Multiplication, division, and addition of like quantities are not to blame.

Subtraction of like quantities amounts to loss of information.

A problem whose solutions undergo only a limited change when its input is changed slightly is called a well-posed problem.

If a problem is not well-posed, there is little hope of any numerical algorithm doing better at calculating its solutions. On the other hand, if a problem is well-posed and an algorithm for calculating its solutions is unstable, one could say the algorithm is bad. It should always be possible to find a stable algorithm for a well-posed problem; but this is where the art lies.

Other examples

Cubic and Quartic formulas

The traditional formulas for solving cubic and quartic equations are also unstable, for the same reason as the quadratic formula.

Gaussian elimination

This venerable method for solving systems of linear equations, in the naive form taught in high school algebra courses (with no row pivoting), is numerically unstable. It is unstable even for systems that are themselves quite well-posed.

The instability is a result of subtractions of terms of similar size in the algorithm. The field of study called numerical linear algebra is largely concerned with finding better algorithms for solving linear problems.
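
A standard small illustration, in plain double-precision Python: naive elimination without row pivoting on a perfectly well-behaved 2-by-2 system.

    # Solve  1e-20*x1 + x2 = 1
    #              x1 + x2 = 2    (true solution: x1 and x2 are both about 1)
    a11, a12, b1 = 1e-20, 1.0, 1.0
    a21, a22, b2 = 1.0,   1.0, 2.0

    m = a21 / a11                  # enormous multiplier from the tiny pivot
    a22 -= m * a12                 # 1 - 1e20: the 1 is completely lost
    b2  -= m * b1
    x2 = b2 / a22                  # 1.0
    x1 = (b1 - a12 * x2) / a11     # 0.0, which is badly wrong
    print(x1, x2)                  # swapping the rows first (pivoting) gives x1 close to 1.0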

Eigenanalysis

The decomposition of a square matrix into eigenvalues and eigenvectors is worse than the previous examples: in general, the problem itself is badly ill-posed. (There are classes of matrices, the symmetric ones for example, for which it is well-posed.) There is nothing a numerical algorithm can do to help this.

Every time I have seen this messy fact come up, someone was doing a full eigenanalysis where it was quite uncalled for. For example, a simple power iteration will give the largest eigenvalue; often that is all that is needed.
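
For instance, a bare-bones power iteration, here with a made-up 2-by-2 matrix, finds the dominant eigenvalue using nothing but repeated matrix-vector products:

    A = [[2.0, 1.0],
         [1.0, 3.0]]                          # largest eigenvalue: (5 + 5**0.5)/2, about 3.618
    v = [1.0, 1.0]
    for _ in range(50):
        w = [A[0][0]*v[0] + A[0][1]*v[1],     # w = A v
             A[1][0]*v[0] + A[1][1]*v[1]]
        s = max(abs(w[0]), abs(w[1]))         # rescale so the iterates stay bounded
        v = [w[0]/s, w[1]/s]
    print(s)                                  # converges to about 3.618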

Numerical differentiation

Numerical differentiation rests on difference quotients: a subtraction of numbers that are by nature almost the same, divided by a small step. This causes a lot of trouble in numerical calculations.

The obvious view of solving a differential equation is one of integrating a differential expression; the usual approach is to try to get an adequate result with a step size large enough that loss of significance in the differences does not begin to dominate. More often, practitioners rely on dumb luck.
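
A quick sketch, with an example function and step sizes of my own choosing, shows the trade-off: the error of a forward difference first shrinks as the step gets smaller, then grows again once cancellation takes over.

    import math

    f, x, exact = math.sin, 1.0, math.cos(1.0)
    for h in (1e-2, 1e-5, 1e-8, 1e-11):
        approx = (f(x + h) - f(x)) / h        # difference of two nearly equal numbers
        print(h, abs(approx - exact))         # error falls, then rises again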

Note that the inverse process, integration, does not suffer from this; often rewriting the problem in terms of integrals provides an attractive alternative algorithm.

This is a nice example of the dictum: an instability is a stability in reverse.