The term loss of significance refers to an undesirable effect in calculations using floating-point arithmetic.
Floating-point arithmetic is used for fractional numbers on digital computers and calculators. Only a small number of digits of the results of floating-point calculations are maintained; floating-point numbers can only approximate most fractional numbers.
Loss of significance occurs when two nearly equal numbers are subtracted to produce a result much smaller than either of the original numbers. The effect is that the number of accurate (significant) digits in the result may be reduced unacceptably.
An algorithm for calculating solutions of a mathematical problem is called stable if a small change to its input does not result in a large change to its result.
A floating-point representation of a fractional number can be thought of as a small change in the number. So loss of significance will always result if an unstable algorithm is used with floating-point arithmetic.
One of the main goals in the mathematical field of numerical analysis is to find stable algorithms for mathematical problems.
Here are a couple of examples.
Consider the fractional number

    .1234567891234567890
A floating-point representation of this number on a machine that keeps 10 floating-point digits would be

    .1234567891
which seems pretty close—the difference is very small in comparison with either of the two numbers.
Now perform the calculation

    .1234567891234567890 − .1234567890
The real answer, accurate to 10 digits, is

    .0000000001234567890
But on the 10-digit floating-point machine, the calculation yields

    .1234567891 − .1234567890 = .0000000001
Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is only accurate in its first digit. This amounts to loss of information.
Another way of looking at this effect is to note that if the input to the calculation is changed by a small amount, the relative change in the result is drastic. Let’s change the input to the calculation very slightly, from .1234567891 to .1234567892. Once again, in 10-digit floating-point arithmetic,

    .1234567892 − .1234567890 = .0000000002,
which is twice the size of the previous result of .0000000001. When an algorithm shows this kind of behaviour, we call it unstable.
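The 10-digit machine above can be imitated directly with Python's standard-library decimal module, which lets us fix the working precision at 10 significant digits; the numbers are the ones from the example:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10  # emulate a machine that keeps 10 significant digits

x = Decimal(".1234567891")  # 10-digit representation of the input
y = Decimal(".1234567890")

d1 = x - y                       # 1E-10, i.e. .0000000001
d2 = Decimal(".1234567892") - y  # 2E-10: the tiny input change doubled the result
```

Only the first digit of `d1` is meaningful; the other nine significant digits of the inputs have been lost in the subtraction.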
It is possible to do all rational arithmetic keeping all significant digits, but doing so is often prohibitively slow compared with floating-point arithmetic. Furthermore, it usually only postpones the problem: what if the input to the algorithm is empirical data that is itself accurate to only 10 digits? The same effect will occur.
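For contrast, here is a sketch of the same subtraction done in exact rational arithmetic with Python's standard-library fractions module; no digits are lost, at the cost of carrying ever-larger integers:

```python
from fractions import Fraction

x = Fraction("0.1234567891")
y = Fraction("0.1234567890")

diff = x - y  # exactly 1/10000000000: every digit is kept
```

Exact arithmetic like this is typically far slower than hardware floating point, and, as noted above, it cannot rescue inputs that are themselves accurate to only 10 digits.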
The best solution is to avoid such circumstances altogether. Where they can’t be completely avoided, one must be on the watch for them.
The root of the problem always lies in floating-point subtraction. Let’s see how we can avoid it in a practical algorithm.
For example, consider the venerable quadratic formula for solving a quadratic equation

    ax^2 + bx + c = 0.
The quadratic formula gives the two solutions as

    x = ( −b ± √(b^2 − 4ac) ) / (2a).
The case a = 1, b = 200, c = -0.000015 will serve to illustrate the problem:
We have

    √(b^2 − 4ac) = √(200^2 + 4 × 1 × 0.000015) = √(40000.00006) ≈ 200.00000015.
In real arithmetic, the roots are

    x_{1} = ( −200 − 200.00000015 ) / 2 = −200.000000075,
    x_{2} = ( −200 + 200.00000015 ) / 2 = .000000075.
In 10-digit floating-point arithmetic, where the square root rounds to 200.0000001,

    x_{1} = ( −200 − 200.0000001 ) / 2 = −200.00000005,
    x_{2} = ( −200 + 200.0000001 ) / 2 = .00000005.
Although the bigger solution is accurate to ten digits, the first nonzero digit of the smaller solution is wrong.
The quadratic formula does not constitute a stable algorithm to calculate the two roots of a quadratic equation. The instability is due to the subtraction of two numbers of about the same size.
A better algorithm for solving quadratic equations is based on these observations:
If

    x_{1} = ( −b + √(b^2 − 4ac) ) / (2a)

and

    x_{2} = ( −b − √(b^2 − 4ac) ) / (2a),

then we have the identity

    x_{1} x_{2} = c / a,
which allows us to calculate one root from the other.
The algorithm is: use the quadratic formula to find the larger of the two solutions (the one where two like values aren’t subtracted), then use this identity to calculate the other root. Since no subtraction is involved in this second calculation, no loss of precision occurs.
Applying this algorithm to our problem, and using 10-digit floating-point arithmetic, the larger of the two solutions, as before, is x_{1} = −200.00000005. The other solution is then

    x_{2} = c / (a x_{1}) = −0.000015 / (1 × (−200.00000005)) = .000000075,
which is accurate.
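The two-step algorithm can be sketched in Python (the function name is illustrative, not from the original): compute the root for which the formula adds two like quantities, then recover the other root from the identity x_{1} x_{2} = c/a.

```python
import math

def stable_quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, avoiding catastrophic cancellation."""
    d = math.sqrt(b * b - 4 * a * c)
    # Pick the sign that ADDS two like quantities instead of subtracting them.
    if b >= 0:
        x1 = (-b - d) / (2 * a)
    else:
        x1 = (-b + d) / (2 * a)
    x2 = c / (a * x1)  # from the identity x1 * x2 = c / a
    return x1, x2

x1, x2 = stable_quadratic_roots(1.0, 200.0, -0.000015)
# x1 is about -200.000000075 and x2 about .000000075 -- both accurate
```

Even in double precision, the naive formula ( −b + √(b^2 − 4ac) ) / (2a) would lose roughly half its digits on the small root; the identity-based second step loses none.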
The example of the quadratic formula is quite typical of numerical problems that arise in computer calculations.
The problem is always due to subtraction of two like quantities. Multiplication, division, and addition of like quantities are not to blame.
Subtraction of like quantities amounts to loss of information.
A problem whose solutions undergo only a limited change upon a small change of input is called a well-posed problem.
If a problem is not well-posed, there is little hope of any numerical algorithm behaving any better at calculating its solutions. On the other hand, if a problem is well-posed, and an algorithm for calculating its solutions is unstable, one could say the algorithm is bad. It should always be possible to arrange for a stable algorithm to a well-posed problem—but this is where the art lies.
The traditional formulas for solving cubic and quartic equations are also unstable for the same reasons as for the quadratic.
Gaussian elimination, the venerable method for solving systems of linear equations as taught in high school algebra courses, is quite unstable numerically when performed without pivoting. It is unstable even for systems which are themselves quite stable.
The instability is a result of subtractions of terms of similar size in the algorithm. The field of study called numerical linear algebra is largely concerned with finding better algorithms for solving linear problems.
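A tiny illustration of the point, in Python (the 2-by-2 system is an assumed example, not from the original): eliminating with a very small pivot destroys the answer, while swapping the equations first, a technique known as partial pivoting, preserves it.

```python
def solve2_no_pivot(a11, a12, b1, a21, a22, b2):
    """Solve a 2x2 linear system, eliminating x with the first row's pivot."""
    m = a21 / a11                        # multiplier: huge when a11 is tiny
    y = (b2 - m * b1) / (a22 - m * a12)  # terms of wildly different size cancel
    x = (b1 - a12 * y) / a11
    return x, y

# System: 1e-20*x + y = 1,  x + y = 2   (true solution: x and y both about 1)
bad_x, bad_y = solve2_no_pivot(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0)
# bad_x comes out as 0.0: the multiplier 1e20 swamps everything else.

# Partial pivoting: put the equation with the larger pivot first.
good_x, good_y = solve2_no_pivot(1.0, 1.0, 2.0, 1e-20, 1.0, 1.0)
```

The system itself is perfectly well-behaved; only the pivot-free algorithm is at fault.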
The decomposition of a square matrix into eigenvalues and eigenvectors is worse than the previous examples. In general, the problem itself is awfully unstable. (There are classes of matrices for which it is stable.) There is nothing that a numerical algorithm can do to help this.
Every time I have seen this messy fact come up, someone was doing an eigenanalysis where it was quite uncalled for. For example, a simple iteration will give the largest eigenvalue; often that is all that is needed.
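The simple iteration meant here is power iteration; a minimal pure-Python sketch (the function name and the example matrix are assumptions for illustration):

```python
def dominant_eigenvalue(A, iterations=100):
    """Estimate the largest-magnitude eigenvalue of a square matrix A
    (given as a list of rows) by repeatedly applying A and renormalising."""
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iterations):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]  # w = A v
        lam = max(w, key=abs)       # entry of largest magnitude
        v = [x / lam for x in w]    # renormalise to avoid overflow
    return lam

# Symmetric example whose eigenvalues are (5 +/- sqrt(5)) / 2
A = [[2.0, 1.0],
     [1.0, 3.0]]
lam = dominant_eigenvalue(A)  # about 3.6180339887
```

No full eigendecomposition is performed, so the instability of the general problem never enters.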
The calculus process of differentiation involves subtracting numbers that are by nature almost the same. This causes a lot of trouble in numerical calculations.
The obvious view of solving differential equations numerically is one of replacing derivatives by difference quotients; the usual approach is to choose a step size small enough to get an adequate result, yet large enough that loss of significance in the differences does not begin to dominate. More often, practitioners rely on dumb luck.
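The trade-off is easy to see numerically; here is a Python sketch differentiating sin at x = 1 with a forward difference (the step sizes are chosen for illustration):

```python
import math

def forward_difference(f, x, h):
    # Difference quotient: subtracts two nearly equal numbers when h is small.
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # the true derivative of sin at x = 1

moderate = forward_difference(math.sin, 1.0, 1e-6)   # decent approximation
tiny = forward_difference(math.sin, 1.0, 1e-15)      # far worse: cancellation dominates
```

Shrinking h reduces the truncation error of the quotient but magnifies the loss of significance in the subtraction; past a certain point, smaller steps make the answer worse, not better.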
Note that the inverse process, integration, does not suffer from this; often, re-writing a problem in terms of integrals provides an attractive alternative algorithm.
This is a nice example of the dictum: an instability is a stability in reverse.