Project 22 considers some simple consequences of the fact that measured data contains errors. Presumably, the reason we are trying to solve A*x=b, or perform some other numerical calculation, is that we measured physical quantities that defined A and b. Since these measurements will be inexact, or may represent averaged data, the problem we are actually solving is not quite the problem we should be solving.
It's natural to assume the following "folk theorem":
Small errors in the problem data will result in small errors in the answer.
But while this might seem a reasonable assumption, every problem behaves differently. No matter how small an error in the data, there is some problem for which that data error results in a huge error in the solution.
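A nearly singular linear system already illustrates the point. Here is a minimal sketch (using NumPy purely as an illustration; it is not part of the project itself) in which a change in the fourth decimal place of the right-hand side moves the solution from (1, 1) to (0, 2).

    import numpy as np

    # A nearly singular 2x2 system whose exact solution is x = [1, 1].
    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])
    b = np.array([2.0, 2.0001])
    x = np.linalg.solve(A, b)

    # Perturb the right-hand side in its fourth decimal place.
    b_perturbed = np.array([2.0, 2.0002])
    x_perturbed = np.linalg.solve(A, b_perturbed)

    print(x)            # approximately [1, 1]
    print(x_perturbed)  # approximately [0, 2]: a tiny data change, a large solution change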
The degree to which the computed solution is affected by changes in the input data is termed its sensitivity. Problems with a high degree of sensitivity can be difficult, dangerous, or even impossible to solve. But if you suspect that sensitivity is a problem, there are ways to find out, in advance, whether the problem you are working on is a highly sensitive one.
The first case study looks at the quadratic formula for finding the two roots of a quadratic polynomial. What could be simpler? The formula depends on the values of A, B, and C, however, and in a physical situation we must suppose that these parameters are known only to a certain number of decimal places. Are there critical values of these parameters for which the formula will give us very bad results?
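As a hint of what the case study explores, consider the polynomial x^2 - 2x + 1, which has a double root at x = 1. The sketch below (an illustration only, not the project's own code) shows that changing C by 0.01% moves the roots by about 1%, or even turns them into a complex pair.

    import numpy as np

    def quadratic_roots(a, b, c):
        # Roots of a*x^2 + b*x + c = 0 by the standard quadratic formula.
        d = np.sqrt(complex(b * b - 4.0 * a * c))
        return (-b + d) / (2.0 * a), (-b - d) / (2.0 * a)

    print(quadratic_roots(1.0, -2.0, 1.0))      # double root at x = 1
    print(quadratic_roots(1.0, -2.0, 0.9999))   # roots near 0.99 and 1.01
    print(quadratic_roots(1.0, -2.0, 1.0001))   # complex pair near 1 +/- 0.01i

Near a double root, the error in the computed roots grows like the square root of the error in C, so the formula can return answers with only about half as many correct digits as the data.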
To understand sensitivity, we then look at two important ideas in the context of the solution of linear systems of equations.
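One standard measure of sensitivity for a linear system is the condition number of its matrix. As a small illustration (again, not the project's code), NumPy reports a condition number of roughly 4 x 10^4 for the nearly singular matrix used above, meaning that relative errors in the data can be amplified by about that factor in the solution, so roughly four decimal digits of accuracy can be lost.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])

    # cond(A) bounds the worst-case amplification: a relative error of size e
    # in A or b can become a relative error of roughly cond(A) * e in x.
    print(np.linalg.cond(A))   # about 4.0e+04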
Another way to detect sensitivity is simply to perform Monte Carlo sampling, that is, to vary the problem parameters and see how the answer changes. This technique is used to examine linear systems, a linear programming problem, and a differential equation.
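For a linear system, a minimal sketch of this sampling idea might look like the following, where the noise level 1.0e-4 is simply an assumed measurement-error scale, and the spread of the sampled solutions measures the sensitivity.

    import numpy as np

    rng = np.random.default_rng(12345)

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])
    b = np.array([2.0, 2.0001])
    noise = 1.0e-4     # assumed size of the measurement errors
    trials = 1000

    solutions = np.empty((trials, 2))
    for i in range(trials):
        # Resample the data within the assumed error level and re-solve.
        A_sample = A + noise * rng.standard_normal(A.shape)
        b_sample = b + noise * rng.standard_normal(b.shape)
        solutions[i] = np.linalg.solve(A_sample, b_sample)

    print("mean solution:", solutions.mean(axis=0))
    print("standard deviation:", solutions.std(axis=0))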