After briefly looking at an overview of the need for numerical
methods, we used the example of weather/climate modeling to look
at two huge issues in scientific computing:
Fact: error is always present in numerical calculations. There
are several sources of error.
A typical scientific computing application involves several
stages of approximation:
real system --> mathematical model
--> discrete model
--> linearized model
--> computed solution
At each step, approximation (loss of accuracy) happens:
modeling error (real system --> mathematical model)
discretization error (mathematical model --> discrete model)
linearization error (discrete model --> linearized model)
floating point (round-off) error (linearized model --> computed solution)
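A quick illustration of round-off error (a minimal sketch; any IEEE binary floating-point arithmetic behaves this way):

```python
# Round-off error: 0.1 has no exact binary floating-point
# representation, so arithmetic on it is only approximate.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004, not 0.3
print(a == 0.3)     # False

# Comparisons should therefore use a tolerance:
print(abs(a - 0.3) < 1e-12)  # True
```

The point is not that floating-point arithmetic is broken, but that every stored number and every operation carries a small representation error.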
Our goal: control/balance/estimate errors.
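Why "balance"? A minimal sketch (the function sin and the step sizes h are illustrative choices, not from the notes): the forward-difference approximation of a derivative has discretization error that shrinks with h, but making h too small lets round-off error take over.

```python
import math

def forward_diff(f, x, h):
    # Forward-difference approximation: f'(x) ~ (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

# True derivative of sin at x = 1 is cos(1).
x, exact = 1.0, math.cos(1.0)

for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")
# The error first decreases as h shrinks (discretization error ~ h)
# and then grows again (round-off error ~ eps/h); the best h is
# near sqrt(machine epsilon), around 1e-8 for double precision.
```

Neither error source can be driven to zero independently; the best we can do is balance them.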
Scientific computing problems are very large in terms of
requirements for computer time and memory.
The problem size grows with model complexity (e.g., more
equations and unknowns in the model) and with the resolution of the
discretization (e.g., a finer grid in space and time).
Example: if we have N = 5 * 10^3 * 10^3 * 10^2 = 5 x 10^8
equations and unknowns, then the time complexity of the
most naive algorithm grows like O(N^3);
a better algorithm grows like O(N^(7/3));
and a sophisticated algorithm may be able to achieve O(N log(N))
complexity. This can make the difference between millennia of running
time and minutes of running time.
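To get a feel for these numbers, here is a back-of-the-envelope sketch (the rate of 10^9 operations per second is an illustrative assumption):

```python
import math

N = 5 * 10**3 * 10**3 * 10**2   # 5 x 10^8 unknowns
rate = 1e9                      # assumed floating-point ops per second

for name, ops in [("O(N^3)", N**3),
                  ("O(N^(7/3))", N**(7/3)),
                  ("O(N log N)", N * math.log(N))]:
    seconds = ops / rate
    years = seconds / (3600 * 24 * 365)
    print(f"{name:12s} ~ {seconds:9.1e} s  (~{years:.1e} years)")
```

Under these rough assumptions the O(N^3) method needs billions of years, the O(N^(7/3)) method thousands of years, and the O(N log N) method only seconds: the algorithm, not the hardware, decides whether the computation is feasible at all.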
So the obvious implications of this huge demand for computing
power in the scientific computing field are these:
We care about good algorithms!
We care about good implementations, which pay attention to
the underlying systems.
We care about languages and compilers---they need to
generate fast code.
We care about fast systems: processors, networks, memories,
and so on.