CS3414 Afterclass Notes --- 5 June, 2002
- Least Squares Approximation of Data
- Recall the basic idea. Given m data points
  (x_i, y_i) and n basis functions
  b_j(x), find a linear combination of the basis functions
  (call it F(x)) that approximates the given data in some sense.
- More formally, this corresponds to solving the `overdetermined
  linear system' B alpha = y, where B is an m x n matrix with
  B_ij = b_j(x_i), and alpha is an
  n-vector of unknown coefficients (which define the particular linear
  combination F(x)).
- Note that the "least squares" solution to this problem is the
  vector alpha that minimizes the two-norm of the residual
  r = y - B alpha. (Minimizing with respect to other norms
  yields other approaches, mentioned briefly last time.)
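- As an illustration, here is a Python/NumPy sketch (not the Matlab
  code from class, and the data points are made up) of building B for
  the basis b_1(x) = 1, b_2(x) = x and finding the alpha that
  minimizes the two-norm of the residual:

```python
import numpy as np

# Hypothetical data: m = 5 points (x_i, y_i)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# B is m x n with B_ij = b_j(x_i); here n = 2 with b_1(x) = 1, b_2(x) = x
B = np.column_stack([np.ones_like(x), x])

# Least squares solution: the alpha minimizing ||y - B alpha||_2
alpha, *_ = np.linalg.lstsq(B, y, rcond=None)
r = y - B @ alpha
print(alpha)                 # coefficients of F(x) = alpha_1 + alpha_2 * x
print(np.linalg.norm(r, 2))  # two-norm of the residual
```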
- How to find the least squares solution?
- Form and solve the `normal equations'
- Can show that the least squares solution satisfies the n x n
  linear system B^T B alpha = B^T y.
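- A sketch of forming and solving the normal equations directly
  (Python/NumPy, with made-up line-fit data; this is the
  straightforward but potentially unstable approach):

```python
import numpy as np

# Hypothetical data; fit F(x) = alpha_1 + alpha_2 * x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
B = np.column_stack([np.ones_like(x), x])

# Form and solve the n x n normal equations B^T B alpha = B^T y
alpha = np.linalg.solve(B.T @ B, B.T @ y)
print(alpha)
```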
- But the condition number of B^T B can be very
  large (roughly the square of cond(B)), which means this method
  may be unstable. And depending on the choice of basis functions,
  B (and therefore B^T B) may be dense.
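- The squaring of the condition number is easy to see numerically.
  A NumPy sketch using a monomial (Vandermonde-type) basis, which is
  notoriously ill-conditioned (hypothetical example, not from class):

```python
import numpy as np

# Monomial basis b_j(x) = x^(n-j) at 20 points on [0, 1]
x = np.linspace(0.0, 1.0, 20)
B = np.vander(x, 8)              # m = 20, n = 8

condB = np.linalg.cond(B)
condBtB = np.linalg.cond(B.T @ B)
print(condB, condBtB)            # cond(B^T B) is roughly cond(B)^2
```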
- Do something more clever...
- Choose basis functions with `small support' (nonzero on only
  a small region), so that B^T B is very sparse and banded.
- Choose `orthonormal' basis functions so that B^T B
  is the identity matrix (see Section 10.2).
- Use an orthogonal factorization method (see CS 5485).
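- A sketch of the orthogonal-factorization route (Python/NumPy;
  same made-up line-fit data as before). With a reduced QR
  factorization B = QR, the least squares problem reduces to the
  small triangular system R alpha = Q^T y, avoiding B^T B entirely:

```python
import numpy as np

# Hypothetical data; fit F(x) = alpha_1 + alpha_2 * x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
B = np.column_stack([np.ones_like(x), x])

# Reduced QR factorization: B = Q R, Q has orthonormal columns
Q, R = np.linalg.qr(B)
alpha = np.linalg.solve(R, Q.T @ y)   # R alpha = Q^T y
print(alpha)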
- We concluded by looking at a Matlab example of approximating some
  data using a cubic polynomial and a linear spline.
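- A rough Python/NumPy analogue of the cubic-polynomial part of such
  a demo (made-up data, not the data used in class; polyfit solves
  the least squares problem for the monomial basis 1, x, x^2, x^3):

```python
import numpy as np

# Hypothetical data: samples of a smooth function at 9 points
x = np.linspace(0.0, 4.0, 9)
y = np.sin(x)

# Degree-3 least squares polynomial fit
coeffs = np.polyfit(x, y, 3)
F = np.polyval(coeffs, x)
print(np.linalg.norm(y - F, 2))  # two-norm of the residual
```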