With interpolation we were given a formula or data about a function f(x), and we made a model p(x) that passed through a given set of data points. We now consider approximation, in which we are still trying to build a model, but specify other conditions that the model must satisfy.
As with interpolation, we build this model out of a set of basis functions. The model is then a "recipe": the linear coefficients c that specify how much of each basis function to use.
Polynomials will be our most typical set of basis functions. However, we will want to use a different set of basis polynomials than the obvious set of 1, x, x^2, and so on. We'll still be building the same kind of objects, just using a more convenient set of tools.
Once we have our basis set, we will consider how we can determine the approximating function p(x), and we will look at the behavior of the approximation error.
The Legendre polynomials are a basis for the set of polynomials, appropriate for use on the interval [-1,1]. The first few Legendre polynomials are:
P0(x) = 1
P1(x) = x
P2(x) = ( 3 x^2 - 1 ) / 2
P3(x) = ( 5 x^3 - 3 x ) / 2
P4(x) = ( 35 x^4 - 30 x^2 + 3 ) / 8
The value at x of any Legendre polynomial Pi can be determined using the following recursion:
P0(x) = 1
P1(x) = x
...
Pi(x) = ( ( 2*i - 1 ) * x * P{i-1}(x) - ( i - 1 ) * P{i-2}(x) ) / i
Exercise: write a routine legpoly.m to evaluate the i-th Legendre polynomial Pi(x). Your routine should have the form:
function pval = legpoly ( i, x )

You don't have to write code that will work with a vector of arguments x; I'll give that to you. So unless you're feeling ambitious, assume that the input x and the output pval are just single numbers. Your code might look something like this:
if i is 0 or 1
  set pval and return
otherwise
  p(0) = 1
  p(1) = x
  for j from 2 to i
    p(j) = ?
  pval = ?

MATLAB won't like this code, since we are asking it to use the vector p with an index of 0. So once you write your routine and believe it's correct, simply increase every index of p by 1.
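If you want to check your work, here is one possible way to fill in the blanks, with every index of p already increased by 1 (this is only a sketch; your own version may look a little different):

function pval = legpoly ( i, x )

% LEGPOLY evaluates the i-th Legendre polynomial at a single point x
% using the three-term recursion.  The array p is shifted up by one,
% so p(1) holds P0(x), p(2) holds P1(x), and so on.

  p(1) = 1.0;

  if ( i == 0 )
    pval = p(1);
    return
  end

  p(2) = x;

  for j = 2 : i
    p(j+1) = ( ( 2 * j - 1 ) * x * p(j) - ( j - 1 ) * p(j-1) ) / j;
  end

  pval = p(i+1);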
As a test of your routine, try to verify the values at x=-0.5:
P\x |   -0.5      0.0      1.0
----+-----------------------------
P0  |  1.0000   1.0000   1.0000
P1  | -0.5000   0.0000   1.0000
P2  | -0.1250  -0.5000   1.0000
P3  |  0.4375   0.0000   1.0000
P4  | -0.2891   0.3750   1.0000
P5  | -0.0898   0.0000   1.0000
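For instance, assuming your legpoly.m is on your MATLAB path, you could spot-check a few of these entries directly:

legpoly ( 2, -0.5 )    % should display -0.1250
legpoly ( 4,  0.0 )    % should display  0.3750
legpoly ( 5,  1.0 )    % should display  1.0000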
The Legendre polynomials form a basis for the linear space of polynomials. One property we would like any set of basis vectors to have is orthogonality. If we were working with ordinary geometric vectors, we could draw them and check this condition by eye. In generalized vector spaces, we have to define a dot product, and say that two "vectors" f(x) and g(x) are orthogonal if their dot product (f,g) is equal to zero.
For any pair f(x) and g(x) of continuous functions in [-1,1], our dot product (f,g) will be the integral from -1 to 1 of f(x)*g(x). With this dot product, if i =/= j, then:
( Pi(x), Pj(x) ) = 0

while
( Pi(x), Pi(x) ) = 2 / ( 2 * i + 1 )
MATLAB hint: To compute the dot product of two row vectors v and w in MATLAB, you write
dot = v * w';

and yes, it does make a difference which vector you put the apostrophe on!
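For instance, here is a sketch of how one such dot product, say ( P2, P3 ), could be estimated: sample both polynomials at many equally spaced points and treat the integral as a scaled vector product (the sample count of 1001 is an arbitrary choice of mine):

m = 1001;
x = linspace ( -1, 1, m );
dx = 2 / ( m - 1 );

for k = 1 : m
  v(k) = legpoly ( 2, x(k) );    % samples of P2
  w(k) = legpoly ( 3, x(k) );    % samples of P3
end

dot23 = ( v * w' ) * dx          % should be close to 0

This is only a crude Riemann-sum estimate of the integral, but it is good enough to see whether a dot product is near zero.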
Exercise: Using your Legendre polynomial routine legpoly, estimate the following dot products:
In linear algebra, the length of a vector v can be computed as
||v|| = sqrt ( ( v, v ) )

In the same way, in our generalized vector space, the "length" of a vector is its L2 norm, which is the square root of its dot product with itself.
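Continuing the sketch above (with the same x, dx, and samples v of P2), the L2 norm of P2 could then be estimated as:

p2norm = sqrt ( ( v * v' ) * dx )    % should be close to sqrt ( 2/5 ), about 0.6325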
It will be useful for our Legendre polynomials to have unit length. Since ( Pi, Pi ) = 2 / ( 2 * i + 1 ), the normalized polynomial is simply Pi(x) * sqrt ( ( 2 * i + 1 ) / 2 ). For the rest of our work, we will use normalized Legendre polynomials.
It will also be most convenient to have a "vector" version of the Legendre polynomial routine, that is, something that we can give a vector x of arguments to, and which will return the corresponding vector of values. This will make it easy to plot, compute integrals, and so on. But I'd rather you didn't spend an hour working out the details.
For all the subsequent exercises, copy my routine legpoly.m, from the web page. It has the same form as your legpoly, but uses the normalized polynomials, and allows vector arguments.
Exercise: Using the normalized Legendre polynomials, estimate the L2-norms of:
A function f(x) defined on [-1,1] may be approximated by the polynomial:
c0 * P0(x) + c1 * P1(x) + ... + cn * Pn(x)

The coefficients c can be computed using the dot product:
ci = ( f, Pi )
Note - this is where approximation differs from interpolation. Our choice of coefficients essentially says, using only the given basis functions, make a model p(x) of the function f(x) with the property that p(x) "casts the same shadow" as f(x) does. The dot product can be thought of as measuring the shadow or projection of the vector f onto each of the basis functions. The resulting approximation p(x) will indeed satisfy this equation:
( p, Pi ) = ( f, Pi )

for every basis function i. In other words, if the dot product were our only measurement device, we couldn't tell p(x) apart from f(x). But this means that the error p(x)-f(x) that we make is orthogonal to our space - we can't do any better!
Exercise: write a routine legcoef.m to compute the Legendre coefficients for a given function f(x). Your routine should have the form:
function c = legcoef ( n, f )

It should compute the dot products c(i+1) = ( f, Pi ), for i from 0 to n. Your routine could have the form:
xvec = ?
fvec = ?
for i from 0 to n
  pvec = ?
  c(i+1) = approximate integral of f(x) * Pi(x)
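Here is one possible way to fill in those blanks, using a crude Riemann-sum estimate of each integral (a sketch only: the sample count m is my own choice, f is assumed to be passed as a function handle such as @exp, and legpoly is the normalized, vectorized version from the web page):

function c = legcoef ( n, f )

% LEGCOEF estimates the Legendre coefficients c(i+1) = ( f, Pi ),
% for i from 0 to n, by sampling at m equally spaced points in [-1,1].

  m = 1001;
  xvec = linspace ( -1, 1, m );
  dx = 2 / ( m - 1 );
  fvec = f ( xvec );

  for i = 0 : n
    pvec = legpoly ( i, xvec );
    c(i+1) = ( fvec * pvec' ) * dx;    % approximate integral of f(x) * Pi(x)
  end

If your values disagree with the table below in the third decimal place or so, try increasing m; that is exactly the accuracy problem mentioned below.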
Test your routine by computing the Legendre approximation to the function e^x with n = 5. Your coefficients should be close to these values:
i |    0          1          2          3          4          5
--+-----------------------------------------------------------------
c | 1.661985   0.901117   0.226302   0.037660   0.004698   0.000469

These are the values the book prints. You may find it very difficult to compute these numbers. Try increasing the accuracy of your integral approximation a few times. When a simple problem like this is hard to solve accurately, it's a bad sign. In this case, it means that our integrals are very hard to compute accurately.
Now we have all the pieces in place. Given a function f defined on [-1,1], we can compute the Legendre coefficients c by computing dot products. Then, to evaluate the approximation at some point x, we simply add up c0 times the zero-th Legendre polynomial, and so on.
Exercise: write a routine legval.m to approximate a function f(x) over [-1,1] given its Legendre coefficients. Your routine should have the form:
function pvec = legval ( c, x )

Here, define n to be one less than the length of c, so that c(1) contains the coefficient we've called c0, and c(n+1) contains cn. The point or vector x is where we want to evaluate. Your code might look like this:
n = length ( c ) - 1;
pvec = zeros ( size ( x ) )
for i from 0 to n
  p = ?
  pvec = pvec + ?
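And here is one way those blanks might be filled in (again only a sketch, assuming the vectorized legpoly from the web page):

function pvec = legval ( c, x )

% LEGVAL evaluates the Legendre approximation with coefficients c
% at the point or vector x.

  n = length ( c ) - 1;
  pvec = zeros ( size ( x ) );

  for i = 0 : n
    p = legpoly ( i, x );          % values of the i-th polynomial at x
    pvec = pvec + c(i+1) * p;      % add in c(i+1) times that polynomial
  end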
Exercise: Test your evaluation routine by evaluating the approximation to e^x from the previous exercise. Plot the true and approximated values over the interval [-1,1]. Once you have computed the coefficients, you might like to see what the approximation looks like, using plotting. You could do this with the commands:
c = legcoef ( n, f )
x = linspace ( -1, 1, 101 )
y = exp ( x )
p = legval ( c, x )

You could also plot the error p-y and see if it oscillates evenly in the way that has been discussed. (Because our coefficients are inaccurate, you may see some bad behavior near the endpoints.)
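Putting it all together, a complete sketch might look like this (the plot commands are my own suggestion, and f is assumed to be a function handle as in the legcoef sketch above):

n = 5;
f = @exp;
c = legcoef ( n, f );
x = linspace ( -1, 1, 101 );
y = exp ( x );
p = legval ( c, x );

plot ( x, y, 'b-', x, p, 'r--' )    % true function versus approximation
figure
plot ( x, p - y )                   % the approximation error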
You should wonder what happens over a larger interval, say [-2,2]. You can evaluate the original and approximating functions over this interval, and plot them. It's not a pretty sight, but it's worth realizing that outside of [-1,1], your approximation has no guaranteed behavior whatsoever!
What we have done only "works" on the interval [-1,1]. Suppose you want to approximate sin(x) from 0 to 2*pi? Everything we've done can be adjusted to work in this case, but you have to be careful. The easiest approach is to essentially consider the function sin(x(t)), where x(-1)=0 and x(+1)=2*pi. Now look at this as a function of t...
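For example, a sketch of this change of variable for sin(x) on [0,2*pi], reusing legcoef and legval from above, might look like this (the map x = pi*(t+1) is just the linear function with x(-1)=0 and x(+1)=2*pi):

g = @(t) sin ( pi * ( t + 1 ) );    % sin(x(t)), viewed as a function of t on [-1,1]
c = legcoef ( 5, g );               % Legendre coefficients of g

x = linspace ( 0, 2*pi, 101 );      % points in the original interval
t = x / pi - 1;                     % the corresponding points t in [-1,1]
p = legval ( c, t );                % approximate values of sin(x)

plot ( x, sin ( x ), 'b-', x, p, 'r--' )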
Another basis that may be used is the set of Chebyshev polynomials. These are also defined on [-1,1], but use a different dot product involving a weight function. There are also Laguerre, Jacobi, and Hermite polynomials, which are appropriate for other dot products or intervals.
Finally, why couldn't we just work with the basis 1, x, x^2, and so on? There are two reasons: convenience and accuracy. Because this basis is not orthogonal, it's much more work to solve for the coefficients. And because the functions x^n and x^(n+1) are very similar, it becomes quite difficult to compute the coefficients accurately.
Compute the Legendre polynomial approximation to the function
y = atan ( 5 * x )

on the interval [-1,1], using the first 6 basis functions. The coefficients you get will not be "random". Send me