** Curve Fitting - Approximation **

Copyright (c) Susan Laflin. August 1999.

** Minimax Approximation. **

This is obtained when we define the `best-fitting' curve as the one which corresponds to the minimum value of the L-infinity-norm. Since the L-infinity-norm is defined as the maximum value of the error function over the interval or set of points, this curve gives the MINImum value of the MAXimum error - hence the name `minimax'.
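As an illustration (a minimal Python sketch; the helper name and the grid are my own choices), the L-infinity norm over a discrete set of points is simply the largest absolute error, and the minimax curve is the one which makes this value smallest:

```python
def linf_norm(err, xs):
    """L-infinity norm of an error function over a discrete set of points."""
    return max(abs(err(x)) for x in xs)

# Compare two straight-line fits to f(x) = x^2 on a grid over [0, 1]:
xs = [i / 100 for i in range(101)]
e1 = linf_norm(lambda x: x * x - x, xs)            # line y = x
e2 = linf_norm(lambda x: x * x - (x - 0.125), xs)  # line y = x - 1/8
```

Here `e2` comes out smaller than `e1`, so the second line is the better fit in the minimax sense.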

This problem was studied by Chebyshev and he formulated and proved the `Chebyshev Equioscillation Theorem', which is the basis for these methods.

** Chebyshev Equioscillation Theorem **

Let f(x) be a continuous function defined on a finite interval [a,b]. If pn(x) is the minimax polynomial approximation of degree n to f(x) over the interval [a,b], with error function en(x) = pn(x) - f(x), then there are at least n+2 points x1, x2, ..., x{n+2} in [a,b], where a <= x1 < x2 < ... < x{n+2} <= b, such that |en(xi)| = En, i = 1,2,...,n+2, where En is the error bound for the approximation, and

en(xi) = - en(x{i+1}) i = 1,2,...,n+1

There is ONLY ONE polynomial of degree <= n with this property.

** Example of minimax curve **

Replace the function f(x) by the best-fitting straight line p1(x) = a0 + a1*x over the interval [0,1].

Decreasing the error in one place will increase it somewhere else. Applying the equioscillation theorem, we note

f(0) - (a0 + a1*0) = E

f(X) - (a0 + a1*X) = -E

f(1) - (a0 + a1*1) = E

We also note that the error has a turning point at the interior point X, hence

d/dx ( f(x) - a0 - a1*x ) = 0 at x = X

or df/dx - a1 = 0 at x = X.

For f(x) = x^2 these equations become

-a0 = E

X^2 - a0 - a1*X = -E

1 - a0 - a1 = E

2X - a1 = 0

Solving them gives a1 = 1, X = 1/2 and a0 = -1/8, which gives y = x - 1/8 as the best-fitting line, with E = 1/8.
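This result can be checked numerically (a small Python sketch; the grid resolution is arbitrary):

```python
# Check the minimax line y = x - 1/8 for f(x) = x^2 on [0, 1].
f = lambda x: x * x
p = lambda x: x - 0.125            # candidate minimax line
e = lambda x: f(x) - p(x)          # error e(x) = x^2 - x + 1/8

# Equioscillation: e(x) takes the extreme values E, -E, E at 0, 1/2, 1.
values = [e(0.0), e(0.5), e(1.0)]  # [0.125, -0.125, 0.125]

# And the error never exceeds E = 1/8 in magnitude anywhere on [0, 1].
worst = max(abs(e(i / 1000)) for i in range(1001))
```

The three alternating extremes of equal magnitude are exactly what the equioscillation theorem requires for n = 1 (n+2 = 3 points).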

** Second Remez Algorithm **

1. Solve the set of equations f(xi) = sum_k ak.Qk(xi) + (-1)^i M for i = 1, 2, ..., m+2, to find the coefficients ak and M.

2. Locate the points yk at which the maxima and minima of the error occur, where

a <= y1 < y2 < .... < y{m+2} <= b

This involves calculating the error E(yk) at each of these points. Let the largest error be Emax.

3. Test for Convergence. If Emax <= M then print the results and stop. Otherwise substitute the yk for the xk and repeat from step 1.
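The steps above can be sketched in plain Python for the discrete case, taking the monomial basis Qk(x) = x^k (the function and helper names are my own, and the exchange step assumes a well-behaved error curve with exactly m+2 alternating extrema - a production implementation must handle defective cases):

```python
from math import exp

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with
    partial pivoting (plain Python, to stay self-contained)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            t = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= t * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def remez(f, grid, m, iters=30, tol=1e-10):
    """Second Remez algorithm on a discrete grid, monomial basis Qk(x) = x^k.
    Returns the coefficients a0..am and the final error bound."""
    n = m + 2                                     # size of the reference set
    ref = [round(i * (len(grid) - 1) / (n - 1)) for i in range(n)]
    for _ in range(iters):
        xs = [grid[i] for i in ref]
        # Step 1: solve f(xi) = sum_k ak*xi^k + (-1)^i * M for a0..am and M.
        A = [[x ** k for k in range(m + 1)] + [(-1.0) ** i]
             for i, x in enumerate(xs)]
        sol = solve(A, [f(x) for x in xs])
        a, M = sol[:-1], sol[-1]
        err = [f(x) - sum(ak * x ** k for k, ak in enumerate(a)) for x in grid]
        # Step 2: split the grid into runs of constant error sign and keep
        # the worst point of each run - the candidate extrema yk.
        ext, run = [], [0]
        for i in range(1, len(grid)):
            if err[i] * err[run[0]] < 0:          # sign change closes a run
                ext.append(max(run, key=lambda j: abs(err[j])))
                run = [i]
            else:
                run.append(i)
        ext.append(max(run, key=lambda j: abs(err[j])))
        Emax = max(abs(err[i]) for i in ext)
        # Step 3: converged once the worst error matches the levelled |M|.
        if Emax <= abs(M) + tol:
            return a, abs(M)
        assert len(ext) == n  # holds for well-behaved f; real code must cope
        ref = ext             # second Remez: exchange ALL reference points
    return a, abs(M)

# Example: minimax straight line (m = 1) to exp(x) on a grid over [0, 1].
a, E = remez(exp, [i / 200 for i in range(201)], 1)
```

Run on f(x) = e^x over [0,1] with m = 1, this converges after a couple of exchanges to a line with slope e - 1 (the known minimax slope), with the error levelled at about 0.106.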

Points to Consider:

1. Choice of Qk(x).

2. Method of locating yk.

3. In the discrete case, evaluate the error at all data points xi.

4. Choice of starting values (initial xk).

Although the Remez algorithm is an iterative method, it is a useful means of calculating the best-fitting polynomial to replace either a more complicated function or a set of data points.