Chebyshev approximation
One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree. This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions.
If one calculates the coefficients in the Chebyshev expansion for a function:$$ f(x)\sim \sum _{i=0}^{\infty }c_{i}T_{i}(x) $$
and then cuts off the series after the $ T_{N} $ term, one gets an $N$th-degree polynomial approximating $f(x)$.
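As a concrete sketch of this recipe, the coefficients $c_i$ can be computed numerically from the orthogonality integral, using the substitution $x = \cos\theta$, which turns it into $c_i = \frac{2}{\pi}\int_0^\pi f(\cos\theta)\cos(i\theta)\,d\theta$ (with $c_0$ halved). The helper name `cheb_coeffs` and the choice of $f = e^x$ truncated after $T_5$ are illustrative, not from the text:

```python
import numpy as np

def cheb_coeffs(f, n_terms, n_quad=2000):
    # Chebyshev coefficients of f on [-1, 1] via x = cos(theta):
    #   c_i = (2/pi) * integral_0^pi f(cos t) cos(i t) dt,  c_0 halved.
    t = np.linspace(0.0, np.pi, n_quad)
    ft = f(np.cos(t))
    dt = t[1] - t[0]
    c = np.empty(n_terms)
    for i in range(n_terms):
        vals = ft * np.cos(i * t)
        # trapezoidal rule (endpoints weighted by 1/2)
        c[i] = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt * 2.0 / np.pi
    c[0] /= 2.0
    return c

# Cut off the expansion of exp(x) after T_5: a degree-5 approximating polynomial.
c = cheb_coeffs(np.exp, 6)
x = np.linspace(-1.0, 1.0, 401)
approx = np.polynomial.chebyshev.chebval(x, c)
max_err = np.max(np.abs(approx - np.exp(x)))
```

For $e^x$ the coefficients decay very quickly, so even this low-degree truncation is accurate to a few times $10^{-5}$ over the whole interval.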
The reason this polynomial is nearly optimal is that, for functions with rapidly converging power series, if the series is cut off after some term, the total error arising from the cutoff is close to the first term after the cutoff. That is, the first term after the cutoff dominates all later terms. The same is true if the expansion is in terms of Chebyshev polynomials. If a Chebyshev expansion is cut off after $ T_{N} $, the error takes a form close to a multiple of $ T_{N+1} $. The Chebyshev polynomials have the property that they are level – they oscillate between +1 and −1 in the interval $[−1, 1]$. $ T_{N+1} $ is a level function with $N+2$ extrema, so the error is close to a level function with $N+2$ extrema, and the truncated expansion is close to the optimal $N$th-degree polynomial.
In the graphs above, the blue error function is sometimes better than (inside of) the red function, but sometimes worse, meaning that it is not quite the optimal polynomial. The discrepancy is less serious for the exp function, which has an extremely rapidly converging power series, than for the log function.
Chebyshev approximation is the basis for Clenshaw–Curtis quadrature, a numerical integration technique.
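The connection is direct: Clenshaw–Curtis quadrature expands the integrand in Chebyshev polynomials and integrates the expansion term by term, using $\int_{-1}^{1} T_k(x)\,dx = \frac{2}{1-k^2}$ for even $k$ (odd terms integrate to zero). A minimal sketch, with the function name and degree chosen for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def clenshaw_curtis(f, n=16):
    # Expand f in Chebyshev polynomials (interpolation at Chebyshev points),
    # then integrate term by term: int_{-1}^{1} T_k dx = 2/(1 - k^2), even k.
    c = C.chebinterpolate(f, n)
    k = np.arange(0, n + 1, 2)            # only even-degree terms contribute
    return np.sum(c[::2] * 2.0 / (1.0 - k**2))

result = clenshaw_curtis(np.exp)          # compare with e - 1/e
```

Because the Chebyshev coefficients of a smooth integrand decay rapidly, the quadrature converges very quickly; for $e^x$ on $[-1,1]$ a degree-16 expansion already matches $e - e^{-1}$ to near machine precision.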