= Numerical Differentiation =
== Introduction ==
Often we need to approximate the derivative of a function when we cannot obtain it analytically. Here we discuss several ways to do this numerically.
== Taylor Series ==
Let's consider the situation where we have samples of a function <math>f(x)</math> at discrete points <math>x_1, x_2, \ldots, x_n</math> separated by spacing <math>h</math>, as depicted in the following figure:
[[Image:uniform_grid_1D.png|thumb|400px|center]]
Consider [[Taylor_Series|Taylor series]] expansions about some arbitrary point <math>x_i</math>. Since <math>x_{i+1}-x_i = x_i-x_{i-1} = h</math>, we can write these as follows:
:{| border=1 cellpadding=5 cellspacing=0 style="text-align:left"
! Approximation location
! Taylor series expansion about <math>x_i</math>
|-
| <math>x_{i+1}</math>
||<math>f(x_{i+1}) = f(x_i) + f^\prime(x_i) h + \tfrac{1}{2}f^{\prime\prime}(x_i) h^2 + \tfrac{1}{6}f^{\prime\prime\prime}(x_i) h^3 + \tfrac{1}{24}f^{(4)}(x_i) h^4 + \cdots</math>
|-
| <math>x_{i-1}</math>
||<math>f(x_{i-1}) = f(x_i) - f^\prime(x_i) h + \tfrac{1}{2}f^{\prime\prime}(x_i) h^2 - \tfrac{1}{6}f^{\prime\prime\prime}(x_i) h^3 + \tfrac{1}{24}f^{(4)}(x_i) h^4 - \cdots</math>
|}
If we subtract the Taylor series expansion at <math>x_{i-1}</math> from the one at <math>x_{i+1}</math>, we find
:<math>
f(x_{i+1})-f(x_{i-1}) = 2f^{\prime}(x_i)h + \tfrac{1}{3} f^{\prime\prime\prime}(x_i) h^3 + \cdots
</math>
Now we solve this for <math>f^\prime(x_i)</math> to find
:<math>
f^{\prime}(x_i) = \frac{f(x_{i+1})-f(x_{i-1})}{2h} - \tfrac{1}{6} f^{\prime\prime\prime}(x_i)h^2 - \cdots
</math>
Now if <math>h</math> is small, then the second term (the one containing <math>h^2</math>) is small and we can approximate the derivative as
:{|border=2
|-
|<math>
f^\prime(x_i) \approx \frac{f(x_{i+1})-f(x_{i-1})}{2h}
</math>
|}
We call this a '''second-order approximation''' to <math>f^\prime(x)</math> because the largest term dropped when we truncated the series for <math>f^\prime(x_i)</math> is of order <math>h^2</math>.
Note that we now have a way to approximate the derivative of a function if we have the function's values at two locations.
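To make this concrete, here is a minimal sketch (in Python with NumPy; the test function, grid, and variable names are chosen only for illustration and are not part of the original article) that applies the central-difference formula to <math>f(x)=\sin(x)</math>, whose exact derivative <math>\cos(x)</math> lets us check the error directly:
<source lang="python">
import numpy as np

# Sample f(x) = sin(x) on a uniform grid with spacing h.
h = 0.01
x = np.arange(0.0, 2.0 * np.pi, h)
f = np.sin(x)

# Central difference: f'(x_i) ~ (f(x_{i+1}) - f(x_{i-1})) / (2h),
# available only at the interior points i = 1, ..., n-2.
df_central = (f[2:] - f[:-2]) / (2.0 * h)

# Compare with the exact derivative cos(x) at the interior points.
err = np.max(np.abs(df_central - np.cos(x[1:-1])))
print(f"max error with h = {h}: {err:.2e}")
</source>
With <math>h=0.01</math> the reported error is on the order of <math>10^{-5}</math>, consistent with the <math>\tfrac{1}{6}f^{\prime\prime\prime}(x_i)h^2</math> truncation term.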
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
On a uniform mesh, we can use this technique to generate a variety of approximations to derivatives, as summarized in the following table:
{| align=center
|
{| border=1 cellpadding=5 cellspacing=0 style="text-align:center"
! Formula for <math> \left. \frac{\mathrm{d}\phi}{\mathrm{d}x} \right|_{i}</math>
! Order
|-
|<math>\frac{\phi_{i+1}-\phi_{i}}{h}</math>
||<math>\mathcal{O}\left( h \right)</math>
|-
|<math>\frac{\phi_{i}-\phi_{i-1}}{h}</math>
||<math>\mathcal{O}\left( h \right)</math>
|-
|<math>\frac{\phi_{i+1}-\phi_{i-1}}{2 h}</math>
||<math>\mathcal{O}\left( h^2 \right)</math>
|-
|<math>\frac{\phi_{i-2} - 8 \phi_{i-1} + 8 \phi_{i+1} - \phi_{i+2}}{12 h}</math>
||<math>\mathcal{O}\left( h^4 \right)</math>
|}
||
[[Image:derConverge_uniform_mesh.png|thumb|450px|Error in approximation of <math>\frac{df}{dx}</math> as a function of the size of the interval <math>h</math>. Click [[media:der_demo.m|here]] to download the MATLAB file that produced this plot.]]
|}
− | |||
As can be seen in the figure above, higher-order approximations result in significantly lower error for a given spacing <math>h</math>. Note that for <math>h<10^{-3}</math> the fourth-order approximation is contaminated by roundoff error. The same would happen for the other derivative approximations, but at smaller <math>h</math>.
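The convergence behaviour shown in the figure can be reproduced with a short script. The following is an independent sketch in Python/NumPy (it is not the der_demo.m file linked above; the test function <math>\sin(x)</math> and the evaluation point are arbitrary choices) that evaluates the four stencils from the table over a range of spacings:
<source lang="python">
import numpy as np

f, dfdx = np.sin, np.cos          # test function and its exact derivative
x0 = 1.0                          # point at which d/dx is approximated
hs = np.logspace(-1, -8, 15)      # sequence of grid spacings

for h in hs:
    fwd  = (f(x0 + h) - f(x0)) / h                                   # O(h)
    bwd  = (f(x0) - f(x0 - h)) / h                                   # O(h)
    cen  = (f(x0 + h) - f(x0 - h)) / (2 * h)                         # O(h^2)
    four = (f(x0 - 2*h) - 8*f(x0 - h)
            + 8*f(x0 + h) - f(x0 + 2*h)) / (12 * h)                  # O(h^4)
    exact = dfdx(x0)
    print(f"h={h:.1e}  err: fwd={abs(fwd - exact):.1e}  "
          f"bwd={abs(bwd - exact):.1e}  cen={abs(cen - exact):.1e}  "
          f"4th={abs(four - exact):.1e}")
</source>
For moderate <math>h</math> the errors shrink at the expected rates; once the truncation error approaches machine precision, subtracting nearly equal numbers and dividing by a small <math>h</math> makes the error grow again, which is the roundoff contamination seen in the figure.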
<!-- jcs need to fill this in
{| border=1 cellpadding=5 cellspacing=0 align=center style="text-align:center"
|+ Some Second Derivative Expressions for a Uniform Mesh
|-
! Derivative at point <math>i</math>
! Discrete Representation (uniform mesh)
! Order
|}
-->

== Lagrange Polynomials ==

Lagrange polynomials, which are commonly used for [[Interpolation#Lagrange_Polynomial_Interpolation|interpolation]], can also be used for differentiation. The formula is
<center><math>f^{\prime}(x) = \sum_{k=0}^n y_k L_{k}^{\prime}(x),</math></center>
where <math>L_{k}^{\prime}(x)</math> is given as
<center><math>L_{k}^{\prime}(x) = \left[ \sum_{{j=0}\atop{j\ne k}}^{n} \left( \prod_{ {i=0}\atop{i \ne j,k} }^n (x-x_i) \right) \right] \left[ \prod_{{i=0}\atop{i\ne k}}^{n} (x_k-x_i) \right]^{-1}. </math></center>
Here <math>n</math> is the order of the polynomial, and we require <math>n_p=n+1</math> points to form the Lagrange polynomial.
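The formula above can be coded directly. The following Python sketch (the helper names and sample data are hypothetical, chosen only for the example) evaluates <math>L_{k}^{\prime}(x)</math> with the sum-of-products expression and uses it to differentiate samples of <math>\sin(x)</math>:
<source lang="python">
import numpy as np

def lagrange_basis_derivative(k, x, xs):
    """Evaluate L_k'(x) for nodes xs via the sum-of-products formula above."""
    n = len(xs)
    denom = np.prod([xs[k] - xs[i] for i in range(n) if i != k])
    total = 0.0
    for j in range(n):
        if j == k:
            continue
        total += np.prod([x - xs[i] for i in range(n) if i not in (j, k)])
    return total / denom

def lagrange_derivative(x, xs, ys):
    """Approximate f'(x) as sum_k y_k * L_k'(x)."""
    return sum(ys[k] * lagrange_basis_derivative(k, x, xs) for k in range(len(xs)))

# Example: differentiate sin(x) from 4 samples (a cubic Lagrange polynomial).
xs = np.array([0.0, 0.4, 0.8, 1.2])
ys = np.sin(xs)
print(lagrange_derivative(0.6, xs, ys), np.cos(0.6))  # approximation vs. exact
</source>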