Taylor Series

Single Variable Taylor Series

If we know a function's value and its derivatives at some point x_0, we can use a Taylor series to approximate its value at a nearby point x as


  f(x) = f(x_0)
       + (x-x_0)f^\prime (x_0)
       + \frac{1}{2} (x-x_0)^2 f^{\prime\prime}(x_0)
       + \frac{1}{3!} (x-x_0)^3 f^{\prime\prime\prime}(x_0)
       + \frac{1}{4!} (x-x_0)^4 f^{\prime\prime\prime\prime}(x_0)
       + \ldots

If x - x_0 is small, then we can truncate the series to find


  f(x) \approx f(x_0) + (x-x_0)f^\prime (x_0)

Note that for an nth-order polynomial function the Taylor series terminates after n+1 terms, so those n+1 terms recover the function exactly.
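
As a concrete illustration of the truncation shown above, here is a minimal Python sketch (added here; the choice of f(x) = \exp(x) and x_0 = 0 is arbitrary) showing how the error of the one-term (linear) approximation shrinks as x approaches x_0:

  import math

  def linear_taylor(f, fprime, x0, x):
      # One-term (linear) truncation: f(x0) + (x - x0) * f'(x0)
      return f(x0) + (x - x0) * fprime(x0)

  # Example: f(x) = exp(x), whose derivative is also exp(x); x0 = 0.
  x0 = 0.0
  for dx in (1.0, 0.1, 0.01):
      x = x0 + dx
      exact = math.exp(x)
      approx = linear_taylor(math.exp, math.exp, x0, x)
      print(f"x - x0 = {dx}: exact = {exact:.6f}, linear = {approx:.6f}, "
            f"error = {exact - approx:.2e}")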


[Figure: Taylor series approximations of various functions, showing the effect of retaining up to 4 terms in the expansion. One panel shows approximations to f(x) = x^3 - x, the other approximations to f(x) = \exp(x).]

For f(x) = x^3 - x, the expansions about x_0 are:
  • f_0(x) = x_0^3 - x_0
  • f_1(x) = f_0(x) + (x-x_0)(3x_0^2-1)
  • f_2(x) = f_1(x) + \tfrac{1}{2}(x-x_0)^2(6x_0)
  • f_3(x) = f_2(x) + \tfrac{1}{6}(x-x_0)^3(6)

Note that the third expansion, f_3(x), recovers the function exactly at all points, since all higher derivatives (and therefore all higher terms in the expansion) are zero.
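
This can be checked numerically; the short Python sketch below (added for illustration, with an arbitrary expansion point x_0 = 2) evaluates both the function and its four-term expansion at a few points:

  def f(x):
      return x**3 - x

  def f3(x, x0):
      # Four-term expansion of x^3 - x about x0 (the f_3 above)
      return ((x0**3 - x0)
              + (x - x0) * (3 * x0**2 - 1)
              + 0.5 * (x - x0)**2 * (6 * x0)
              + (1.0 / 6.0) * (x - x0)**3 * 6)

  x0 = 2.0  # arbitrary expansion point
  for x in (-1.5, 0.0, 0.7, 3.0):
      print(x, f(x), f3(x, x0))   # f(x) and f3(x, x0) agree (up to roundoff)

The expansion matches the exact value at every point, independent of the choice of x_0.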

For f(x) = \exp(x), the expansions about x_0 are:
  • f_0(x) = \exp(x_0)
  • f_1(x) = f_0(x) + (x-x_0)\exp(x_0)
  • f_2(x) = f_1(x) + \tfrac{1}{2}(x-x_0)^2\exp(x_0)
  • f_3(x) = f_2(x) + \tfrac{1}{6}(x-x_0)^3\exp(x_0)

Note that as we retain more terms the approximation becomes increasingly accurate. However, for the exponential function this is an infinite series, since all of its derivatives are nonzero.
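
The effect of retaining more terms can be seen by summing the series numerically; the following Python sketch (added for illustration, with x_0 = 0 and x = 1 chosen arbitrarily) prints the partial sums f_0 through f_5 and their errors:

  import math

  x0, x = 0.0, 1.0   # expansion point and evaluation point (arbitrary)
  exact = math.exp(x)

  approx = 0.0
  for k in range(6):
      # k-th term of the expansion: exp(x0) * (x - x0)^k / k!
      approx += math.exp(x0) * (x - x0)**k / math.factorial(k)
      print(f"f_{k}(x) = {approx:.6f}, error = {exact - approx:.2e}")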

Multivariate Taylor Series

Consider the case where we have several equations that are functions of multiple variables,


\begin{array}{c}
f_{1}\left(x_{1},x_{2},\cdots x_{n}\right)\\
f_{2}\left(x_{1},x_{2},\cdots x_{n}\right)\\
\vdots \\
f_{n}\left(x_{1},x_{2},\cdots x_{n}\right)\end{array}

We can write a multivariate Taylor series expansion for the ith function f_i(\mathbf{x}) as


 f_{i}(\mathbf{x}) \approx f_{i}(\mathbf{x}_{0})
 + \sum_{j=1}^{n} \left[
   (x_{j}-x_{j0})\frac{\partial f_{i}}{\partial x_{j}}
 + \frac{1}{2}(x_{j}-x_{j0})^{2}\frac{\partial^{2}f_{i}}{\partial x_{j}^{2}}
 + \frac{1}{3!}(x_{j}-x_{j0})^{3}\frac{\partial^{3}f_{i}}{\partial x_{j}^{3}}
 + \cdots \right]

where \mathbf{x}_0 is the point about which we expand (and at which the derivatives are evaluated) and n is the number of independent variables.


The terms \frac{\partial f_{i}}{\partial x_{j}} form a matrix called the Jacobian matrix, [\mathbf{J}]. It is used in solving nonlinear systems of equations using Newton's method.
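
Written out (added here for clarity), the entry in row i and column j of the Jacobian is the derivative of the ith function with respect to the jth variable:

 [\mathbf{J}] =
 \begin{bmatrix}
   \frac{\partial f_{1}}{\partial x_{1}} & \frac{\partial f_{1}}{\partial x_{2}} & \cdots & \frac{\partial f_{1}}{\partial x_{n}} \\
   \frac{\partial f_{2}}{\partial x_{1}} & \frac{\partial f_{2}}{\partial x_{2}} & \cdots & \frac{\partial f_{2}}{\partial x_{n}} \\
   \vdots & \vdots & \ddots & \vdots \\
   \frac{\partial f_{n}}{\partial x_{1}} & \frac{\partial f_{n}}{\partial x_{2}} & \cdots & \frac{\partial f_{n}}{\partial x_{n}}
 \end{bmatrix},
 \qquad
 J_{ij} = \frac{\partial f_{i}}{\partial x_{j}}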


Example

Consider the equations


 \begin{cases}
   f(x,y) = x^2 + y^2 -1 \\
   g(x,y) = y - x^3 + x - 1
 \end{cases}

Here n = 2. Let's first look at the expansion for f(x,y). Applying our general equation above, we find


 \begin{align}
  f(x,y) \approx f(x_{0},y_{0})
    &+ (x-x_{0})\frac{\partial f}{\partial x}
     + \tfrac{1}{2}(x-x_{0})^{2}\frac{\partial^{2}f}{\partial x^{2}}
     + \tfrac{1}{6}(x-x_{0})^{3}\frac{\partial^{3}f}{\partial x^{3}}
     + \cdots \\
    &+ (y-y_{0})\frac{\partial f}{\partial y}
     + \tfrac{1}{2}(y-y_{0})^{2}\frac{\partial^{2}f}{\partial y^{2}}
     + \tfrac{1}{6}(y-y_{0})^{3}\frac{\partial^{3}f}{\partial y^{3}}
     +\cdots
 \end{align}

The partial derivatives of f(x,y) are:

  \alpha    \partial f/\partial\alpha    \partial^2 f/\partial\alpha^2    \partial^3 f/\partial\alpha^3    \partial^4 f/\partial\alpha^4
  x         2x                           2                                0                                0
  y         2y                           2                                0                                0
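
These entries are easy to verify symbolically; the short Python sketch below (added as a check, assuming the sympy package is available) differentiates f repeatedly with respect to each variable:

  import sympy as sp

  x, y = sp.symbols('x y')
  f = x**2 + y**2 - 1

  for var in (x, y):
      # first through fourth derivatives of f with respect to var
      print(var, [sp.diff(f, var, k) for k in range(1, 5)])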

Substituting these derivatives, evaluated at (x_0, y_0), we find


 f = f(x_0,y_0)
   + (x-x_0) 2x_0 + (x-x_0)^2 + (y-y_0) 2y_0 + (y-y_0)^2
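
Because f is quadratic, this expansion is actually exact, as a quick numerical check confirms (an illustrative Python sketch with an arbitrary expansion point):

  def f(x, y):
      return x**2 + y**2 - 1

  def f_taylor(x, y, x0, y0):
      # The expansion above, with derivatives evaluated at (x0, y0)
      return (f(x0, y0)
              + (x - x0) * 2 * x0 + (x - x0)**2
              + (y - y0) * 2 * y0 + (y - y0)**2)

  x0, y0 = 0.5, -1.0   # arbitrary expansion point
  print(f(2.0, 3.0), f_taylor(2.0, 3.0, x0, y0))   # both print 12.0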

We can do the same thing for g(x,y)


 \begin{align}
  g(x,y) \approx g(x_{0},y_{0})
    &+ (x-x_{0})\frac{\partial g}{\partial x}
     + \tfrac{1}{2}(x-x_{0})^{2}\frac{\partial^{2}g}{\partial x^{2}}
     + \tfrac{1}{6}(x-x_{0})^{3}\frac{\partial^{3}g}{\partial x^{3}}
     + \cdots \\
    &+ (y-y_{0})\frac{\partial g}{\partial y}
     + \tfrac{1}{2}(y-y_{0})^{2}\frac{\partial^{2}g}{\partial y^{2}}
     + \tfrac{1}{6}(y-y_{0})^{3}\frac{\partial^{3}g}{\partial y^{3}}
     +\cdots
 \end{align}

The partial derivatives of g(x,y) are:

  \alpha    \partial g/\partial\alpha    \partial^2 g/\partial\alpha^2    \partial^3 g/\partial\alpha^3    \partial^4 g/\partial\alpha^4
  x         -3x^2+1                      -6x                              -6                               0
  y         1                            0                                0                                0

Substituting these derivatives, evaluated at (x_0, y_0), we find


 g = g(x_0,y_0) + (x-x_0)(-3x_0^2+1) + (x-x_0)^2(-3x_0) - (x-x_0)^3
   + (y-y_0)
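
As mentioned above, Newton's method for solving the system f(x,y) = g(x,y) = 0 keeps exactly the first-order (Jacobian) terms of these expansions and drops everything of higher order. Below is a minimal Python sketch (added for illustration; the starting guess of (0.5, 0.5) is arbitrary) that iterates the Newton update, solving J dx = -F at each step using the derivatives tabulated above:

  def F(x, y):
      # residuals f(x, y) and g(x, y) from the example above
      return (x**2 + y**2 - 1, y - x**3 + x - 1)

  def J(x, y):
      # Jacobian [[df/dx, df/dy], [dg/dx, dg/dy]] from the tables above
      return ((2 * x, 2 * y), (-3 * x**2 + 1, 1))

  x, y = 0.5, 0.5   # arbitrary initial guess
  for it in range(8):
      fv, gv = F(x, y)
      (a, b), (c, d) = J(x, y)
      det = a * d - b * c
      # Newton update: solve J * [dx, dy] = -[f, g] for this 2x2 system
      dx = (b * gv - d * fv) / det
      dy = (c * fv - a * gv) / det
      x, y = x + dx, y + dy
      print(f"iteration {it}: x = {x:.8f}, y = {y:.8f}, "
            f"residuals = {F(x, y)}")

From this starting guess the residuals should shrink rapidly toward zero, which is why the truncated (linear) Taylor expansion is the basis of the method.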