Linear Algebra


Basics

Let's first make some definitions:

Vector 
A one-dimensional collection of numbers. Examples:
a=\left( \begin{array}{cccc} 1 & 2 & 3 & 4 \end{array} \right), \quad b=\left( \begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array} \right)
Matrix 
A two-dimensional collection of numbers. Examples:
 A = \left[ \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array} \right], \quad B = \left[ \begin{array}{cc} 9 & 6 \\ 2 & 3 \end{array}\right]
Array 
An n-dimensional collection of numbers. A vector is a 1-dimensional array while a matrix is a 2-dimensional array.
Transpose 
An operation that involves interchanging the rows and columns of an array. It is indicated by a ^\mathsf{T} superscript. For example:
a=\left( \begin{array}{cccc} 1 & 2 & 3 & 4 \end{array} \right) \; \Rightarrow \; a^\mathsf{T} = \left( \begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array} \right)
 A = \left[ \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array} \right] \; \Rightarrow \;
  A^\mathsf{T} = \left[ \begin{array}{cc} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{array} \right]
Vector Magnitude, \left|\vec{a}\right| 
A measure of the length of a vector, \left|\vec{a}\right| = \sqrt{ \sum_{i=1}^{n} a_i^2 }
Example: \vec{a}=\left( \begin{array}{ccc} 2 & 8 & 3 \end{array} \right) \; \Rightarrow \; \left|\vec{a}\right| = \sqrt{ 2^2 + 8^2 + 3^2 }
Example: \vec{b}=\left( \begin{array}{cccc} b_1 & b_2 & b_3 & b_4\end{array} \right) \; \Rightarrow \; \left|\vec{b}\right| = \sqrt{ b_1^2 + b_2^2 + b_3^2 +b_4^2 }
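
As a small illustration of the transpose and the magnitude formula, here is a minimal Python sketch (assuming NumPy is available; the arrays are the examples above):

 import numpy as np
 
 A = np.array([[1, 2, 3],
               [4, 5, 6]])
 print(A.T)                    # transpose: rows and columns interchanged
 # [[1 4]
 #  [2 5]
 #  [3 6]]
 
 a = np.array([2.0, 8.0, 3.0])
 mag = np.sqrt(np.sum(a**2))   # sqrt(2^2 + 8^2 + 3^2) = sqrt(77)
 print(mag)                    # 8.77496..., same as np.linalg.norm(a)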


Matrix & Vector Algebra

There are many common operations involving matrices and vectors, including:

  • the vector dot product,
  • the vector cross product,
  • the matrix-vector product, and
  • the matrix-matrix product.

These are each discussed in the following sub-sections.

Vector Dot Product

The dot product of two vectors produces a scalar. Physically, the dot product of a and b represents the projection of a onto the direction of b, scaled by the magnitude of b.

Given two vectors a and b, their dot product is formed as


 c = \vec{a} \cdot \vec{b} = \sum_{i=1}^n a_i b_i,

where a_i and b_i are the components of vectors a and b, respectively. This form is most useful when we know the components of the two vectors.

Occasionally we know the magnitude of the two vectors and the angle between them. In this case, we can calculate the dot product as


 c = \vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\,\cos(\theta),

where θ is the angle between the vectors a and b.
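
Both forms of the dot product can be checked numerically; here is an illustrative Python sketch, assuming NumPy and two example vectors:

 import numpy as np
 
 a = np.array([1.0, 2.0, 3.0])
 b = np.array([4.0, 5.0, 6.0])
 
 # component form: c = sum_i a_i b_i
 c = np.sum(a * b)                                               # 32.0, same as np.dot(a, b)
 
 # angle form: c = |a| |b| cos(theta)
 theta = np.arccos(c / (np.linalg.norm(a) * np.linalg.norm(b)))
 print(np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta))    # 32.0 again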

For more information on the dot product, see Wikipedia's article.

Thought Examples

Consider the cartoon shown to the right.
[Image: DotProdExample.png]
  • At noon, when the sun is directly overhead, a rocket is launched directly toward the sun at 1000 mph. How fast is its shadow moving?
Define the rocket's velocity as \vec{R}.
Define the unit vector along the ground (i.e. the direction of the rocket's shadow on the ground) as \vec{s}, where |\vec{s}| = 1.
Intuition tells us that if the rocket is moving directly toward the sun, then its shadow does not appear to move at all. The dot product confirms this: \vec{R}\cdot\vec{s} = |\vec{R}| \, |\vec{s}| \cos(\theta) = 0 since cos(90°) = 0.
  • Consider the same rocket going parallel to the earth's surface. How fast is its shadow moving?
If the rocket is going parallel to the earth's surface, our intuition tells us that its shadow moves at the same speed. This is confirmed by the mathematics, since the angle between the rocket's path and the ground is 0. Therefore, \vec{R}\cdot\vec{s} = |\vec{R}| \, |\vec{s}| \cos(\theta) = 1000 \, \textrm{mph} since cos(0°) = 1.
  • What if the rocket were going at a 45° angle?
Our intuition tells us that the shadow will appear to move, but not as fast as the rocket itself. Mathematically, we have v_s = \vec{R}\cdot\vec{s} = |\vec{R}| |\vec{s}| \cos(45^\circ) = 500\sqrt{2} \approx 707.1 \, \textrm{mph}. A quick numerical check of this case is sketched below.
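
The sketch below checks the 45° case in Python; the coordinate choice (x along the ground, y straight up toward the sun) is an assumption made only for illustration:

 import numpy as np
 
 theta = np.radians(45.0)
 R = 1000.0 * np.array([np.cos(theta), np.sin(theta)])   # rocket velocity, mph
 s = np.array([1.0, 0.0])                                # unit vector along the ground
 
 print(np.dot(R, s))   # 707.106..., the shadow's speed in mph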

Vector Cross Product

The cross product of two vectors produces a vector perpendicular to the original two. The common right-hand rule is used to determine the direction of the resulting vector.

[Image: CrossProd.png]

Assume that we have vectors a and b defined as


\begin{align}
 \vec{a} &= a_x \hat{i} + a_y \hat{j} + a_z \hat{k} \\
 \vec{b} &= b_x \hat{i} + b_y \hat{j} + b_z \hat{k}
\end{align}

where \hat{i}, \hat{j}, and \hat{k} represent the unit vectors in the x, y, and z directions, respectively. The cross product of a and b is then defined as


 \vec{c} = \vec{a} \times \vec{b} = \left( a_y b_z -a_z b_y \right) \hat{i} + \left(a_z b_x -a_x b_z\right) \hat{j} + \left(a_x b_y -a_y b_x\right) \hat{k}

If we know the magnitude of a and b and the angle between them (θ), then the cross-product is given as


 \vec{a}\times\vec{b} = |\vec{a}|\,|\vec{b}|\,\sin(\theta) \, \vec{n},

where \vec{n} is the unit vector perpendicular to the plane defined by the vectors a and b, with its direction given by the right-hand rule.
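
The component formula above translates directly into a short Python sketch (assuming NumPy; the vectors are just simple examples):

 import numpy as np
 
 a = np.array([1.0, 0.0, 0.0])   # i-hat
 b = np.array([0.0, 1.0, 0.0])   # j-hat
 
 # component form of the cross product
 c = np.array([a[1]*b[2] - a[2]*b[1],
               a[2]*b[0] - a[0]*b[2],
               a[0]*b[1] - a[1]*b[0]])
 
 print(c)               # [0. 0. 1.], i.e. k-hat, as the right-hand rule predicts
 print(np.cross(a, b))  # NumPy's built-in gives the same result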

For more information, see Wikipedia's article on the cross product.

Matrix-Vector Product

The matrix-vector product is frequently encountered in solving linear systems of equations, although there are many other uses for it.

Consider a matrix A with m rows and n columns and vector b with n rows:


A=\left[\begin{array}{cccc}
 a_{11} & a_{12} & \cdots & a_{1n}\\
 a_{21} & a_{22} & \cdots & a_{2n}\\
 \vdots & \vdots & \ddots & \vdots\\
 a_{m1} & a_{m2} & \cdots & a_{mn}\end{array}\right],
\quad
b=
\left(\begin{array}{c}
 b_{1}\\ b_{2}\\ \vdots\\ b_{n}
\end{array}\right)

We may multiply A and b if the number of columns in A is equal to the number of rows in b. The product, c = Ab, is a vector with m rows. The ith entry of c is given as


  c_i=\sum_{j=1}^n A_{i,j}b_j

For example, let's consider the following case:


A = \left[\begin{array}{ccc}
1 & 3 & 2 \\
2 & 5 & 4
\end{array}\right],
\quad
b = \left(\begin{array}{c}
 10 \\ 4 \\ 6
\end{array} \right)

Here we have m=2 and n=3. We define the product, c=Ab, as follows:

  1. Be sure that A and b are compatible for multiplication.
    • Since A has three columns and b has three rows, they are compatible.
  2. Determine how many entries are in c.
    • The c vector will have as many rows as A has.
    • Since A has two rows, c will have two rows.
  3. Determine the elements in c
    • Applying the above formula for i=1 we find
      
 \begin{align}
   c_1 &= \sum_{j=1}^{3} A_{1,j} b_j \\
       &= A_{1,1} b_1 + A_{1,2} b_2 + A_{1,3} b_3 \\
       &= 1 \cdot 10 + 3\cdot 4 + 2 \cdot 6 \\
       &= 34.
 \end{align}
    • Applying the above formula for i=2 we find
      
 \begin{align}
   c_2 &= \sum_{j=1}^{3} A_{2,j} b_j \\
       &= A_{2,1} b_1 + A_{2,2} b_2 + A_{2,3} b_3 \\
        &= 2 \cdot 10 + 5 \cdot 4 + 4 \cdot 6 \\
       &= 64.
 \end{align}

Therefore, we have


 c=\left(\begin{array}{c} 34 \\ 64 \end{array}\right).
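
The same example can be reproduced with an explicit double loop implementing c_i = \sum_{j} A_{i,j} b_j; this is only an illustrative sketch, with NumPy assumed for the array containers:

 import numpy as np
 
 A = np.array([[1, 3, 2],
               [2, 5, 4]])
 b = np.array([10, 4, 6])
 
 m, n = A.shape            # m = 2 rows, n = 3 columns
 c = np.zeros(m)
 for i in range(m):
     for j in range(n):
         c[i] += A[i, j] * b[j]
 
 print(c)       # [34. 64.]
 print(A @ b)   # NumPy's built-in matrix-vector product gives the same result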

Matrix-Matrix Product

The matrix-matrix product is a generalization of the matrix-vector product. Consider the general case where we want to form the product C = A B of two matrices A and B. The following rules apply:

  1. The number of columns in A must be equal to the number of rows in B.
  2. The number of columns in C is equal to the number of columns in B.
  3. The number of rows in C is equal to the number of rows in A.

We can state this differently. Suppose that A has m rows and p columns, and that B has p rows and n columns. This satisfies rule 1 above. Then, according to rules 2 and 3, C has m rows and n columns.

The general formula for matrix multiplication is


  C_{i,j} = \sum_{k=1}^{p} A_{i,k} B_{k,j}

Let's show how this works for a simple example:


 A = \left[ \begin{array}{cc}
   1 & 2 \\ 3 & 5 
  \end{array} \right],
 \quad
 B = \left[ \begin{array}{ccc}
   2 & 4 & 10 \\ 6 & 3 & 1
  \end{array} \right].

We can form C = A B since rule 1 is satisfied. From rules 2 and 3, we conclude that the resulting matrix, C, has 2 rows and 3 columns.

We are now ready to build each element of C.

  • i=1 and j=1.
    \begin{align}
  C_{1,1} &= \sum_{k=1}^2 A_{1,k} B_{k,1} \\
          &= A_{1,1} B_{1,1} + A_{1,2} B_{2,1} \\
          &= 1 \cdot 2 + 2 \cdot 6 \\
          &= 14.
  \end{align}
  • i=2 and j=1.
    \begin{align}
  C_{2,1} &= \sum_{k=1}^2 A_{2,k} B_{k,1} \\
          &= A_{2,1} B_{1,1} + A_{2,2} B_{2,1} \\
          &= 3 \cdot 2 + 5 \cdot 6 \\
          &= 36.
  \end{align}
  • i=1 and j=2.
    \begin{align}
  C_{1,2} &= \sum_{k=1}^2 A_{1,k} B_{k,2} \\
          &= A_{1,1} B_{1,2} + A_{1,2} B_{2,2} \\
          &= 1 \cdot 4 + 2 \cdot 3 \\
          &= 10.
  \end{align}
  • i=2 and j=2.
    \begin{align}
  C_{2,2} &= \sum_{k=1}^2 A_{2,k} B_{k,2} \\
          &= A_{2,1} B_{1,2} + A_{2,2} B_{2,2} \\
          &= 3 \cdot 4 + 5 \cdot 3 \\
          &= 27.
  \end{align}
  • i=1 and j=3.
    \begin{align}
  C_{1,3} &= \sum_{k=1}^2 A_{1,k} B_{k,3} \\
          &= A_{1,1} B_{1,3} + A_{1,2} B_{2,3} \\
          &= 1 \cdot 10 + 2 \cdot 1 \\
          &= 12.
  \end{align}
  • i=2 and j=3.
    \begin{align}
  C_{2,3} &= \sum_{k=1}^2 A_{2,k} B_{k,3} \\
          &= A_{2,1} B_{1,3} + A_{2,2} B_{2,3} \\
          &= 3 \cdot 10 + 5 \cdot 1 \\
          &= 35.
  \end{align}

Now, putting all of these together, we have


 C=\left[\begin{array}{ccc}
  14 & 10 & 12 \\
  36 & 27 & 35
\end{array}\right].
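
The triple loop below reproduces this example directly from the formula C_{i,j} = \sum_{k} A_{i,k} B_{k,j}; it is an illustrative sketch, with NumPy assumed only for the array containers:

 import numpy as np
 
 A = np.array([[1, 2],
               [3, 5]])
 B = np.array([[2, 4, 10],
               [6, 3, 1]])
 
 m, p = A.shape       # A is m x p  (2 x 2)
 p2, n = B.shape      # B is p x n  (2 x 3); rule 1 requires p == p2
 
 C = np.zeros((m, n))
 for i in range(m):
     for j in range(n):
         for k in range(p):
             C[i, j] += A[i, k] * B[k, j]
 
 print(C)       # [[14. 10. 12.]
                #  [36. 27. 35.]]
 print(A @ B)   # NumPy's built-in matrix-matrix product gives the same result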


Linear Systems of Equations

This section is a stub and needs to be expanded.
If you can provide information or finish this section you're welcome to do so and then remove this message afterwards.


Solving Linear Systems of Equations

This section is a stub and needs to be expanded.
If you can provide information or finish this section you're welcome to do so and then remove this message afterwards.

Direct Solvers

Gaussian Elimination

The Thomas Algorithm (Tridiagonal Systems)

Iterative Solvers

Jacobi Method

Gauss-Seidel Method

Other Methods

  • Conjugate-Gradient
  • GMRES