ME209 Numerical Analysis

Direct Methods

Gaussian Elimination

Consider a system of three equations:

$$ a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = c_1 $$
$$ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = c_2 $$
$$ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = c_3 $$

\( x_1 \) can be eliminated from the second equation by multiplying the first equation by \( a_{21}/a_{11} \): the coefficient of the first term of the second equation divided by the coefficient of the first term of the first equation.

The modified first equation then becomes:

$$ a_{21} x_1 + \frac{a_{21}}{a_{11}} a_{12} x_2 + \frac{a_{21}}{a_{11}} a_{13} x_3 = \frac{a_{21}}{a_{11}} c_1 $$

Subtracting this from the second equation gives

$$ a'_{22} x_2 + a'_{23} x_3 = c'_2 $$

We can then modify equation 1 again to eliminate \( x_1 \) from equation 3, which yields

$$ a'_{32} x_2 + a'_{33} x_3 = c'_3 $$

\( x_2 \) can be eliminated from the modified equation 3 by the same method, this time using the modified equation 2. This leaves the three initial equations as

$$ a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = c_1 $$
$$ a'_{22} x_2 + a'_{23} x_3 = c'_2 $$
$$ a''_{33} x_3 = c''_3 $$

The coefficient matrix is then in upper triangular form:

$$
\left [
\begin{matrix}
a_{11} & a_{12} & a_{13} \\
0 & a'_{22} & a'_{23} \\
0 & 0 & a''_{33} \\
\end{matrix}
\right ]
$$

Potential issues: division by zero when a pivot element (e.g. \( a_{11} \)) is zero, which requires interchanging rows (partial pivoting); accumulation of round-off and truncation errors.
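The elimination and back-substitution procedure above can be sketched in Python as follows. This is a minimal illustration, not a robust implementation: no pivoting is performed, so it assumes every pivot is non-zero.

```python
def gaussian_elimination(a, c):
    """Solve a x = c by naive Gaussian elimination (no pivoting).

    a: list of row lists; c: right-hand-side list.
    Assumes every pivot a[k][k] is non-zero."""
    n = len(c)
    a = [row[:] for row in a]   # work on copies
    c = c[:]
    # Forward elimination: reduce a to upper triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            c[i] -= factor * c[k]
    # Back substitution, starting from the last equation.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / a[i][i]
    return x
```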

LU Decomposition

For a system

$$ [A] \{ x \} = \{ C \} $$

The matrix \( [A] \) can be rewritten as the product of a lower triangular matrix \( [L] \) and an upper triangular matrix \( [U] \):

$$ [L][U] \{ x \} = \{ C \} $$

Where \( [L][U] \) is defined as

$$
\left [
\begin{matrix}
L_{11} & 0 & 0 \\
L_{21} & L_{22} & 0 \\
L_{31} & L_{32} & L_{33} \\
\end{matrix}
\right ]
\left [
\begin{matrix}
1 & U_{12} & U_{13} \\
0 & 1 & U_{23} \\
0 & 0 & 1 \\
\end{matrix}
\right ]
$$

The product of \( [U] \) and \( \{ x \} \) is then defined as the column vector \( \{ D \} \), splitting the original system into two triangular systems:

$$ [U] \{ x \} = \{ D \} $$
$$ [L] \{ D \} = \{ C \} $$

The LU Decomposition method can then be summarised as follows:

Step 1: Decomposition
$$ [A] = [L][U] $$

Step 2: Use \( [L] \) and \( \{ C \} \) to find \( \{ D \} \) by Forward Substitution
$$ [L] \{ D \} = \{ C \} $$

Step 3: Use \( [U] \) and \( \{ D \} \) to find \( \{ x \} \) by Backward Substitution
$$ [U] \{ x \} = \{ D \} $$

A non-singular matrix can be decomposed provided no zero pivots are encountered during elimination (otherwise rows must be interchanged first). If either \( [L] \) or \( [U] \) is required to have a leading diagonal of \( 1 \), the decomposition is unique.
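The three steps can be sketched in Python using the Crout form shown above (unit diagonal on \( [U] \)). The function names are illustrative, and no pivoting is performed:

```python
def crout_lu(a):
    """Step 1: decompose a = L U (Crout form, unit diagonal on U).

    Sketch only: assumes no zero pivots arise, so no row interchanges."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        # Column j of L.
        for i in range(j, n):
            L[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        # Row j of U (to the right of the diagonal).
        for i in range(j + 1, n):
            U[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, c):
    """Steps 2 and 3: forward then backward substitution."""
    n = len(c)
    # Step 2: forward substitution for L d = c.
    d = [0.0] * n
    for i in range(n):
        d[i] = (c[i] - sum(L[i][k] * d[k] for k in range(i))) / L[i][i]
    # Step 3: backward substitution for U x = d (U has unit diagonal).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = d[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x
```

Once \( [A] \) is factorised, new right-hand sides \( \{ C \} \) can be solved by repeating only steps 2 and 3, which is the main advantage over re-running the full elimination.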

Inverse Method

For a system

$$ [A] \{ x \} = \{ C \} $$
$$ [A]^{-1} [A] \{ x \} = \{ C \} $$
$$ [I] \{ x \} = [A]^{-1} \{ C \} $$
$$ \{ x \} = [A]^{-1} \{ C \} $$

Where \( [A]^{-1} \) is the inverse of the matrix \( [A] \).

The computation of the inverse can be simplified by using the decomposition of \( [A] \); note that inverting a product reverses the order of the factors:

$$ [A]^{-1} = ([L][U])^{-1} = [U]^{-1} [L]^{-1} $$
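As a small worked illustration of \( \{ x \} = [A]^{-1} \{ C \} \), the inverse of a \( 2 \times 2 \) matrix can be written explicitly (the numbers here are an arbitrary example; in practice the inverse would be built from the LU factors rather than a closed-form formula):

```python
# Example system: 4 x1 + x2 = 1,  x1 + 3 x2 = 2.
a11, a12 = 4.0, 1.0
a21, a22 = 1.0, 3.0
c = [1.0, 2.0]

# Explicit 2x2 inverse: swap the diagonal, negate the
# off-diagonal, and divide by the determinant.
det = a11 * a22 - a12 * a21
a_inv = [[ a22 / det, -a12 / det],
         [-a21 / det,  a11 / det]]

# {x} = [A]^{-1} {C}
x = [a_inv[0][0] * c[0] + a_inv[0][1] * c[1],
     a_inv[1][0] * c[0] + a_inv[1][1] * c[1]]
```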

Iterative Methods

Jacobi Iteration

Given a system of equations

$$ a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = c_1 $$
$$ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = c_2 $$
$$ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = c_3 $$

The equations can be rearranged to solve for each variable:

$$ x_1 = \frac{c_1}{a_{11}} - \frac{a_{12}}{a_{11}} x_2 - \frac{a_{13}}{a_{11}} x_3 $$
$$ x_2 = \frac{c_2}{a_{22}} - \frac{a_{21}}{a_{22}} x_1 - \frac{a_{23}}{a_{22}} x_3 $$
$$ x_3 = \frac{c_3}{a_{33}} - \frac{a_{31}}{a_{33}} x_1 - \frac{a_{32}}{a_{33}} x_2 $$

For an initial estimate of

$$ x_{1,0} = x_{2,0} = x_{3,0} = 0 $$

The estimate can be inserted into the right-hand side of each of the previous equations to yield the first iteration. The results of which can be inserted again to yield the second iteration.
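The iteration can be sketched in Python as follows (an illustrative function, assuming non-zero diagonal coefficients and a fixed number of iterations rather than a convergence test):

```python
def jacobi(a, c, iterations=25):
    """Jacobi iteration for a x = c, starting from x = 0.

    Sketch only: assumes non-zero diagonal entries a[i][i]."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iterations):
        # Every update uses only values from the previous iteration.
        x_new = [0.0] * n
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (c[i] - s) / a[i][i]
        x = x_new
    return x
```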

Gauss-Seidel Iteration

The system of equations is rearranged as with the Jacobi iteration:

$$ x_1 = \frac{c_1}{a_{11}} - \frac{a_{12}}{a_{11}} x_2 - \frac{a_{13}}{a_{11}} x_3 $$
$$ x_2 = \frac{c_2}{a_{22}} - \frac{a_{21}}{a_{22}} x_1 - \frac{a_{23}}{a_{22}} x_3 $$
$$ x_3 = \frac{c_3}{a_{33}} - \frac{a_{31}}{a_{33}} x_1 - \frac{a_{32}}{a_{33}} x_2 $$

In each iteration, each estimate is updated using the most current values available, rather than only values from the previous iteration. In the second iteration, for example, calculating \( x_{2,2} \) uses the second-iteration value of \( x_1 \) rather than the first-iteration value.

Convergence Condition: Convergence is guaranteed when the system is diagonally dominant, i.e. the magnitude of each diagonal coefficient exceeds the sum of the magnitudes of the other coefficients in its row; the equations should be rearranged to achieve this where possible.
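The only change from the Jacobi sketch is that the solution vector is overwritten in place, so each sweep immediately uses the newest values (illustrative function, same assumptions as before):

```python
def gauss_seidel(a, c, iterations=25):
    """Gauss-Seidel iteration for a x = c, starting from x = 0.

    Sketch only: assumes non-zero diagonal entries a[i][i]."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / a[i][i]  # overwrite in place: later
            #          equations in this sweep see this new value
    return x
```

Because updated values are used immediately, Gauss-Seidel typically converges in fewer iterations than Jacobi on the same diagonally dominant system.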

Newton-Raphson - Roots of Polynomials - Nonlinear Simultaneous Equations
Secant Method - Roots of Polynomials

Euler's Method - First Order Equations
Modified Euler
Runge-Kutta Method
Multistep Methods
Higher Order Equations - Initial and boundary value

Numerical Integration Methods
Trapezium and Simpson's Rules
Multidimensional Numerical Integration