## Blog post

Many of the large systems of linear equations encountered in science and engineering arise from the discretization of partial differential equations. To this end, a whole range of numerical techniques has been developed to solve linear systems of the form: \begin{equation} \textbf{A} \textbf{x} = \textbf{b} \label{eq:linear_eq} \end{equation} where $\textbf{A} \in \Re^{n \times n}$.

There are two broad categories of algorithms used to solve \eqref{eq:linear_eq}: direct and iterative methods. The choice between a direct and an iterative method depends on the efficiency of the algorithm, the particular structure of the matrix $\textbf{A}$, and the trade-off between computational time and memory. The term iterative method refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to the linear system at each step; for large sparse systems they are in general more efficient than direct methods. There are two types of iterative methods. Stationary methods are older and simpler to understand and implement, but usually not as effective. Non-stationary methods are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective.

Below is a short classification of these methods:

1. Stationary methods
    1. Gauss-Seidel
    2. Jacobi
    3. Successive Overrelaxation (SOR)
    4. Symmetric Successive Overrelaxation (SSOR)
2. Non-stationary methods
    1. Chebyshev Iteration
    2. Minimum Residual (MINRES) and Symmetric LQ (SYMMLQ)
    3. Conjugate Gradient on the Normal Equations: CGNE and CGNR
    4. Generalized Minimal Residual (GMRES)
    5. Quasi-Minimal Residual (QMR)
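To make the stationary class concrete, here is a minimal sketch of the Jacobi method (not from the original post; the function name and iteration count are illustrative), applied to a small diagonally dominant system:

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Plain Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diagflat(D)  # off-diagonal part of A
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b, np.zeros(2))
print(x)  # approaches the exact solution [1/11, 7/11]
```

Each sweep needs only the diagonal of $\textbf{A}$ and a matrix–vector product, which is why stationary methods are so simple to implement; their convergence, however, can be slow or fail entirely when the spectral radius of the iteration matrix is not below one.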

On this page the conjugate gradient method is introduced as an effective method for symmetric positive definite matrices $\textbf{A}$. A real matrix $\textbf{A}\in \Re^{n \times n}$ is called positive definite if: \begin{equation} \textbf{x}^{T}\textbf{A}\textbf{x} > 0 \quad \forall \, \textbf{x} \in \Re^{n}, \quad \textbf{x} \neq \textbf{0} \label{eq:positive_definite} \end{equation} The method proceeds by generating vector sequences of iterates (i.e., successive approximations to the solution), residuals corresponding to the iterates, and search directions used in updating the iterates and residuals. Although the length of these sequences can become large, only a small number of vectors needs to be kept in memory. In every iteration of the method, two inner products are performed in order to compute update scalars that are defined to make the sequences satisfy certain orthogonality conditions. On a symmetric positive definite linear system these conditions imply that the distance to the true solution is minimized in some norm (the $\textbf{A}$-norm of the error).
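Before applying CG it is worth verifying that the matrix actually satisfies this definition. One practical check, sketched below with NumPy, is to attempt a Cholesky factorization, which exists for a symmetric matrix exactly when it is positive definite (the matrix is the $2 \times 2$ example solved later on this page):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])

# A symmetric matrix is positive definite exactly when a Cholesky
# factorization exists (equivalently, all eigenvalues are positive).
try:
    np.linalg.cholesky(A)
    print("A is positive definite")
except np.linalg.LinAlgError:
    print("A is not positive definite")

eigenvalues = np.linalg.eigvalsh(A)
print(eigenvalues)  # both eigenvalues are positive
```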

### A python implementation of conjugate gradient algorithm

A common implementation of the CG algorithm is given in the following Python snippet.
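A minimal NumPy sketch of such an implementation (assuming the initial guess $\textbf{x}_0 = (2, 1)^{T}$, which reproduces the iterates tabulated below) could be:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Conjugate gradient for a symmetric positive definite matrix A."""
    x = x0.astype(float)
    r = b - A @ x      # initial residual
    p = r.copy()       # initial search direction
    rs_old = r @ r
    for i in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length
        x = x + alpha * p               # update iterate
        r = r - alpha * Ap              # update residual
        rs_new = r @ r
        print(f"{i}  " + "  ".join(f"{xi:.8f}" for xi in x)
              + f"  {np.sqrt(rs_new):.10E}")
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new A-conjugate search direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.array([2.0, 1.0]))
```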

For the linear solution of the system: \begin{equation} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \end{equation}

the Python snippet gives:

| Iteration | $x_{1}$ | $x_{2}$ | norm |
|---|---|---|---|
| 1 | 0.23564955 | 0.33836858 | 8.0019370424E-01 |
| 2 | 0.09090909 | 0.63636364 | 5.5511151231E-17 |

It is worth noting that the exact solution of the system has been reached after 2 iterations! This is no accident: in exact arithmetic, CG converges in at most $n$ iterations for an $n \times n$ system.
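In practice, rather than hand-rolling the loop, one would typically call a library routine; SciPy ships the same algorithm as `scipy.sparse.linalg.cg`:

```python
import numpy as np
from scipy.sparse.linalg import cg

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x, info = cg(A, b)  # info == 0 signals successful convergence
print(x)            # close to [1/11, 7/11]
```

For large problems `A` would normally be a `scipy.sparse` matrix or a `LinearOperator`, so that only matrix–vector products are ever formed.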
