Restarted GMRES

GMRES solves the problem $Ax = b$ approximately for $x$, where $A$ is a general linear operator and $b$ is the right-hand side vector. The method is optimal in the sense that it selects the solution with minimal residual from a Krylov subspace, but the price of optimality is storage and computational effort that grow with each iteration. Restarting bounds these costs, at the possible expense of slower convergence.

Usage

IterativeSolvers.gmres! — Function
gmres!(x, A, b; kwargs...) -> x, [history]

Solves the problem $Ax = b$ with restarted GMRES.

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • initially_zero::Bool: if true, assumes that iszero(x) so that one matrix-vector product can be saved when computing the initial residual vector;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k = A * x_k - b;
  • restart::Int = min(20, size(A, 2)): restarts GMRES after the specified number of iterations;
  • maxiter::Int = size(A, 2): maximum number of inner iterations of GMRES;
  • Pl: left preconditioner;
  • Pr: right preconditioner;
  • log::Bool: keep track of the residual norm in each iteration;
  • verbose::Bool: print convergence information during the iterations;
  • orth_meth::OrthogonalizationMethod = ModifiedGramSchmidt(): orthogonalization method (ModifiedGramSchmidt(), ClassicalGramSchmidt(), or DGKS()).

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;
  • history: convergence history.
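As an illustration, a small usage sketch (the test matrix, tolerances, and restart length are made up for the example; it assumes the IterativeSolvers package is available):

```julia
using IterativeSolvers, LinearAlgebra, SparseArrays

# A well-conditioned sparse test problem (illustrative only).
n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(4.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)

# In-place solve with a convergence log; x starts at zero, so
# initially_zero = true saves one matrix-vector product.
x = zeros(n)
x, history = gmres!(x, A, b; initially_zero = true, reltol = 1e-10,
                    restart = 20, log = true)

norm(A * x - b) / norm(b)   # small once converged
```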

Implementation details

The implementation pre-allocates a matrix $V$ of size n by restart whose columns form an orthonormal basis for the Krylov subspace. This allows BLAS2 operations when updating the solution vector $x$. The Hessenberg matrix is also pre-allocated.
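The storage scheme can be sketched as follows. This is a stdlib-only illustration of the Arnoldi process with a pre-allocated basis and Hessenberg matrix, not the package's actual code; the sketch allocates one extra basis vector so that the full Arnoldi relation $A V_k = V_{k+1} H_k$ holds, and the package's exact layout may differ:

```julia
using LinearAlgebra

# Sketch: Arnoldi iteration writing into a pre-allocated basis V and
# Hessenberg matrix H (illustrative, not the package implementation).
function arnoldi!(V, H, A, b, k)
    V[:, 1] .= b ./ norm(b)
    for j in 1:k
        w = A * V[:, j]
        for i in 1:j                      # modified Gram-Schmidt
            H[i, j] = dot(V[:, i], w)
            w .-= H[i, j] .* V[:, i]
        end
        H[j + 1, j] = norm(w)
        V[:, j + 1] .= w ./ H[j + 1, j]
    end
    return V, H
end

n, restart = 50, 10
A = randn(n, n); b = randn(n)
V = zeros(n, restart + 1)                 # pre-allocated Krylov basis
H = zeros(restart + 1, restart)           # pre-allocated Hessenberg matrix
arnoldi!(V, H, A, b, restart)

# Arnoldi relation: A * V_k == V_{k+1} * H_k, up to round-off.
norm(A * V[:, 1:restart] - V * H)
```

Because all columns of $V$ are stored contiguously, applying the basis to a coefficient vector when updating $x$ is a single matrix-vector (BLAS2) product.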

By default, modified Gram-Schmidt is used to orthogonalize the columns of $V$, since it is numerically more stable than classical Gram-Schmidt. Modified Gram-Schmidt is, however, inherently sequential; if stability is not a concern, classical Gram-Schmidt can be used instead, which is implemented with BLAS2 operations. As a compromise, the "DGKS criterion" conditionally reapplies classical Gram-Schmidt to stabilize it, and typically costs one to two times as much as classical Gram-Schmidt.
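The difference between the strategies can be sketched as one orthogonalization step against an existing orthonormal basis. This is a stdlib-only illustration; the package's OrthogonalizationMethod types implement the same ideas internally, and the threshold η below is a conventional choice, not the package's:

```julia
using LinearAlgebra

# Orthogonalize w against the orthonormal columns of V in-place,
# returning the coefficients h and the norm of the remainder.

# Classical Gram-Schmidt: two BLAS2 products, but less stable.
function cgs!(w, V)
    h = V' * w
    w .-= V * h
    return h, norm(w)
end

# Modified Gram-Schmidt: sequential BLAS1 updates, more stable.
function mgs!(w, V)
    h = zeros(eltype(w), size(V, 2))
    for i in axes(V, 2)
        h[i] = dot(view(V, :, i), w)
        w .-= h[i] .* view(V, :, i)
    end
    return h, norm(w)
end

# DGKS-style compromise: classical Gram-Schmidt, repeated only when
# the norm dropped sharply, i.e. when cancellation occurred.
function dgks!(w, V; η = 1 / √2)
    before = norm(w)
    h, after = cgs!(w, V)
    if after < η * before                 # significant cancellation: repeat
        h2, after = cgs!(w, V)
        h .+= h2
    end
    return h, after
end
```

On a well-conditioned basis all three variants return essentially the same coefficients; the differences only appear in floating-point arithmetic once the basis starts to lose orthogonality.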

The residual norm is computed in a non-standard way: the method keeps track of a vector $\gamma$ in the null space of $H_k^*$, the adjoint of the $(k + 1) \times k$ Hessenberg matrix $H_k$ at the $k$th iteration. Only when $x$ needs to be updated is the Hessenberg matrix mutated with Givens rotations.
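The identity behind this trick can be checked directly: if $\gamma$ spans the one-dimensional null space of $H_k^*$, then the minimal residual norm of the small least-squares problem $\min_y \|\beta e_1 - H_k y\|$ equals $|\gamma^* (\beta e_1)| / \|\gamma\|$, so the residual norm is available without solving for $y$. A stdlib-only sketch, not the package's code:

```julia
using LinearAlgebra

# A random (k + 1) × k upper-Hessenberg matrix of full column rank.
k = 6
H = triu(randn(k + 1, k), -1)
β = 3.0
rhs = [β; zeros(k)]                 # β e₁, the projected right-hand side

# Standard way: solve the least-squares problem and measure the residual.
y = H \ rhs
res_direct = norm(rhs - H * y)

# Trick: γ spans null(H'), so the residual norm follows from a single
# inner product, without touching y at all.
γ = nullspace(H')[:, 1]
res_tracked = abs(dot(γ, rhs)) / norm(γ)

res_direct ≈ res_tracked            # true, up to round-off
```

This works because the least-squares residual is exactly the component of $\beta e_1$ orthogonal to the range of $H_k$, and that orthogonal complement is spanned by $\gamma$.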

Tip

GMRES can be used as an iterator. This makes it possible to access the Hessenberg matrix and Krylov basis vectors during the iterations.
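A sketch of the iterator interface is below. The constructor name gmres_iterable! and the arnoldi field are taken from recent package sources; they are internal, unexported API and may change between versions, so treat this as an assumption and check the package source for your version:

```julia
using IterativeSolvers, LinearAlgebra

n = 50
A = I + 0.1 * randn(n, n)        # an easy, well-conditioned problem
b = rand(n)
x = zeros(n)

# Internal constructor (assumed name; unexported, version-dependent).
it = IterativeSolvers.gmres_iterable!(x, A, b; reltol = 1e-10)

for residual in it               # each step yields the current residual norm
    println("residual norm = ", residual)
end

# Internals such as the Krylov basis and Hessenberg matrix are reachable
# through the iterable's fields (internal API, subject to change), e.g.
# it.arnoldi.V and it.arnoldi.H in the sources inspected.

norm(A * x - b) / norm(b)        # small after the loop
```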