====== Linear solvers ======
===== Introduction =====
Since 20/07/2005, Metafor includes several linear solvers to solve the linear system arising at each iteration when the equations of motion are integrated implicitly. Skyline is the traditional solver, but the direct solver [[http://www.pardiso-project.org/|Pardiso]] and the iterative sparse solver (ISS), a GMRES implementation available in the MKL, can also be used.
By default, since the stiffness matrix is non-symmetric, a non-symmetric solver (with symmetric structure) is used. It is however possible to force the stiffness matrix to be symmetric (by averaging the corresponding upper and lower triangular terms) and to use a symmetric solver.
Use:
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
solvermanager.setSymmetric(True) # False by default
===== Skyline solver =====
Default solver. It does not require any specific configuration. It is sequential, uses a lot of memory, and is quite slow on large simulations. On the other hand, it is very robust, and since its source code is available, it runs on every OS. The skyline storage is automatically optimized with Sloan's algorithm, which renumbers the equations to reduce the matrix profile.
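Since Skyline is the default, a minimal setup needs no solver-related call at all; the sketch below only makes this explicit, using the calls shown above:
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
# no setSolver() call: the default Skyline solver is used
# solvermanager.setSymmetric(True)  # optional symmetric solve (see above)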
===== Pardiso (DSS) solver =====
The Pardiso solver is a sparse direct solver that stores the matrix in CSR format. Several CPUs can be used (SMP parallelism, using OpenMP). It needs less memory and is faster than Skyline; however, it has trouble handling bad pivots (which fortunately do not appear that often). In addition, it requires MKL or CXML to run.
Use:
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
try:
    solvermanager.setSolver(DSSolver())
except NameError:
    pass  # DSSolver is not available in this build; the default Skyline solver is kept
The number of CPUs is set using
Blas.setNumThreads(n)
where ''n'' is the number of CPUs.
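For example, a minimal sketch combining the two calls above (the thread count 4 is an arbitrary example, not a recommendation):
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
try:
    solvermanager.setSolver(DSSolver())  # select the Pardiso direct solver
    Blas.setNumThreads(4)                # run it on 4 CPUs
except NameError:
    pass  # fall back to the default Skyline solver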
===== ISS (Iterative Sparse Solver) =====
The ISS solver is an iterative solver (GMRES) available in the MKL library. Iterative solvers are not as robust as direct ones: they iterate successively, and the iterations can converge quite slowly if the matrix is ill-conditioned. To improve convergence, it is almost always necessary to use a preconditioner, which decreases the condition number. Properly parameterized, iterative solvers are quite fast, but the optimal parameters are rather difficult to find. Concerning memory, the matrix is stored in CSR format, so little space is required; only the preconditioner is added to the nonzero elements.
Use (see for example ''apps.qs.cont2ILU0''):
solver = ISSolver()
solver.setRestart(5) # restart parameter of the GMRES
solver.setItMax(10) # maximal number of iterations
solver.setTol(1e-1) # tolerance on the residual
solvermanager = metafor.getSolverManager()
solvermanager.setSolver(solver)
By default, the solver uses an ILU0 preconditioner. It is an incomplete factorization (A ≈ LU) for which the structure of L and U is the same as that of A.
A more elaborate preconditioner (ILUT) is also available. It is also an incomplete factorization, but one where only the ''nFill'' largest elements of each row of the matrix are kept. It is selected with the command:
solver.useILUT(20) # nFill=20 (only the 20 largest elements are kept in each row of L and U)
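Putting it all together, a complete ISS setup with the ILUT preconditioner might look like the sketch below (the parameter values are the arbitrary examples used above, not recommendations):
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
try:
    solver = ISSolver()
    solver.setRestart(5)   # restart parameter of the GMRES
    solver.setItMax(10)    # maximal number of iterations
    solver.setTol(1e-1)    # tolerance on the residual
    solver.useILUT(20)     # ILUT preconditioner with nFill=20
    solvermanager.setSolver(solver)
except NameError:
    pass  # ISSolver is not available in this build; the default Skyline solver is kept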
===== MUMPS (MUltifrontal Massively Parallel sparse direct Solver) =====
MUMPS is a sparse direct solver for the solution of large linear algebraic systems on distributed-memory parallel computers. It implements the multifrontal method, a version of Gaussian elimination for large sparse systems of equations, especially those arising from the finite element method. It is written in Fortran 90, parallelized with MPI, and uses BLAS and ScaLAPACK kernels for dense matrix computations.
The input matrix can be supplied to MUMPS in assembled coordinate (COO) format (distributed or centralized) or in elemental format.
Use:
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
try:
    solvermanager.setSolver(MUMPSolver())
except NameError:
    pass  # MUMPSolver is not available in this build; the default Skyline solver is kept
MUMPS can run on multiple threads (CPU cores) using
Blas.setBlasNumThreads(n)
where ''n'' is the number of threads.
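As with the other solvers, a minimal sketch combining the two calls (the thread count 4 is an arbitrary example):
metafor = domain.getMetafor()
solvermanager = metafor.getSolverManager()
try:
    solvermanager.setSolver(MUMPSolver())  # select the MUMPS direct solver
    Blas.setBlasNumThreads(4)              # run it on 4 threads
except NameError:
    pass  # fall back to the default Skyline solver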