
BLIT: Block Iterative Linear Solver Toolbox

High-performance sparse solvers for symmetric FEM matrices

Real Symmetric · Complex Symmetric (A = Aᵀ) · Sparse & Dense
Python · MATLAB · Fortran · C++

Why Block QMR?

🧮 Block Solver: solves multiple right-hand sides simultaneously, sharing one Krylov subspace for 4× fewer iterations

Complex Symmetry: quasi inner product ⟨x,y⟩ = Σₖ xₖyₖ, natural for A = Aᵀ systems (see the identity after this list)

🛡️ Preconditioned: built-in ILU, Jacobi, and split preconditioners

Up to 15× Faster: outperforms direct solvers on large systems

💾 Memory Efficient: short three-term recurrences instead of storing a full Krylov basis

📈 Smooth Convergence: monotonic quasi-minimal residual reduction
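
To make the complex-symmetry point concrete, here is the standard identity the quasi inner product exploits (it is not BLIT-specific): dropping the conjugation of the Hermitian inner product turns transpose-symmetry into self-adjointness, which is what keeps the Lanczos-type recurrence short.

\langle x, y \rangle = \sum_k x_k y_k = x^{\mathsf T} y
\qquad\Longrightarrow\qquad
\langle Ax, y \rangle = x^{\mathsf T} A^{\mathsf T} y = x^{\mathsf T} A y = \langle x, Ay \rangle
\quad \text{whenever } A = A^{\mathsf T}.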

Applications

🔬 Diffuse Optical Tomography
📡 Frequency-Domain EM
🔊 Acoustic Scattering
🩺 Microwave Imaging
🌊 Helmholtz Equations
⚙️ Structural Dynamics

Simple API

Python

# pip install blocksolver
import numpy as np
from blocksolver import blqmr
from scipy.sparse import diags

# Complex symmetric FEM matrix
A = diags([-1, 4, -1], [-1, 0, 1], shape=(5000, 5000))
A = A + 0.1j * diags([1], [0], shape=(5000, 5000))

# Block solve: 16 RHS at once
B = np.random.randn(5000, 16) + 0j
result = blqmr(A, B, tol=1e-10, precond_type='diag')

print(f"Converged: {result.converged}")
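
Continuing the Python example, a quick residual sanity check. The solution attribute name result.x is an assumption here; the snippet above only shows result.converged, so check the blocksolver documentation for the actual field.

X = result.x  # assumed name for the (5000, 16) solution block
relres = np.linalg.norm(A @ X - B) / np.linalg.norm(B)
print(f"Relative block residual: {relres:.2e}")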

MATLAB

% Complex symmetric FEM matrix
n = 5000;
K = spdiags([-ones(n,1) 4*ones(n,1) -ones(n,1)], -1:1, n, n);
A = K + 0.1i * speye(n);

% Block solve: 16 RHS at once
B = rand(n, 16) + 1i*rand(n, 16);
opt.precond = 'diag';
[X, flag] = blqmr(A, B, 1e-10, 1000, [], [], [], opt);

Fortran

program solve_fem
    use blit_blqmr_real
    implicit none

    type(BLQMRSolver) :: qmr
    integer :: n, nnz, nrhs
    integer, allocatable :: Ap(:), Ai(:)
    real(8), allocatable :: Ax(:), B(:,:), X(:,:)

    n = 5000; nnz = 14998; nrhs = 16
    ! ... allocate and fill CSC arrays ...

    call BLQMRCreate(qmr, n)
    qmr%maxit = 1000
    qmr%qtol = 1.0d-10
    qmr%pcond_type = 3  ! Jacobi

    call BLQMRPrep(qmr, Ap, Ai, Ax, nnz)
    call BLQMRSolve(qmr, Ap, Ai, Ax, nnz, X, B, nrhs)
    call BLQMRDestroy(qmr)
end program

C++

#include "blit_solvers.h"
#include <vector>

int main() {
    const int n = 5000, nnz = 14998, nrhs = 16;
    std::vector<int> Ap(n+1), Ai(nnz);
    std::vector<double> Ax(nnz), b(n*nrhs), x(n*nrhs);

    BlitBLQMR<double> solver(n, nrhs, 1000, 1e-3);
    solver.Prepare(Ap.data(), Ai.data(), Ax.data(), nnz);
    solver.Solve(x.data(), b.data(), nrhs);
    return 0;
}
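
The Fortran and C++ interfaces take the matrix in compressed sparse column (CSC) form: Ap holds the column pointers, Ai the row indices, and Ax the values. A minimal sketch of producing those arrays with SciPy for the same tridiagonal test matrix (whether the backend expects 0- or 1-based indices is not shown above, so adjust per the BLIT documentation):

import numpy as np
from scipy.sparse import diags

n = 5000
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

Ap = A.indptr   # n+1 column pointers
Ai = A.indices  # row index of each stored entry
Ax = A.data     # stored values
print(Ax.size)  # 14998 = 3*n - 2 nonzeros for this tridiagonal matrix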

Performance

Grid   Nodes      Speedup (BLQMR vs direct)
20³    8,000      1.2×
30³    27,000     3.6×
40³    64,000     6.8×
50³    125,000    14.7×
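
A rough sketch for running a comparison of your own, assuming the blqmr call shown under Simple API. It reuses the toy tridiagonal matrix rather than the 3-D FEM grids above, and SciPy's splu stands in for the direct solver, so measured speedups will differ from the table.

import time
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu
from blocksolver import blqmr

n, nrhs = 27000, 16
A = diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format='csc') + 0.1j * identity(n, format='csc')
B = np.random.randn(n, nrhs) + 0j

t0 = time.perf_counter()
X_direct = splu(A).solve(B)   # direct: factor once, then back-substitute all 16 RHS
t_direct = time.perf_counter() - t0

t0 = time.perf_counter()
result = blqmr(A, B, tol=1e-10, precond_type='diag')
t_block = time.perf_counter() - t0

print(f"direct: {t_direct:.2f} s   blqmr: {t_block:.2f} s")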

Get Started

1. Python: pip install blocksolver
2. Fortran backend (optional), Linux: apt install gfortran libsuitesparse-dev
3. MATLAB/Octave: addpath('/path/to/blit/matlab')
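
A quick smoke test after step 1, reusing the call shown under Simple API (the 2·I test matrix is only there to confirm the package imports and runs):

import numpy as np
from scipy.sparse import identity
from blocksolver import blqmr

A = 2.0 * identity(200, format='csc')  # trivial, well-conditioned test matrix
B = np.ones((200, 4))
result = blqmr(A, B, tol=1e-10, precond_type='diag')
print(result.converged)  # expect True on a working install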
