An efficient Python code to solve the diffusion equation on a Cartesian grid:
- It takes as input a gap field $g$.
- It analyzes the connectivity of the gap field, removes isolated islands, and checks for percolation (i.e. whether a flow problem can be solved at all).
- It dilates the non-zero gap field to properly handle the impenetrability of channels, which avoids having to erode the domain for the flux calculation.
- It applies an inlet pressure $p_i = 1$ on one side ($x = 0$), an outlet pressure $p_0 = 0$ on the opposite side ($x = 1$), and periodic boundary conditions on the lateral sides ($y = \{0, 1\}$).
- It constructs a sparse matrix with conductivity proportional to $g^3$ (see the sketch after this list).
- Different solvers (direct, and iterative with appropriate preconditioners) can be selected and tuned to solve the resulting linear system of equations efficiently.
- The total flux is properly computed.
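To make the matrix-construction step concrete, here is a minimal sketch, not the package's actual implementation, of how a finite-difference diffusion operator with face conductivities proportional to $g^3$, Dirichlet pressures at $x=0$ and $x=1$, and periodicity in $y$ could be assembled with SciPy. The function name `assemble_sketch`, the convention that the first array axis is the flow direction, and the harmonic averaging of neighbouring conductivities are assumptions made for illustration only:

```python
import numpy as np
import scipy.sparse as sp

def assemble_sketch(g, p_in=1.0, p_out=0.0):
    """Sketch: assemble A p = b for -div(g^3 grad p) = 0 on a unit Cartesian grid."""
    n, m = g.shape                 # first axis = flow (x) direction in this sketch
    k = g**3                       # cell conductivity proportional to the gap cubed

    def idx(i, j):                 # flattened unknown index
        return i * m + j

    def harm(a, c):                # harmonic mean of two cell conductivities
        return 2.0 * a * c / (a + c) if (a + c) > 0.0 else 0.0

    rows, cols, vals = [], [], []
    b = np.zeros(n * m)
    for i in range(n):
        for j in range(m):
            diag = 0.0
            for di in (-1, 1):                     # x-faces: Dirichlet walls outside
                ii = i + di
                if 0 <= ii < n:
                    kf = harm(k[i, j], k[ii, j])
                    rows.append(idx(i, j)); cols.append(idx(ii, j)); vals.append(-kf)
                else:                              # known inlet/outlet pressure -> RHS
                    kf = k[i, j]
                    b[idx(i, j)] += kf * (p_in if ii < 0 else p_out)
                diag += kf
            for dj in (-1, 1):                     # y-faces: periodic wrap-around
                jj = (j + dj) % m
                kf = harm(k[i, j], k[i, jj])
                rows.append(idx(i, j)); cols.append(idx(i, jj)); vals.append(-kf)
                diag += kf
            rows.append(idx(i, j)); cols.append(idx(i, j)); vals.append(diag)
    # Cells with zero gap produce empty rows; in the real code such cells are
    # removed beforehand by the connectivity/percolation analysis.
    return sp.csr_matrix((vals, (rows, cols)), shape=(n * m, n * m)), b
```

Harmonic face averaging is a common choice for strongly heterogeneous conductivities; the package may use a different averaging or scaling.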
- Clone the repository
```bash
git clone git@github.com:vyastreb/FDTransportCode.git
cd FDTransportCode
```
- Install the package and its dependencies
```bash
pip install -e .
pip install -r requirements.txt
```
or with conda
```bash
conda install --file requirements.txt
```
- Run a minimal test (incompressible potential flow around a circular inclusion); a post-processing sketch for the returned flux follows this list
```python
import numpy as np
from fluxflow import transport as FS
import matplotlib.pyplot as plt
n = 100
X, Y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
gaps = (np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) > 0.2).astype(float)
_, _, flux = FS.solve_fluid_problem(gaps, solver="auto")
if flux is not None:
    plt.imshow(np.sqrt(flux[:, :, 0]**2 + flux[:, :, 1]**2),
               origin='lower', cmap='jet')
    plt.show()
```
- Run the test suite
```bash
python -m pytest -q
```
or run these tests manually: `/tests/test_evolution.py`, `/tests/test_solve.py`, and `/tests/test_solvers.py`.
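As a sanity check on the `flux` field returned by the minimal test (the code also reports the total flux itself), the flux integrated over any cross-section $x = \mathrm{const}$ should be nearly constant. Below is a small post-processing sketch; it assumes, for illustration, that `flux` has shape `(ny, nx, 2)` with the $x$-component stored in `flux[:, :, 0]` and rows indexing $y$:

```python
import numpy as np

def total_flux_x(flux: np.ndarray, Ly: float = 1.0) -> np.ndarray:
    """Integrate the x-component of the flux over y for each x-column.

    Assumes flux[:, :, 0] holds the x-component of the flux on a uniform
    grid with rows indexing y; returns one value per cross-section x = const.
    """
    ny = flux.shape[0]
    dy = Ly / ny
    return flux[:, :, 0].sum(axis=0) * dy

# Usage with the `flux` array from the minimal test above:
#   q = total_flux_x(flux)
#   print(q.min(), q.max())   # should agree up to discretization error
```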
The fluid flow solver supports several linear system solvers and preconditioners for efficient and robust solution of large sparse systems:
| Solver String | Solver Type | Preconditioner | Backend | Description |
|---|---|---|---|---|
| `petsc-cg.hypre` | Iterative (CG) | HYPRE | PETSc | 🥇 CG with HYPRE BoomerAMG. The fastest for moderate problems. |
| `pardiso` | Direct | - | Intel MKL | 🥇 PARDISO direct solver. The fastest for bigger problems, but consumes a lot of memory. |
| `scipy.amg-rs` | Iterative (CG) | AMG (Ruge-Stuben) | SciPy/PyAMG | CG with Ruge-Stuben AMG. Only about two times slower than the fastest. |
| `scipy.amg-smooth_aggregation` | Iterative (CG) | AMG (Smoothed Aggregation) | SciPy/PyAMG | CG with Smoothed Aggregation AMG. Memory efficient, but relatively slow. |
| `cholesky` | Direct | - | scikit-sparse | CHOLMOD Cholesky decomposition. Slightly lower memory consumption for huge problems, but slow. |
| `petsc-cg.gamg` | Iterative (CG) | GAMG | PETSc | CG with Geometric Algebraic Multigrid. Not very reliable in performance, 2-3 times slower than the fastest solver. |
| `petsc-mumps` | Direct | - | PETSc/MUMPS | MUMPS direct solver via PETSc. For moderate problems, five times slower than the fastest solver. |
| `petsc-cg.ilu` | Iterative (CG) | ILU | PETSc | CG with Incomplete LU factorization. The slowest. |
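For instance, a solver from the table can be requested explicitly through the `solver` argument already shown in the minimal test above. The try/except fallback below is only an illustration of how one might cope with a missing PETSc installation, not part of the package's API:

```python
import numpy as np
from fluxflow import transport as FS

gaps = np.ones((200, 200))              # toy geometry: fully open gap field
gaps[80:120, 50:150] = 0.0              # with one closed rectangular obstacle

# Prefer the HYPRE-preconditioned CG solver; fall back to the PyAMG
# Ruge-Stuben option if PETSc is not available in this environment.
try:
    _, _, flux = FS.solve_fluid_problem(gaps, solver="petsc-cg.hypre")
except Exception:
    _, _, flux = FS.solve_fluid_problem(gaps, solver="scipy.amg-rs")
```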
Relevant CPU times for a relatively small problem:
| Solver | CPU time (s) |
|---|---|
| petsc-cg.hypre | 4.46 |
| pardiso | 8.53 |
| scipy.amg-rs | 8.96 |
| petsc-cg.gamg | 11.96 |
| scipy.amg-smooth_aggregation | 15.48 |
| cholesky | 20.61 |
| petsc-mumps | 26.14 |
| petsc-cg.ilu | 134.98 |
Rules of thumb (a toy helper encoding them is sketched after the list):
- For the fastest computation: use `pardiso` (consumes a lot of memory) or `petsc-cg.hypre` (the only difficulty is installing PETSc);
- For the best memory efficiency: use `scipy.amg-rs`;
- For small-scale problems ($N < 2000$): use `pardiso`;
- For large-scale problems ($N > 2000$): use `petsc-cg.hypre`;
- Avoid `petsc-cg.ilu`.
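These rules could be wrapped in a tiny helper; this is purely illustrative, the built-in `solver="auto"` may use different logic, and the function name `pick_solver` is hypothetical:

```python
def pick_solver(n: int) -> str:
    """Hypothetical helper encoding the rules of thumb above.

    n is the linear grid size N; the threshold mirrors the list,
    not any logic actually implemented in the package.
    """
    if n < 2000:
        return "pardiso"            # fastest for small problems, but memory-hungry
    return "petsc-cg.hypre"         # scales best for large problems

# Example: choose a solver string for a 5000 x 5000 grid
print(pick_solver(5000))            # -> petsc-cg.hypre
```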
The most reliable solvers for big problems are `petsc-cg.hypre` and `pardiso`. Below are test data obtained on rough "contact" problems on an Intel(R) Xeon(R) Platinum 8488C CPU. Only the solver's time is shown.
| N | PETSc-CG.Hypre, CPU time (s) | Intel MKL Pardiso, CPU time (s) |
|---|---|---|
| 20 000 | 1059.22 | ∅ |
| 10 000 | 278.18 | 112.38 |
| 5 000 | 70.62 | 28.42 |
| 2 500 | 17.72 | 6.34 |
| 1 250 | 4.47 | 1.93 |
∅ `pardiso` could not run because it required more than 256 GB of memory.
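To time a single solve on your own hardware, a minimal script along the following lines can be used; the geometry is a placeholder rather than the rough contact surfaces used for the table above, and the measured time includes setup, not only the solver:

```python
import time
import numpy as np
from fluxflow import transport as FS

N = 1250                                   # linear grid size, as in the table above
gaps = np.ones((N, N))                     # placeholder geometry: fully open field
gaps[N//3:2*N//3, N//3:2*N//3] = 0.0       # with a single impermeable square patch

t0 = time.perf_counter()
FS.solve_fluid_problem(gaps, solver="petsc-cg.hypre")
print(f"wall time: {time.perf_counter() - t0:.2f} s")
```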
CPU/RAM Performance
Performance of the code on a truncated rough surface is shown below. The peak memory consumption and the CPU times required to perform the connectivity analysis, construct the matrix, and solve the linear system are provided. The real number of DOFs is reported, which corresponds to approximately 84% of the full square grid.
An example of a fluid flow simulation is shown below; with the PETSc solver the CPU time is only 97 seconds and the peak memory consumption is 25.8 GB.
Another example, computed with the `petsc-cg.gamg` solver.
- Author: Vladislav A. Yastrebov (CNRS, Mines Paris - PSL)
- AI usage: Cursor & Copilot (different models), ChatGPT 4o, 5, Claude Sonnet 3.7, 4, 4.5
- License: BSD 3-clause
- Date: Sept-Oct 2025