9 changes: 9 additions & 0 deletions .devcontainer/devcontainer.json
@@ -0,0 +1,9 @@
{
"name": "MFC Container",
"image": "sbryngelson/mfc:latest-cpu",
"workspaceFolder": "/opt/MFC",
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
"editor.formatOnSave": true
},
}
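
To try the same environment outside a devcontainer, a minimal sketch (assuming the `sbryngelson/mfc:latest-cpu` image referenced above is published on Docker Hub; the `--entrypoint` override matters only if the image keeps a non-shell entrypoint such as the `tail -f /dev/null` used in the Dockerfile later in this PR):

```shell
# Pull the published CPU image and open a shell in the same
# workspace folder the devcontainer uses.
docker pull sbryngelson/mfc:latest-cpu
docker run --rm -it --entrypoint /bin/bash -w /opt/MFC sbryngelson/mfc:latest-cpu
```
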
79 changes: 79 additions & 0 deletions .github/.dockerignore
@@ -0,0 +1,79 @@
node_modules/
package.json
yarn.lock

.venv/
.vscode/
src/*/autogen/

*.swo
*.swp

*:Zone.Identifier

.nfs*

__pycache__

*.egg-info

.DS_Store

# NVIDIA Nsight Compute
*.nsys-rep
*.sqlite

docs/*/initial*
docs/*/result*
docs/documentation/*-example.png
docs/documentation/examples.md

examples/*batch/*/
examples/**/D/*
examples/**/p*
examples/**/D_*
examples/**/*.inf
examples/**/*.inp
examples/**/*.o*
examples/**/silo*
examples/**/restart_data*
examples/**/*.out
examples/**/binary
examples/**/fort.1
examples/**/*.sh
examples/**/*.err
examples/**/viz/
examples/*.jpg
examples/*.png
examples/*/workloads/
examples/*/run-*/
examples/*/logs/
examples/**/*.f90
workloads/

benchmarks/*batch/*/
benchmarks/*/D/*
benchmarks/*/p*
benchmarks/*/D_*
benchmarks/*/*.inf
benchmarks/*/*.inp
benchmarks/*/*.dat
benchmarks/*/*.o*
benchmarks/*/silo*
benchmarks/*/restart_data*
benchmarks/*/*.out
benchmarks/*/binary
benchmarks/*/fort.1
benchmarks/*/*.sh
benchmarks/*/*.err
benchmarks/*/viz/
benchmarks/*.jpg
benchmarks/*.png

*.mod

# Video Files
*.mp4
*.mov
*.mkv
*.avi
55 changes: 55 additions & 0 deletions .github/Dockerfile
@@ -0,0 +1,55 @@
ARG BASE_IMAGE
FROM ${BASE_IMAGE}

ARG TARGET
ARG CC_COMPILER
ARG CXX_COMPILER
ARG FC_COMPILER
ARG COMPILER_PATH
ARG COMPILER_LD_LIBRARY_PATH

RUN apt-get update -y && \
if [ "$TARGET" != "gpu" ]; then \
apt-get install -y \
build-essential git make cmake gcc g++ gfortran bc\
python3 python3-venv python3-pip \
openmpi-bin libopenmpi-dev libfftw3-dev \
mpich libmpich-dev; \
else \
apt-get install -y \
build-essential git make cmake bc\
python3 python3-venv python3-pip \
libfftw3-dev \
openmpi-bin libopenmpi-dev; \
fi && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ENV OMPI_ALLOW_RUN_AS_ROOT=1
ENV OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
ENV PATH="/opt/MFC:$PATH"

COPY ../ /opt/MFC
Bug: Build Context Restricts Parent Directory Access

The COPY ../ instruction tries to access the parent directory of the build context. Since the build context is /mnt/share, ../ resolves to /mnt, preventing the repository files at /mnt/share from being copied into the image. This results in missing or incorrect files.
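
One possible fix, assuming the workflow keeps staging the repository at the root of the build context (`/mnt/share`), is to copy the context itself rather than its parent:

```dockerfile
# Copy the build context (the staged repository) into the image
# instead of reaching outside the context with ../
COPY . /opt/MFC
```
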



ENV CC=${CC_COMPILER}
ENV CXX=${CXX_COMPILER}
ENV FC=${FC_COMPILER}
ENV PATH="${COMPILER_PATH}:$PATH"
ENV LD_LIBRARY_PATH="${COMPILER_LD_LIBRARY_PATH}:${LD_LIBRARY_PATH:-}"

RUN echo "TARGET=$TARGET CC=$CC_COMPILER FC=$FC_COMPILER" && \
cd /opt/MFC && \
if [ "$TARGET" = "gpu" ]; then \
./mfc.sh build --gpu -j $(nproc); \
else \
./mfc.sh build -j $(nproc); \
fi

RUN cd /opt/MFC && \
if [ "$TARGET" = "gpu" ]; then \
./mfc.sh test -a --dry-run --gpu -j $(nproc); \
else \
./mfc.sh test -a --dry-run -j $(nproc); \
fi

WORKDIR /opt/MFC
ENTRYPOINT ["tail", "-f", "/dev/null"]
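
Because the entrypoint only keeps the container alive, commands are typically run with `docker exec`. A sketch, assuming an image built from this Dockerfile and tagged `mfc:local` (the tag is illustrative):

```shell
# Start the container detached; the tail -f /dev/null entrypoint keeps it running.
docker run -d --name mfc mfc:local
# WORKDIR is /opt/MFC, so MFC commands can be exec'd directly.
docker exec -it mfc ./mfc.sh test -j "$(nproc)"
# Clean up when finished.
docker rm -f mfc
```
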
132 changes: 132 additions & 0 deletions .github/workflows/docker.yml
@@ -0,0 +1,132 @@
name: Containerization

on:
release:
types: [published]
workflow_dispatch:
inputs:
tag:
description: 'tag to containerize'
required: true

concurrency:
group: Containerization
cancel-in-progress: false

jobs:
Container:
strategy:
matrix:
config:
- { name: 'cpu', runner: 'ubuntu-22.04', base_image: 'ubuntu:22.04' }
- { name: 'gpu', runner: 'ubuntu-22.04', base_image: 'nvcr.io/nvidia/nvhpc:23.11-devel-cuda_multi-ubuntu22.04' }
- { name: 'gpu', runner: 'ubuntu-22.04-arm', base_image: 'nvcr.io/nvidia/nvhpc:23.11-devel-cuda_multi-ubuntu22.04' }
runs-on: ${{ matrix.config.runner }}
outputs:
tag: ${{ steps.clone.outputs.tag }}
steps:
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
tool-cache: false
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true

- name: Login
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}

- name: Setup Buildx
uses: docker/setup-buildx-action@v3

- name: Setup QEMU
uses: docker/setup-qemu-action@v3

- name: Clone
id: clone
run: |
TAG="${{ github.event.inputs.tag || github.ref_name }}"
echo "tag=$TAG" >> $GITHUB_OUTPUT
echo "TAG=$TAG" >> $GITHUB_ENV
git clone --branch "$TAG" --depth 1 https://github.com/MFlowCode/MFC.git mfc

- name: Stage
run: |
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo mkdir -p /home/runner/tmp
export TMPDIR=/home/runner/tmp
free -h
sudo mkdir -p /mnt/share
sudo chmod 777 /mnt/share
cp -r mfc/* /mnt/share/
cp -r mfc/.git /mnt/share/.git
cp mfc/.github/Dockerfile /mnt/share/
cp mfc/.github/.dockerignore /mnt/share/
docker buildx create --name mfcbuilder --driver docker-container --use

- name: Build and push image (cpu)
if: ${{ matrix.config.name == 'cpu' }}
uses: docker/build-push-action@v6
with:
builder: mfcbuilder
context: /mnt/share
file: /mnt/share/Dockerfile
platforms: linux/amd64,linux/arm64
build-args: |
BASE_IMAGE=${{ matrix.config.base_image }}
TARGET=${{ matrix.config.name }}
CC_COMPILER=${{ 'gcc' }}
CXX_COMPILER=${{ 'g++' }}
FC_COMPILER=${{ 'gfortran' }}
COMPILER_PATH=${{ '/usr/bin' }}
COMPILER_LD_LIBRARY_PATH=${{ '/usr/lib' }}
tags: ${{ secrets.DOCKERHUB_USERNAME }}/mfc:${{ env.TAG }}-${{ matrix.config.name }}
push: true

- name: Build and push image (gpu)
if: ${{ matrix.config.name == 'gpu' }}
uses: docker/build-push-action@v5
with:
builder: default
context: /mnt/share
file: /mnt/share/Dockerfile
build-args: |
BASE_IMAGE=${{ matrix.config.base_image }}
TARGET=${{ matrix.config.name }}
CC_COMPILER=${{ 'nvc' }}
CXX_COMPILER=${{ 'nvc++' }}
FC_COMPILER=${{ 'nvfortran' }}
COMPILER_PATH=${{ '/opt/nvidia/hpc_sdk/Linux_x86_64/compilers/bin' }}
COMPILER_LD_LIBRARY_PATH=${{ '/opt/nvidia/hpc_sdk/Linux_x86_64/compilers/lib' }}
tags: ${{ secrets.DOCKERHUB_USERNAME }}/mfc:${{ env.TAG }}-${{ matrix.config.name }}-${{ matrix.config.runner}}
push: true

manifests:
runs-on: ubuntu-latest
needs: Container
steps:
- name: Login
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}

- name: Create and Push Manifest Lists
env:
TAG: ${{ needs.Container.outputs.tag }}
REGISTRY: ${{ secrets.DOCKERHUB_USERNAME }}/mfc
run: |
docker buildx imagetools create -t $REGISTRY:latest-cpu $REGISTRY:$TAG-cpu
docker manifest create $REGISTRY:$TAG-gpu $REGISTRY:$TAG-gpu-ubuntu-22.04 $REGISTRY:$TAG-gpu-ubuntu-22.04-arm
docker manifest create $REGISTRY:latest-gpu $REGISTRY:$TAG-gpu-ubuntu-22.04 $REGISTRY:$TAG-gpu-ubuntu-22.04-arm
Bug: Manifest Tags Overwritten, Causing Race Conditions

The latest-cpu and latest-gpu manifest tags are unconditionally overwritten by each workflow run. This means latest might point to an older or incorrect version, especially with concurrent runs or out-of-order processing, creating a race condition for the latest reference.


docker manifest push $REGISTRY:$TAG-gpu
docker manifest push $REGISTRY:latest-gpu
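
One way to address the race condition noted in the review comment above is to move the `latest-*` tags forward only when the release tag is the newest published version. A sketch (the guard itself is illustrative):

```shell
# Move latest-cpu forward only when $TAG is the highest version
# among the repository's existing tags.
NEWEST=$({ echo "$TAG"; git ls-remote --tags --refs https://github.com/MFlowCode/MFC.git | sed 's|.*refs/tags/||'; } | sort -V | tail -n 1)
if [ "$TAG" = "$NEWEST" ]; then
  docker buildx imagetools create -t "$REGISTRY:latest-cpu" "$REGISTRY:$TAG-cpu"
fi
```
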
42 changes: 24 additions & 18 deletions README.md
@@ -27,8 +27,9 @@

**Welcome!**
MFC simulates compressible multi-phase flows, [among other things](#what-else-can-this-thing-do).
It uses metaprogramming to stay short and portable (~20K lines).
MFC conducted the largest known, open CFD simulation at <a href="https://arxiv.org/abs/2505.07392" target="_blank">200 trillion grid points</a>, and 1 quadrillion degrees of freedom (as of September 2025), and is a 2025 Gordon Bell Prize finalist.
It uses metaprogramming and is short (20K lines) and portable.
MFC conducted the largest known CFD simulation at <a href="https://arxiv.org/abs/2505.07392" target="_blank">200 trillion grid points</a>, and 1 quadrillion degrees of freedom (as of September 2025).
MFC is a 2025 Gordon Bell Prize Finalist.

<p align="center">
<a href="https://doi.org/10.48550/arXiv.2503.07953" target="_blank">
@@ -76,7 +77,7 @@ This one simulates high-Mach flow over an airfoil:
<img src="docs/res/airfoil.png" alt="Airfoil Example" width="700"/><br/>
</p>

And here is a high amplitude acoustic wave reflecting and emerging through a circular orifice:
And here is a high-amplitude acoustic wave reflecting and emerging through a circular orifice:

<p align="center">
<img src="docs/res/orifice.png" alt="Orifice Example" width="700"/><br/>
@@ -85,15 +86,23 @@ And here is a high amplitude acoustic wave reflecting and emerging through a cir

## Getting started

You can navigate [to this webpage](https://mflowcode.github.io/documentation/md_getting-started.html) to get started using MFC!
For a _very_ quick start, open a GitHub Codespace to load a pre-configured Docker container and familiarize yourself with MFC commands.
Click <kbd> <> Code</kbd> (green button at top right) → <kbd>Codespaces</kbd> (right tab) → <kbd>+</kbd> (create a codespace).

> **Note:** Codespaces is a free service with a monthly quota of compute time and storage usage.
> It is recommended for testing commands, troubleshooting, and running simple case files without installing dependencies or building MFC on your device.
> Don't conduct any critical work here!
> To learn more, please see [how Docker & Containers work](https://mflowcode.github.io/documentation/docker.html).
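
Inside the Codespace, the usual entry point is `./mfc.sh`. A few commands to try (the case path and flags below are illustrative, not prescriptive):

```shell
./mfc.sh build -j "$(nproc)"                        # build MFC
./mfc.sh test -j "$(nproc)"                         # run the test suite
./mfc.sh run examples/3d_shockdroplet/case.py -n 2  # run an example case
```
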

You can navigate [to this webpage](https://mflowcode.github.io/documentation/md_getting-started.html) to get started using MFC on your local machine, cluster, or supercomputer!
It's rather straightforward.
We'll give a brief intro. here for MacOS.
We'll give a brief introduction for macOS below.
Using [brew](https://brew.sh), install MFC's dependencies:
```shell
brew install coreutils python cmake fftw hdf5 gcc boost open-mpi lapack
```
You're now ready to build and test MFC!
Put it to a convenient directory via
Put it to a local directory via
```shell
git clone https://github.com/MFlowCode/MFC
cd MFC
@@ -123,17 +132,14 @@ You can visualize the output data in `examples/3d_shockdroplet/silo_hdf5` via Pa
## Is this _really_ exascale?

[OLCF Frontier](https://www.olcf.ornl.gov/frontier/) is the first exascale supercomputer.
The weak scaling of MFC on this machine shows near-ideal utilization.
The weak scaling of MFC on this machine shows near-ideal utilization.
We also scale ideally to >98% of LLNL El Capitan.

<p align="center">
<img src="docs/res/scaling.png" alt="Scaling" width="400"/>
</p>


## What else can this thing do

MFC has many features.
They are organized below.
## What else can this thing do?

### Physics

@@ -209,7 +215,7 @@

If you use MFC, consider citing it as below.
Ref. 1 includes all modern MFC features, including GPU acceleration and many new physics features.
If referencing MFC's (GPU) performance, consider citing ref. 1 and 2, which describe the solver and how it was crafted.
If referencing MFC's (GPU) performance, consider citing ref. 1 and 2, which describe the solver and its design.
The original open-source release of MFC is ref. 3, which should be cited for provenance as appropriate.

```bibtex
@@ -249,11 +255,11 @@ MFC is under the MIT license (see [LICENSE](LICENSE) for full text).

## Acknowledgements

Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE), and the National Science Foundation (NSF).
Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE) and National Nuclear Security Administration (NNSA), and the National Science Foundation (NSF).

MFC computations have used many supercomputing systems. A partial list is below
* OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson)
* LLNL El Capitan, Tuolumne, and Lassen; El Capitan early access system Tioga
* OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson).
* LLNL El Capitan, Tuolumne, and Lassen; El Capitan early access system Tioga.
* NCSA Delta and DeltaAI, PSC Bridges(1/2), SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI allocations from Bryngelson, Colonius, Rodriguez, and more.
* DOD systems Blueback, Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program
* Sandia National Labs systems Doom and Attaway and testbed systems Weaver and Vortex
* DOD systems Blueback, Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program.
* Sandia National Labs systems Doom and Attaway, and testbed systems Weaver and Vortex.