
Code for paper "Beyond Closure Models: Learning Chaotic Systems via Physics-Informed Neural Operators".


neuraloperator/pino-closure-models


PINO for long-term stats in chaotic dynamics

This repository contains the code implementation for the paper Beyond Closure Models: Learning Chaotic Systems via Physics-Informed Neural Operators. The paper is available at arxiv.org/abs/2408.05177.

Introduction

Our paper contains experiments on three chaotic dynamical systems: the 1D Kuramoto–Sivashinsky equation, and 2D Kolmogorov flow (Navier–Stokes) at both a small ($100$) and a challenging ($1.6\times 10^4$) Reynolds number.

For each dynamical system (experiment), the code consists of the following parts.

  1. Numerical solver (for data generation).
  2. Training the PINO model (three steps, as described in the paper).
  3. Evaluation (estimating long-term statistics and visualizing the results).
  4. Baseline methods.

KS equation

Repo Structure

ks/
│
├── ks_train_pino.py, ks_train_pino_2.py, ks_train_pino_3.py  # Main entry point for training the model, corresponding to the three steps in our algorithm.
├── solver/           # Numerical solver for data generation.
├── data/            # Codes for data preprocessing and datasets.
│     ├── stat_save/   # Statistics estimated with different methods.
├── evaluation/        # Codes for evaluations.
├── model_save/        # The model after training.
├── fig_save/         # Visualizations of the experiment results.
├── ../neuralop_base/    # Neural Operator Model.
└── ../config/ks_*.yaml    # Configuration files for experiments.

Data generation

cd ks/solver
python KS_solver.py

Control the domain size, viscosity coefficient, spatial and temporal grid sizes, number of trajectories, and the time interval for saving snapshots in ks/solver/ParaCtrl.py.
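For orientation, the dynamics generated here can be sketched with a minimal pseudo-spectral integrator for the 1D Kuramoto–Sivashinsky equation. This is an illustrative toy, not the repo's solver; the parameter names only mirror the kinds of knobs exposed in ks/solver/ParaCtrl.py:

```python
import numpy as np

# Minimal pseudo-spectral sketch of the 1D Kuramoto-Sivashinsky equation
#   u_t + u u_x + u_xx + nu * u_xxxx = 0   on the periodic domain [0, L_dom).
# Illustrative only; NOT the repo's solver. The parameters below merely
# mirror the kinds of settings controlled in ks/solver/ParaCtrl.py.
L_dom, nu = 32.0 * np.pi, 1.0        # domain size, viscosity coefficient
nx, dt, n_steps = 128, 0.01, 500     # spatial grid size, time step, #steps
save_every = 100                     # snapshot-saving interval (in steps)

k = 2.0 * np.pi * np.fft.rfftfreq(nx, d=L_dom / nx)  # angular wavenumbers
lin = k**2 - nu * k**4               # linear part in Fourier space

rng = np.random.default_rng(0)
u_hat = np.fft.rfft(0.1 * rng.standard_normal(nx))   # random initial state
snapshots = [np.fft.irfft(u_hat, n=nx)]
for step in range(1, n_steps + 1):
    u = np.fft.irfft(u_hat, n=nx)
    nonlin = -0.5j * k * np.fft.rfft(u * u)           # -(u^2 / 2)_x
    u_hat = (u_hat + dt * nonlin) / (1.0 - dt * lin)  # semi-implicit Euler
    if step % save_every == 0:
        snapshots.append(np.fft.irfft(u_hat, n=nx))
```

A production solver would add dealiasing and a higher-order exponential time integrator; this sketch only shows where each ParaCtrl.py-style parameter enters.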

After generating the data, move the dataset file to ks/data and replace the links on lines 15, 19, 28, and 30 of ks/data_dict.py with the filename of the dataset.
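As an illustration of what those links look like, data_dict.py-style files typically map dataset roles to filenames. The sketch below is hypothetical; the keys and paths are placeholders, not the repo's actual identifiers, so edit the real lines listed above instead:

```python
# Hypothetical sketch of the filename links in ks/data_dict.py.
# Keys and paths are placeholders, not the repo's actual identifiers;
# in practice, edit lines 15, 19, 28, and 30 of the real file.
data_dict = {
    "cgs_train": "ks/data/ks_cgs_train.pt",  # coarse-grid simulation data
    "cgs_test":  "ks/data/ks_cgs_test.pt",
    "frs_train": "ks/data/ks_frs_train.pt",  # fully-resolved simulation data
    "frs_test":  "ks/data/ks_frs_test.pt",
}
print(data_dict["cgs_train"])
```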

Training

Before training, set the wandb project name and username in the configuration files (../config/ks_*.yaml)

wandb:
    project: # add project name here
    entity:  # add your username here

and create a file in config/ named wandb_api_key.txt that contains your wandb API key.
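The training scripts presumably read this key file to authenticate with wandb (this is our assumption about the mechanism, not documented behavior of the repo); a minimal sketch of the setup:

```python
from pathlib import Path

# Sketch of the wandb key-file setup described above.
# "YOUR_WANDB_API_KEY" is a placeholder; substitute your real key.
cfg_dir = Path("config")
cfg_dir.mkdir(exist_ok=True)
(cfg_dir / "wandb_api_key.txt").write_text("YOUR_WANDB_API_KEY\n")

# A training script would presumably read the key and authenticate, roughly:
key = (cfg_dir / "wandb_api_key.txt").read_text().strip()
# import wandb; wandb.login(key=key)   # assumed usage; needs `pip install wandb`
```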

Train the PINO model with

cd ks/
python ks_train_pino.py    # Step 1: supervised learning with CGS data.
python ks_train_pino_2.py  # Step 2: supervised learning with CGS and some FRS data.
python ks_train_pino_3.py  # Step 3: training with the PDE loss.

Evaluation

Before testing the model, make sure that ks/data/ks_stat_uv_emp_range.pt (used for computing the total variation error) and ks/data/ks_128_1500_random_init.pt (used as random initializations for the evaluation experiments) are available. If they are missing, generate the former by running python range_of_emp_measure.py and the latter by saving the $t=0$ snapshots from the solver code.

To apply the trained neural operator in coarse-grid simulations, run

cd ks/evaluation
python station.py

To visualize the results, run python final_compare_stat.py. The visualization and error-computation code consists of two functions, plot_all_stat and save_all_err, in evaluation/plot_stat_new.py. Before running the visualization code, make sure that the results of all methods have been saved in data/stat_save with the same file names as in ks/evaluation/final_compare_stat.py, lines 12-15.
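The total variation error mentioned above can be sketched as follows: bin the estimated and the reference long-term statistics over a common range (the role played by ks/data/ks_stat_uv_emp_range.pt) and take half the L1 distance between the normalized histograms. This is an illustrative sketch with made-up sample data, not the repo's exact implementation, and tv_error is a hypothetical name:

```python
import numpy as np

def tv_error(samples_a, samples_b, lo, hi, n_bins=64):
    """Total-variation distance between two empirical 1D distributions,
    binned over a shared range [lo, hi]. Illustrative sketch only; the
    repo's exact computation lives in its evaluation code."""
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(samples_a, bins=bins)
    q, _ = np.histogram(samples_b, bins=bins)
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)
err = tv_error(x, y, -4.0, 4.0)  # small: samples share one distribution
```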

Baseline Methods

See details in baseline/.

NS equation (small Reynolds number)

Repo Structure

kf/
│
├── kf_train1.py, kf_train2.py, kf_train3.py  # Main entry point for training the model, corresponding to the three steps in our algorithm.
├── solver/           # Numerical solver for data generation.
├── data/            # Codes for data preprocessing and datasets.
│     ├── stat_save/   # Statistics estimated with different methods.
├── evaluation/        # Codes for evaluations.
├── model_save/        # The model after training.
├── fig_save/         # Visualizations of the experiment results.
├── ../neuralop_base/    # Neural Operator Model.
└── ../config/kf_*.yaml    # Configuration files for experiments.

Data generation

cd kf/solver
python gen_data2.py

Control the domain size, viscosity coefficient, spatial and temporal grid sizes, number of trajectories, and the time interval for saving snapshots on lines 15-24 of kf/solver/gen_data2.py.
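For orientation, a minimal semi-implicit pseudo-spectral step for 2D Kolmogorov flow in vorticity form can be sketched as below. This is an illustrative toy, not the repo's gen_data2.py: there is no dealiasing, the forcing wavenumber is assumed, and only the viscosity nu = 1/100 echoes the small-Reynolds-number setting:

```python
import numpy as np

# Minimal 2D Kolmogorov-flow sketch in vorticity form (illustrative only,
# NOT the repo's solver; no dealiasing):
#   w_t + u . grad(w) = nu * lap(w) + f,   f(x, y) = sin(kf_mode * y),
# on the periodic box [0, 2*pi)^2.
n, nu, dt, n_steps = 64, 1.0 / 100.0, 1e-3, 200
kf_mode = 4                                    # forcing wavenumber (assumed)

k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2_inv = np.where(k2 == 0.0, 1.0, 1.0 / k2)

y = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f_hat = np.fft.fft2(np.broadcast_to(np.sin(kf_mode * y), (n, n)))

rng = np.random.default_rng(0)
w_hat = np.fft.fft2(0.1 * rng.standard_normal((n, n)))  # initial vorticity
for _ in range(n_steps):
    psi_hat = w_hat * k2_inv                   # stream function: -lap(psi) = w
    u = np.fft.ifft2(1j * ky * psi_hat).real   # u =  d(psi)/dy
    v = np.fft.ifft2(-1j * kx * psi_hat).real  # v = -d(psi)/dx
    wx = np.fft.ifft2(1j * kx * w_hat).real
    wy = np.fft.ifft2(1j * ky * w_hat).real
    nonlin_hat = np.fft.fft2(-(u * wx + v * wy))
    # Semi-implicit Euler: implicit viscosity, explicit advection + forcing.
    w_hat = (w_hat + dt * (nonlin_hat + f_hat)) / (1.0 + dt * nu * k2)

w_final = np.fft.ifft2(w_hat).real
```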

After generating the data, move the dataset file to kf/data and rename it filename_cgs.pt or filename_frs.pt, so that it matches the link name in kf/data_dict.py.

Training

Before training, set the wandb project name and username in the configuration files (../config/kf_*.yaml)

wandb:
    project: # add project name here
    entity:  # add your username here

and create a file in config/ named wandb_api_key.txt that contains your wandb API key.

Train the PINO model with

cd kf/
python kf_train1.py  # Step 1: supervised learning with CGS data.
python kf_train2.py  # Step 2: supervised learning with CGS and some FRS data.
python kf_train3.py  # Step 3: training with the PDE loss.

Evaluation

Before testing the model, make sure that kf/data/kf_stat_uv_emp_range.pt (used for computing the total variation error) and the random coarse-grid dataset (used as initializations for the evaluation experiments) are available. If they are missing, generate the former by running python range_of_emp_measure_kf.py and the latter by saving the $t=0$ snapshots from the solver code.

To apply the trained neural operator in coarse-grid simulations, run

cd kf/evaluation
python station.py

To visualize the results, run python final_compare_stat.py. The visualization and error-computation code consists of two functions, plot_all_stat and save_all_err, in evaluation/plot_stat_new.py. Before running the visualization code, make sure that the results of all methods have been saved in data/stat_save with the same file names as in kf/evaluation/final_compare_stat.py, lines 11-16.

Baseline Methods

See details in baseline/.

NS equation (high Reynolds number)

Repo Structure

ns1w/
│
├── t1_train.py, t2_train.py, t3_train_pino.py  # Main entry point for training the model, corresponding to the three steps in our algorithm.
├── solver/           # Numerical solver for data generation.
├── data/            # Codes for data preprocessing and datasets.
│     ├── stat_save/   # Statistics estimated with different methods.
├── evaluation/        # Codes for evaluations.
├── model_save/        # The model after training.
├── fig_save/         # Visualizations of the experiment results.
├── ../neuralop_advance/    # Modified neural operator model that markedly reduces peak CUDA memory during training so it fits on GPU devices.
└── ../config/ns1w_*.yaml    # Configuration files for experiments.

Data generation

cd ns1w/solver
python gen_data2.py

Control the domain size, viscosity coefficient, spatial and temporal grid sizes, number of trajectories, and the time interval for saving snapshots on lines 15-24 of ns1w/solver/gen_data2.py.

After generating the data, move the dataset file to ns1w/data and rename it filename_cgs.pt or filename_frs.pt, so that it matches the link name in ns1w/data_dict.py. The input dataset named filename_pde_input.pt (as in ns1w/data_dict.py) consists of high-resolution random functions; they are used only as model input to compute the PDE loss in the third training stage.
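High-resolution random input functions of this sort are often drawn as periodic Gaussian random fields with a power-law spectrum. The sketch below is illustrative only; it is not the repo's sampler, and the alpha/tau values are assumptions, not the paper's:

```python
import numpy as np

# Hedged sketch: sample a periodic 2D Gaussian random field on an n x n grid
# with covariance spectrum ~ (|k|^2 + tau^2)^(-alpha). Illustrative only;
# NOT the repo's sampler, and alpha/tau below are assumed values.
def sample_grf_2d(n, alpha=2.5, tau=3.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    scale = (k2 + tau**2) ** (-alpha / 2.0)    # spectral decay of amplitudes
    scale[0, 0] = 0.0                          # enforce a zero-mean field
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (np.fft.ifft2(noise * scale) * n).real

u0 = sample_grf_2d(256)                        # one high-resolution input
```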

Training

Before training, set the wandb project name and username in the configuration files (../config/ns1w_*.yaml)

wandb:
    project: # add project name here
    entity:  # add your username here

and create a file in config/ named wandb_api_key.txt that contains your wandb API key.

Train the PINO model with

cd ns1w/
python t1_train.py       # Step 1: supervised learning with CGS data.
python t2_train.py       # Step 2: supervised learning with CGS and some FRS data.
python t3_train_pino.py  # Step 3: training with the PDE loss.

Evaluation

Before testing the model, make sure that ns1w/data/kf_stat_uv_emp_range.pt (used for computing the total variation error) and the random coarse-grid dataset (used as initializations for the evaluation experiments) are available. If they are missing, generate the former by running python range_of_emp_measure_kf.py and the latter by saving the $t=0$ snapshots from the solver code.

To apply the trained neural operator in coarse-grid simulations, run

cd ns1w/evaluation
python station.py

To visualize the results, run python final_compare_stat.py. The visualization and error-computation code consists of two functions, plot_all_stat and save_all_err, in evaluation/plot_stat_new.py. Before running the visualization code, make sure that the results of all methods have been saved in data/stat_save with the same file names as in ns1w/evaluation/final_compare_stat.py, lines 11-16.

Baseline Methods

See details in baseline/.

Contact

Should you have any questions about our work or its implementation, feel free to reach out to chuweiw at caltech dot edu.

Citation

If you find this repository useful, please consider giving it a star ⭐ and citing our paper.

@article{wang2024beyond,
  title={Beyond Closure Models: Learning Chaotic Systems via Physics-Informed Neural Operators},
  author={Wang, Chuwei and Berner, Julius and Li, Zongyi and Zhou, Di and Wang, Jiayun and Bae, Jane and Anandkumar, Anima},
  journal={arXiv preprint arXiv:2408.05177},
  year={2024}
}
