diff --git a/docs/backbones/m3gnet.md b/docs/backbones/m3gnet.md
index bafa325..cd1cd59 100644
--- a/docs/backbones/m3gnet.md
+++ b/docs/backbones/m3gnet.md
@@ -1,8 +1,19 @@
 # M3GNet Backbone
 
-The M3GNet backbone implements the M3GNet model architecture in MatterTune. It provides a powerful graph neural network designed specifically for materials science applications.
+The M3GNet backbone implements the M3GNet model architecture in MatterTune. It provides a powerful graph neural network designed specifically for materials science applications. In MatterTune, we use the M3GNet implementation provided by MatGL, pretrained on the MPTraj dataset.
 
-## Overview
+## Installation
+
+```bash
+conda create -n matgl-tune python=3.10 -y
+pip install matgl
+pip install torch==2.2.1+cu121 -f https://download.pytorch.org/whl/torch_stable.html
+pip uninstall dgl
+pip install dgl -f https://data.dgl.ai/wheels/torch-2.2/cu121/repo.html
+pip install dglgo -f https://data.dgl.ai/wheels-test/repo.html
+```
+
+## Key Features
 
 M3GNet supports predicting:
 - Total energy (with energy conservation)
diff --git a/docs/backbones/mattersim.md b/docs/backbones/mattersim.md
new file mode 100644
index 0000000..1cd2261
--- /dev/null
+++ b/docs/backbones/mattersim.md
@@ -0,0 +1,101 @@
+# MatterSim Backbone
+
+> Note: As of the latest MatterTune update, MatterSim has released only its M3GNet model.
+
+The MatterSim backbone integrates the MatterSim model architecture into MatterTune. MatterSim is a foundational atomistic model designed to simulate materials properties across a wide range of elements, temperatures, and pressures.
+
+## Installation
+
+We strongly recommend installing MatterSim from source:
+
+```bash
+git clone git@github.com:microsoft/mattersim.git
+cd mattersim
+```
+
+Find line 41 of MatterSim's `pyproject.toml`, which reads `"pydantic==2.9.2",`, and change it to `"pydantic>=2.9.2",`.
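+
+If you prefer to script this one-line edit, a `sed` call along the following lines should work (GNU sed shown; on macOS use `sed -i ''`; verify the result afterwards):
+
+```bash
+# Relax the pydantic pin from ==2.9.2 to >=2.9.2
+sed -i 's/"pydantic==2.9.2",/"pydantic>=2.9.2",/' pyproject.toml
+```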
+
+After making this change, install MatterSim by running:
+
+```bash
+mamba env create -f environment.yaml
+mamba activate mattersim
+uv pip install -e .
+python setup.py build_ext --inplace
+```
+
+## Key Features
+
+- Pretrained on materials data covering a wide range of elements, temperatures, and pressures.
+- Flexible model architecture selection:
+  - MatterSim-v1.0.0-1M: a mini version of the M3GNet that is faster to run.
+  - MatterSim-v1.0.0-5M: a larger version of the M3GNet that is more accurate.
+  - TO BE RELEASED: a Graphormer model at an even larger parameter scale.
+- Support for property predictions:
+  - Energy (extensive/intensive)
+  - Forces (conservative for M3GNet, non-conservative for Graphormer)
+  - Stresses (conservative for M3GNet, non-conservative for Graphormer)
+  - Graph-level properties (available on Graphormer)
+
+## Configuration
+
+Here's a complete example showing how to configure the MatterSim backbone:
+
+```python
+from mattertune import configs as MC
+from pathlib import Path
+
+config = MC.MatterTunerConfig(
+    model=MC.MatterSimBackboneConfig(
+        # Required: name of the pre-trained checkpoint
+        pretrained_model="MatterSim-v1.0.0-5M",
+
+        # Graph construction settings
+        graph_convertor=MC.MatterSimGraphConvertorConfig(
+            twobody_cutoff=5.0,    # Cutoff distance for two-body interactions
+            has_threebody=True,    # Whether to include three-body interactions
+            threebody_cutoff=4.0,  # Cutoff distance for three-body interactions
+        ),
+
+        # Properties to predict
+        properties=[
+            # Energy prediction
+            MC.EnergyPropertyConfig(
+                loss=MC.MAELossConfig(),
+                loss_coefficient=1.0
+            ),
+
+            # Force prediction (conservative)
+            MC.ForcesPropertyConfig(
+                loss=MC.MAELossConfig(),
+                loss_coefficient=10.0,
+                conservative=True
+            ),
+
+            # Stress prediction (conservative)
+            MC.StressesPropertyConfig(
+                loss=MC.MAELossConfig(),
+                loss_coefficient=1.0,
+                conservative=True
+            ),
+        ],
+
+        # Optimizer settings
+        optimizer=MC.AdamWConfig(lr=1e-4),
+
+        # Optional: learning rate scheduler
+        lr_scheduler=MC.CosineAnnealingLRConfig(
+            T_max=100,
+            eta_min=1e-6
+        )
+    )
+)
+```
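+
+The object returned by `MatterTuner.tune()` carries both the fine-tuned model and the Lightning trainer, and the model can be wrapped as an ASE calculator. A minimal sketch, assuming `config` above is extended with the required `data` section (omitted here; see the notebook tutorial):
+
+```python
+from mattertune import MatterTuner
+
+# Run fine-tuning and unpack the tuned model and trainer
+tune_output = MatterTuner(config).tune()
+model, trainer = tune_output.model, tune_output.trainer
+
+# Wrap the tuned model as an ASE calculator for downstream simulations
+calculator = model.ase_calculator()
+```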
+
+## Examples & Notebooks
+
+A notebook tutorial on how to fine-tune and use the MatterSim model can be found in `notebooks/mattersim-waterthermo.ipynb` ([link](https://github.com/Fung-Lab/MatterTune/blob/main/notebooks/mattersim-waterthermo.ipynb)).
+
+Under `water-thermodynamics` ([link](https://github.com/Fung-Lab/MatterTune/tree/main/examples/water-thermodynamics)), we give an advanced example of fine-tuning MatterSim on PES data and applying it to MD simulation.
+
+## License
+
+The MatterSim backbone is available under the MIT License.
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index b50a763..26563f7 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -31,6 +31,7 @@ backbones/jmp
 backbones/m3gnet
 backbones/orb
 backbones/eqv2
+backbones/mattersim
 ```
 
 ```{toctree}
diff --git a/docs/installation.md b/docs/installation.md
index a3ed7f2..6652d31 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -50,6 +50,24 @@ pip install "git+https://github.com/FAIR-Chem/fairchem.git@omat24#subdirectory=p
 pip install ase "e3nn>=0.5" hydra-core lmdb numba "numpy>=1.26,<2.0" orjson "pymatgen>=2023.10.3" submitit tensorboard "torch>=2.4" wandb torch_geometric h5py netcdf4 opt-einsum spglib
 ```
 
+### MatterSim
+
+We strongly recommend installing MatterSim from source:
+
+```bash
+git clone git@github.com:microsoft/mattersim.git
+cd mattersim
+```
+
+Find line 41 of MatterSim's `pyproject.toml`, which reads `"pydantic==2.9.2",`, and change it to `"pydantic>=2.9.2",`. After making this change, install MatterSim by running:
+
+```bash
+mamba env create -f environment.yaml
+mamba activate mattersim
+uv pip install -e .
+python setup.py build_ext --inplace
+```
+
 ## MatterTune Package Installation
 
 ```{important}
diff --git a/docs/introduction.md b/docs/introduction.md
index 1a3c15f..c580b18 100644
--- a/docs/introduction.md
+++ b/docs/introduction.md
@@ -14,6 +14,7 @@ Seamlessly work with multiple state-of-the-art pre-trained models including:
 - EquiformerV2
 - M3GNet
 - ORB
+- MatterSim
 
 ### Flexible Property Predictions
 Support for various molecular and materials properties:
diff --git a/docs/license.md b/docs/license.md
index 80ab1bb..23a2447 100644
--- a/docs/license.md
+++ b/docs/license.md
@@ -22,6 +22,11 @@ BSD 3-Clause License
 Apache License 2.0
 [ORB License](https://github.com/orbital-materials/orb-models/blob/main/LICENSE)
 
+### MatterSim Backbone
+MIT License
+[MatterSim License](https://github.com/microsoft/mattersim/blob/main/LICENSE.txt)
+
+
 ```{important}
 Please ensure compliance with the respective licenses when using specific model backbones in your project. For commercial use cases, carefully review each backbone's license terms or contact the respective authors for licensing options.
 ```
diff --git a/notebooks/eqv2-omat.ipynb b/notebooks/eqv2-omat.ipynb
index 0d8270c..6c597c8 100644
--- a/notebooks/eqv2-omat.ipynb
+++ b/notebooks/eqv2-omat.ipynb
@@ -1092,8 +1092,8 @@
     "\n",
     "\n",
     "hp = hparams()\n",
-    "model = MatterTuner(hp).tune()\n",
-    "model"
+    "tune_output = MatterTuner(hp).tune()\n",
+    "model, trainer = tune_output.model, tune_output.trainer"
    ]
   },
   {
diff --git a/notebooks/jmp-omat-autosplit.ipynb b/notebooks/jmp-omat-autosplit.ipynb
index 7d67a01..394bc17 100644
--- a/notebooks/jmp-omat-autosplit.ipynb
+++ b/notebooks/jmp-omat-autosplit.ipynb
@@ -922,8 +922,8 @@
     "\n",
     "\n",
     "hp = hparams()\n",
-    "model = MatterTuner(hp).tune()\n",
-    "model"
+    "tune_output = MatterTuner(hp).tune()\n",
+    "model, trainer = tune_output.model, tune_output.trainer"
    ]
   },
   {
diff --git a/notebooks/m3gnet-waterthermo.ipynb b/notebooks/m3gnet-waterthermo.ipynb
index 961af54..d4ce568 100644
--- a/notebooks/m3gnet-waterthermo.ipynb
+++ b/notebooks/m3gnet-waterthermo.ipynb
@@ -219,7 +219,7 @@
     "    ## Data hparams\n",
     "    hparams.data = MC.AutoSplitDataModuleConfig.draft()\n",
     "    hparams.data.dataset = MC.XYZDatasetConfig.draft()\n",
-    "    hparams.data.dataset.src = \"./data/water_ef.xyz\"\n",
+    "    hparams.data.dataset.src = Path(\"../examples/water-thermodynamics/data/water_ef.xyz\")\n",
     "    hparams.data.train_split = 0.8\n",
     "    hparams.data.batch_size = 1\n",
@@ -230,8 +230,8 @@
     "    return hparams\n",
     "\n",
     "\n",
-    "model = MatterTuner(hparams()).tune()\n",
-    "model\n",
+    "tune_output = MatterTuner(hparams()).tune()\n",
+    "model, trainer = tune_output.model, tune_output.trainer\n",
     "\n",
     "# pip install torch_sparse -f https://data.pyg.org/whl/torch-2.2.1+cu121.html"
    ]
diff --git a/notebooks/mattersim-waterthermo.ipynb b/notebooks/mattersim-waterthermo.ipynb
new file mode 100644
index 0000000..b049447
--- /dev/null
+++ b/notebooks/mattersim-waterthermo.ipynb
@@ -0,0 +1,594 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "INFO:root:Failed to import `lovely_tensors`. Ignoring pretty PyTorch tensor formatting\n",
+      "INFO:root:Failed to import `lovely_numpy`. Ignoring pretty numpy array formatting\n"
+     ]
+    }
+   ],
+   "source": [
+    "import logging\n",
+    "\n",
+    "import nshutils as nu\n",
+    "import rich\n",
+    "\n",
+    "logging.basicConfig(level=logging.DEBUG)\n",
+    "\n",
+    "nu.pretty()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "
MatterTunerConfig(\n",
+       "    data=AutoSplitDataModuleConfig(\n",
+       "        batch_size=2,\n",
+       "        num_workers='auto',\n",
+       "        pin_memory=True,\n",
+       "        dataset=XYZDatasetConfig(type='xyz', src=PosixPath('../examples/water-thermodynamics/data/water_ef.xyz')),\n",
+       "        train_split=0.03,\n",
+       "        validation_split='auto',\n",
+       "        shuffle=True,\n",
+       "        shuffle_seed=42\n",
+       "    ),\n",
+       "    model=MatterSimBackboneConfig(\n",
+       "        properties=[\n",
+       "            EnergyPropertyConfig(\n",
+       "                name='energy',\n",
+       "                dtype='float',\n",
+       "                loss=MAELossConfig(name='mae', reduction='mean'),\n",
+       "                loss_coefficient=1.0,\n",
+       "                type='energy'\n",
+       "            ),\n",
+       "            ForcesPropertyConfig(\n",
+       "                name='forces',\n",
+       "                dtype='float',\n",
+       "                loss=MAELossConfig(name='mae', reduction='mean'),\n",
+       "                loss_coefficient=1.0,\n",
+       "                type='forces',\n",
+       "                conservative=True\n",
+       "            ),\n",
+       "            StressesPropertyConfig(\n",
+       "                name='stresses',\n",
+       "                dtype='float',\n",
+       "                loss=MAELossConfig(name='mae', reduction='mean'),\n",
+       "                loss_coefficient=1.0,\n",
+       "                type='stresses',\n",
+       "                conservative=True\n",
+       "            )\n",
+       "        ],\n",
+       "        optimizer=AdamWConfig(\n",
+       "            name='AdamW',\n",
+       "            lr=0.0001,\n",
+       "            eps=1e-08,\n",
+       "            betas=(0.9, 0.999),\n",
+       "            weight_decay=0.01,\n",
+       "            amsgrad=False\n",
+       "        ),\n",
+       "        lr_scheduler=None,\n",
+       "        ignore_gpu_batch_transform_error=True,\n",
+       "        normalizers={},\n",
+       "        name='mattersim',\n",
+       "        pretrained_model='MatterSim-v1.0.0-5M',\n",
+       "        model_type='m3gnet',\n",
+       "        graph_convertor=MatterSimGraphConvertorConfig(\n",
+       "            twobody_cutoff=5.0,\n",
+       "            has_threebody=True,\n",
+       "            threebody_cutoff=4.0\n",
+       "        )\n",
+       "    ),\n",
+       "    trainer=TrainerConfig(\n",
+       "        accelerator='auto',\n",
+       "        strategy='auto',\n",
+       "        num_nodes=1,\n",
+       "        devices='auto',\n",
+       "        precision='32-true',\n",
+       "        deterministic=None,\n",
+       "        max_epochs=None,\n",
+       "        min_epochs=None,\n",
+       "        max_steps=-1,\n",
+       "        min_steps=None,\n",
+       "        max_time=None,\n",
+       "        val_check_interval=None,\n",
+       "        check_val_every_n_epoch=1,\n",
+       "        log_every_n_steps=None,\n",
+       "        gradient_clip_val=None,\n",
+       "        gradient_clip_algorithm=None,\n",
+       "        checkpoint=None,\n",
+       "        early_stopping=None,\n",
+       "        loggers='default',\n",
+       "        additional_trainer_kwargs={'fast_dev_run': True}\n",
+       "    )\n",
+       ")\n",
+       "
\n" + ], + "text/plain": [ + "\u001b[1;35mMatterTunerConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mdata\u001b[0m=\u001b[1;35mAutoSplitDataModuleConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mbatch_size\u001b[0m=\u001b[1;36m2\u001b[0m,\n", + " \u001b[33mnum_workers\u001b[0m=\u001b[32m'auto'\u001b[0m,\n", + " \u001b[33mpin_memory\u001b[0m=\u001b[3;92mTrue\u001b[0m,\n", + " \u001b[33mdataset\u001b[0m=\u001b[1;35mXYZDatasetConfig\u001b[0m\u001b[1m(\u001b[0m\u001b[33mtype\u001b[0m=\u001b[32m'xyz'\u001b[0m, \u001b[33msrc\u001b[0m=\u001b[1;35mPosixPath\u001b[0m\u001b[1m(\u001b[0m\u001b[32m'../examples/water-thermodynamics/data/water_ef.xyz'\u001b[0m\u001b[1m)\u001b[0m\u001b[1m)\u001b[0m,\n", + " \u001b[33mtrain_split\u001b[0m=\u001b[1;36m0\u001b[0m\u001b[1;36m.03\u001b[0m,\n", + " \u001b[33mvalidation_split\u001b[0m=\u001b[32m'auto'\u001b[0m,\n", + " \u001b[33mshuffle\u001b[0m=\u001b[3;92mTrue\u001b[0m,\n", + " \u001b[33mshuffle_seed\u001b[0m=\u001b[1;36m42\u001b[0m\n", + " \u001b[1m)\u001b[0m,\n", + " \u001b[33mmodel\u001b[0m=\u001b[1;35mMatterSimBackboneConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mproperties\u001b[0m=\u001b[1m[\u001b[0m\n", + " \u001b[1;35mEnergyPropertyConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mname\u001b[0m=\u001b[32m'energy'\u001b[0m,\n", + " \u001b[33mdtype\u001b[0m=\u001b[32m'float'\u001b[0m,\n", + " \u001b[33mloss\u001b[0m=\u001b[1;35mMAELossConfig\u001b[0m\u001b[1m(\u001b[0m\u001b[33mname\u001b[0m=\u001b[32m'mae'\u001b[0m, \u001b[33mreduction\u001b[0m=\u001b[32m'mean'\u001b[0m\u001b[1m)\u001b[0m,\n", + " \u001b[33mloss_coefficient\u001b[0m=\u001b[1;36m1\u001b[0m\u001b[1;36m.0\u001b[0m,\n", + " \u001b[33mtype\u001b[0m=\u001b[32m'energy'\u001b[0m\n", + " \u001b[1m)\u001b[0m,\n", + " \u001b[1;35mForcesPropertyConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mname\u001b[0m=\u001b[32m'forces'\u001b[0m,\n", + " \u001b[33mdtype\u001b[0m=\u001b[32m'float'\u001b[0m,\n", + " \u001b[33mloss\u001b[0m=\u001b[1;35mMAELossConfig\u001b[0m\u001b[1m(\u001b[0m\u001b[33mname\u001b[0m=\u001b[32m'mae'\u001b[0m, \u001b[33mreduction\u001b[0m=\u001b[32m'mean'\u001b[0m\u001b[1m)\u001b[0m,\n", + " \u001b[33mloss_coefficient\u001b[0m=\u001b[1;36m1\u001b[0m\u001b[1;36m.0\u001b[0m,\n", + " \u001b[33mtype\u001b[0m=\u001b[32m'forces'\u001b[0m,\n", + " \u001b[33mconservative\u001b[0m=\u001b[3;92mTrue\u001b[0m\n", + " \u001b[1m)\u001b[0m,\n", + " \u001b[1;35mStressesPropertyConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mname\u001b[0m=\u001b[32m'stresses'\u001b[0m,\n", + " \u001b[33mdtype\u001b[0m=\u001b[32m'float'\u001b[0m,\n", + " \u001b[33mloss\u001b[0m=\u001b[1;35mMAELossConfig\u001b[0m\u001b[1m(\u001b[0m\u001b[33mname\u001b[0m=\u001b[32m'mae'\u001b[0m, \u001b[33mreduction\u001b[0m=\u001b[32m'mean'\u001b[0m\u001b[1m)\u001b[0m,\n", + " \u001b[33mloss_coefficient\u001b[0m=\u001b[1;36m1\u001b[0m\u001b[1;36m.0\u001b[0m,\n", + " \u001b[33mtype\u001b[0m=\u001b[32m'stresses'\u001b[0m,\n", + " \u001b[33mconservative\u001b[0m=\u001b[3;92mTrue\u001b[0m\n", + " \u001b[1m)\u001b[0m\n", + " \u001b[1m]\u001b[0m,\n", + " \u001b[33moptimizer\u001b[0m=\u001b[1;35mAdamWConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mname\u001b[0m=\u001b[32m'AdamW'\u001b[0m,\n", + " \u001b[33mlr\u001b[0m=\u001b[1;36m0\u001b[0m\u001b[1;36m.0001\u001b[0m,\n", + " \u001b[33meps\u001b[0m=\u001b[1;36m1e\u001b[0m\u001b[1;36m-08\u001b[0m,\n", + " \u001b[33mbetas\u001b[0m=\u001b[1m(\u001b[0m\u001b[1;36m0.9\u001b[0m, \u001b[1;36m0.999\u001b[0m\u001b[1m)\u001b[0m,\n", + " 
\u001b[33mweight_decay\u001b[0m=\u001b[1;36m0\u001b[0m\u001b[1;36m.01\u001b[0m,\n", + " \u001b[33mamsgrad\u001b[0m=\u001b[3;91mFalse\u001b[0m\n", + " \u001b[1m)\u001b[0m,\n", + " \u001b[33mlr_scheduler\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mignore_gpu_batch_transform_error\u001b[0m=\u001b[3;92mTrue\u001b[0m,\n", + " \u001b[33mnormalizers\u001b[0m=\u001b[1m{\u001b[0m\u001b[1m}\u001b[0m,\n", + " \u001b[33mname\u001b[0m=\u001b[32m'mattersim'\u001b[0m,\n", + " \u001b[33mpretrained_model\u001b[0m=\u001b[32m'MatterSim-v1.0.0-5M'\u001b[0m,\n", + " \u001b[33mmodel_type\u001b[0m=\u001b[32m'm3gnet'\u001b[0m,\n", + " \u001b[33mgraph_convertor\u001b[0m=\u001b[1;35mMatterSimGraphConvertorConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33mtwobody_cutoff\u001b[0m=\u001b[1;36m5\u001b[0m\u001b[1;36m.0\u001b[0m,\n", + " \u001b[33mhas_threebody\u001b[0m=\u001b[3;92mTrue\u001b[0m,\n", + " \u001b[33mthreebody_cutoff\u001b[0m=\u001b[1;36m4\u001b[0m\u001b[1;36m.0\u001b[0m\n", + " \u001b[1m)\u001b[0m\n", + " \u001b[1m)\u001b[0m,\n", + " \u001b[33mtrainer\u001b[0m=\u001b[1;35mTrainerConfig\u001b[0m\u001b[1m(\u001b[0m\n", + " \u001b[33maccelerator\u001b[0m=\u001b[32m'auto'\u001b[0m,\n", + " \u001b[33mstrategy\u001b[0m=\u001b[32m'auto'\u001b[0m,\n", + " \u001b[33mnum_nodes\u001b[0m=\u001b[1;36m1\u001b[0m,\n", + " \u001b[33mdevices\u001b[0m=\u001b[32m'auto'\u001b[0m,\n", + " \u001b[33mprecision\u001b[0m=\u001b[32m'32-true'\u001b[0m,\n", + " \u001b[33mdeterministic\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mmax_epochs\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mmin_epochs\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mmax_steps\u001b[0m=\u001b[1;36m-1\u001b[0m,\n", + " \u001b[33mmin_steps\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mmax_time\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mval_check_interval\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mcheck_val_every_n_epoch\u001b[0m=\u001b[1;36m1\u001b[0m,\n", + " \u001b[33mlog_every_n_steps\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mgradient_clip_val\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mgradient_clip_algorithm\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mcheckpoint\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mearly_stopping\u001b[0m=\u001b[3;35mNone\u001b[0m,\n", + " \u001b[33mloggers\u001b[0m=\u001b[32m'default'\u001b[0m,\n", + " \u001b[33madditional_trainer_kwargs\u001b[0m=\u001b[1m{\u001b[0m\u001b[32m'fast_dev_run'\u001b[0m: \u001b[3;92mTrue\u001b[0m\u001b[1m}\u001b[0m\n", + " \u001b[1m)\u001b[0m\n", + "\u001b[1m)\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\u001b[32m2024-12-16 12:36:10.897\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mmattersim.forcefield.potential\u001b[0m:\u001b[36mfrom_checkpoint\u001b[0m:\u001b[36m891\u001b[0m - \u001b[1mLoading the pre-trained mattersim-v1.0.0-5M.pth model\u001b[0m\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:mattertune.main:The model requires inference_mode to be disabled. Setting inference_mode=False.\n", + "INFO: Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. 
Your mileage may vary.\n", + "INFO:lightning.pytorch.utilities.rank_zero:Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n", + "INFO: GPU available: True (cuda), used: True\n", + "INFO:lightning.pytorch.utilities.rank_zero:GPU available: True (cuda), used: True\n", + "INFO: TPU available: False, using: 0 TPU cores\n", + "INFO:lightning.pytorch.utilities.rank_zero:TPU available: False, using: 0 TPU cores\n", + "INFO: HPU available: False, using: 0 HPUs\n", + "INFO:lightning.pytorch.utilities.rank_zero:HPU available: False, using: 0 HPUs\n", + "INFO: Running in `fast_dev_run` mode: will run the requested loop using 1 batch(es). Logging and checkpointing is suppressed.\n", + "INFO:lightning.pytorch.utilities.rank_zero:Running in `fast_dev_run` mode: will run the requested loop using 1 batch(es). Logging and checkpointing is suppressed.\n", + "INFO:mattertune.data.xyz:Loaded 1593 atoms from ../examples/water-thermodynamics/data/water_ef.xyz\n", + "INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n", + "INFO:lightning.pytorch.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n", + "INFO: \n", + " | Name | Type | Params | Mode \n", + "----------------------------------------------------------\n", + "0 | backbone | Potential | 4.5 M | train\n", + "1 | train_metrics | FinetuneMetrics | 0 | train\n", + "2 | val_metrics | FinetuneMetrics | 0 | train\n", + "3 | test_metrics | FinetuneMetrics | 0 | train\n", + "4 | normalizers | ModuleDict | 0 | train\n", + "----------------------------------------------------------\n", + "4.5 M Trainable params\n", + "0 Non-trainable params\n", + "4.5 M Total params\n", + "18.196 Total estimated model params size (MB)\n", + "274 Modules in train mode\n", + "0 Modules in eval mode\n", + "INFO:lightning.pytorch.callbacks.model_summary:\n", + " | Name | Type | Params | Mode \n", + "----------------------------------------------------------\n", + "0 | backbone | Potential | 4.5 M | train\n", + "1 | train_metrics | FinetuneMetrics | 0 | train\n", + "2 | val_metrics | FinetuneMetrics | 0 | train\n", + "3 | test_metrics | FinetuneMetrics | 0 | train\n", + "4 | normalizers | ModuleDict | 0 | train\n", + "----------------------------------------------------------\n", + "4.5 M Trainable params\n", + "0 Non-trainable params\n", + "4.5 M Total params\n", + "18.196 Total estimated model params size (MB)\n", + "274 Modules in train mode\n", + "0 Modules in eval mode\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Epoch 0: 100%|██████████| 1/1 [00:08<00:00, 0.12it/s]" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO: `Trainer.fit` stopped: `max_steps=1` reached.\n", + "INFO:lightning.pytorch.utilities.rank_zero:`Trainer.fit` stopped: `max_steps=1` reached.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Epoch 0: 100%|██████████| 1/1 [00:08<00:00, 0.12it/s]\n" + ] + } + ], + "source": [ + "from __future__ import annotations\n", + "\n", + "import os\n", + "from pathlib import Path\n", + "\n", + "import mattertune.configs as MC\n", + "from mattertune import MatterTuner\n", + "\n", + "os.environ[\"CUDA_LAUNCH_BLOCKING\"] = \"1\"\n", + "\n", + "\n", + "def hparams():\n", + " hparams = MC.MatterTunerConfig.draft()\n", + "\n", + " # Model hparams \n", 
+ " hparams.model = MC.MatterSimBackboneConfig.draft()\n", + " hparams.model.graph_convertor = MC.MatterSimGraphConvertorConfig.draft()\n", + "\n", + " hparams.model.pretrained_model = \"MatterSim-v1.0.0-5M\"\n", + " hparams.model.ignore_gpu_batch_transform_error = True\n", + " hparams.model.optimizer = MC.AdamWConfig(lr=1.0e-4)\n", + "\n", + " hparams.model.properties = []\n", + " energy = MC.EnergyPropertyConfig(loss=MC.MAELossConfig())\n", + " hparams.model.properties.append(energy)\n", + " forces = MC.ForcesPropertyConfig(loss=MC.MAELossConfig(), conservative=True)\n", + " hparams.model.properties.append(forces)\n", + " stresses = MC.StressesPropertyConfig(loss=MC.MAELossConfig(), conservative=True)\n", + " hparams.model.properties.append(stresses)\n", + "\n", + " # Data hparams\n", + " hparams.data = MC.AutoSplitDataModuleConfig.draft()\n", + " hparams.data.dataset = MC.XYZDatasetConfig.draft()\n", + " hparams.data.dataset.src = Path(\"../examples/water-thermodynamics/data/water_ef.xyz\")\n", + " hparams.data.train_split = 0.03\n", + " hparams.data.batch_size = 2\n", + "\n", + " # Trainer hparams\n", + " hparams.trainer.additional_trainer_kwargs = {\"fast_dev_run\": True}\n", + "\n", + " hparams = hparams.finalize()\n", + " rich.print(hparams)\n", + " return hparams\n", + "\n", + "\n", + "hp = hparams()\n", + "tune_output = MatterTuner(hp).tune()\n", + "model, trainer = tune_output.model, tune_output.trainer" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "\n" + ] + } + ], + "source": [ + "property_predictor = model.property_predictor()\n", + "print(property_predictor)\n", + "\n", + "calculator = model.ase_calculator()\n", + "print(calculator)" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Atoms(symbols='H2O', pbc=True, cell=[10.0, 10.0, 10.0])\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:mattertune.wrappers.property_predictor:The model requires inference_mode to be disabled. Setting inference_mode=False.\n", + "INFO: You are running in `Trainer(barebones=True)` mode. All features that may impact raw speed have been disabled to facilitate analyzing the Trainer overhead. Specifically, the following features are deactivated:\n", + " - Checkpointing: `Trainer(enable_checkpointing=True)`\n", + " - Progress bar: `Trainer(enable_progress_bar=True)`\n", + " - Model summary: `Trainer(enable_model_summary=True)`\n", + " - Logging: `Trainer(logger=True)`, `Trainer(log_every_n_steps>0)`, `LightningModule.log(...)`, `LightningModule.log_dict(...)`\n", + " - Sanity checking: `Trainer(num_sanity_val_steps>0)`\n", + " - Development run: `Trainer(fast_dev_run=True)`\n", + " - Anomaly detection: `Trainer(detect_anomaly=True)`\n", + " - Profiling: `Trainer(profiler=...)`\n", + "INFO:lightning.pytorch.utilities.rank_zero:You are running in `Trainer(barebones=True)` mode. All features that may impact raw speed have been disabled to facilitate analyzing the Trainer overhead. 
Specifically, the following features are deactivated:\n", + " - Checkpointing: `Trainer(enable_checkpointing=True)`\n", + " - Progress bar: `Trainer(enable_progress_bar=True)`\n", + " - Model summary: `Trainer(enable_model_summary=True)`\n", + " - Logging: `Trainer(logger=True)`, `Trainer(log_every_n_steps>0)`, `LightningModule.log(...)`, `LightningModule.log_dict(...)`\n", + " - Sanity checking: `Trainer(num_sanity_val_steps>0)`\n", + " - Development run: `Trainer(fast_dev_run=True)`\n", + " - Anomaly detection: `Trainer(detect_anomaly=True)`\n", + " - Profiling: `Trainer(profiler=...)`\n", + "INFO: Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n", + "INFO:lightning.pytorch.utilities.rank_zero:Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n", + "INFO: GPU available: True (cuda), used: True\n", + "INFO:lightning.pytorch.utilities.rank_zero:GPU available: True (cuda), used: True\n", + "INFO: TPU available: False, using: 0 TPU cores\n", + "INFO:lightning.pytorch.utilities.rank_zero:TPU available: False, using: 0 TPU cores\n", + "INFO: HPU available: False, using: 0 HPUs\n", + "INFO:lightning.pytorch.utilities.rank_zero:HPU available: False, using: 0 HPUs\n", + "INFO: `Trainer(barebones=True)` started running. The progress bar is disabled so you might want to manually print the progress in your model.\n", + "INFO:lightning.pytorch.utilities.rank_zero:`Trainer(barebones=True)` started running. The progress bar is disabled so you might want to manually print the progress in your model.\n", + "INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n", + "INFO:lightning.pytorch.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n", + "/net/csefiles/coc-fung-cluster/lingyu/miniconda3/envs/mattersim-tune/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:424: The 'predict_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=63` in the `DataLoader` to improve performance.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[{'total_energy': tensor([-11.2461]), 'forces': tensor([[-0.0000, -1.0471, -3.0894],\n", + " [-0.0000, 3.0241, 0.0653],\n", + " [-0.0000, -1.9770, 3.0241]]), 'stresses': tensor([[[ 0.0000, 0.0000, 0.0000],\n", + " [ 0.0000, 0.3167, -0.4845],\n", + " [ 0.0000, -0.4845, -0.0105]]]), 'energy': tensor([-11.2461])}]\n" + ] + } + ], + "source": [ + "import ase\n", + "\n", + "# Create a test periodic system\n", + "atoms = ase.Atoms(\n", + " \"H2O\", positions=[[0, 0, 0], [0, 0, 1], [0, 1, 0]], cell=[10, 10, 10], pbc=True\n", + ")\n", + "print(atoms)\n", + "\n", + "print(property_predictor.predict([atoms], model.hparams.properties))" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:mattertune.wrappers.property_predictor:The model requires inference_mode to be disabled. 
Setting inference_mode=False.\n", + "INFO: You are running in `Trainer(barebones=True)` mode. All features that may impact raw speed have been disabled to facilitate analyzing the Trainer overhead. Specifically, the following features are deactivated:\n", + " - Checkpointing: `Trainer(enable_checkpointing=True)`\n", + " - Progress bar: `Trainer(enable_progress_bar=True)`\n", + " - Model summary: `Trainer(enable_model_summary=True)`\n", + " - Logging: `Trainer(logger=True)`, `Trainer(log_every_n_steps>0)`, `LightningModule.log(...)`, `LightningModule.log_dict(...)`\n", + " - Sanity checking: `Trainer(num_sanity_val_steps>0)`\n", + " - Development run: `Trainer(fast_dev_run=True)`\n", + " - Anomaly detection: `Trainer(detect_anomaly=True)`\n", + " - Profiling: `Trainer(profiler=...)`\n", + "INFO:lightning.pytorch.utilities.rank_zero:You are running in `Trainer(barebones=True)` mode. All features that may impact raw speed have been disabled to facilitate analyzing the Trainer overhead. Specifically, the following features are deactivated:\n", + " - Checkpointing: `Trainer(enable_checkpointing=True)`\n", + " - Progress bar: `Trainer(enable_progress_bar=True)`\n", + " - Model summary: `Trainer(enable_model_summary=True)`\n", + " - Logging: `Trainer(logger=True)`, `Trainer(log_every_n_steps>0)`, `LightningModule.log(...)`, `LightningModule.log_dict(...)`\n", + " - Sanity checking: `Trainer(num_sanity_val_steps>0)`\n", + " - Development run: `Trainer(fast_dev_run=True)`\n", + " - Anomaly detection: `Trainer(detect_anomaly=True)`\n", + " - Profiling: `Trainer(profiler=...)`\n", + "INFO: Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n", + "INFO:lightning.pytorch.utilities.rank_zero:Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n", + "INFO: GPU available: True (cuda), used: True\n", + "INFO:lightning.pytorch.utilities.rank_zero:GPU available: True (cuda), used: True\n", + "INFO: TPU available: False, using: 0 TPU cores\n", + "INFO:lightning.pytorch.utilities.rank_zero:TPU available: False, using: 0 TPU cores\n", + "INFO: HPU available: False, using: 0 HPUs\n", + "INFO:lightning.pytorch.utilities.rank_zero:HPU available: False, using: 0 HPUs\n", + "INFO: `Trainer(barebones=True)` started running. The progress bar is disabled so you might want to manually print the progress in your model.\n", + "INFO:lightning.pytorch.utilities.rank_zero:`Trainer(barebones=True)` started running. 
The progress bar is disabled so you might want to manually print the progress in your model.\n", + "INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n", + "INFO:lightning.pytorch.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Atoms(symbols='H2O', pbc=True, cell=[10.0, 10.0, 10.0])\n", + "-11.246139526367188\n" + ] + } + ], + "source": [ + "import ase\n", + "\n", + "# Create a test periodic system\n", + "atoms = ase.Atoms(\n", + " \"H2O\", positions=[[0, 0, 0], [0, 0, 1], [0, 1, 0]], cell=[10, 10, 10], pbc=True\n", + ")\n", + "print(atoms)\n", + "\n", + "# Set the calculator\n", + "atoms.calc = calculator\n", + "\n", + "# Calculate the energy\n", + "energy = atoms.get_potential_energy()\n", + "print(energy)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:mattertune.wrappers.property_predictor:The model requires inference_mode to be disabled. Setting inference_mode=False.\n", + "INFO: You are running in `Trainer(barebones=True)` mode. All features that may impact raw speed have been disabled to facilitate analyzing the Trainer overhead. Specifically, the following features are deactivated:\n", + " - Checkpointing: `Trainer(enable_checkpointing=True)`\n", + " - Progress bar: `Trainer(enable_progress_bar=True)`\n", + " - Model summary: `Trainer(enable_model_summary=True)`\n", + " - Logging: `Trainer(logger=True)`, `Trainer(log_every_n_steps>0)`, `LightningModule.log(...)`, `LightningModule.log_dict(...)`\n", + " - Sanity checking: `Trainer(num_sanity_val_steps>0)`\n", + " - Development run: `Trainer(fast_dev_run=True)`\n", + " - Anomaly detection: `Trainer(detect_anomaly=True)`\n", + " - Profiling: `Trainer(profiler=...)`\n", + "INFO:lightning.pytorch.utilities.rank_zero:You are running in `Trainer(barebones=True)` mode. All features that may impact raw speed have been disabled to facilitate analyzing the Trainer overhead. Specifically, the following features are deactivated:\n", + " - Checkpointing: `Trainer(enable_checkpointing=True)`\n", + " - Progress bar: `Trainer(enable_progress_bar=True)`\n", + " - Model summary: `Trainer(enable_model_summary=True)`\n", + " - Logging: `Trainer(logger=True)`, `Trainer(log_every_n_steps>0)`, `LightningModule.log(...)`, `LightningModule.log_dict(...)`\n", + " - Sanity checking: `Trainer(num_sanity_val_steps>0)`\n", + " - Development run: `Trainer(fast_dev_run=True)`\n", + " - Anomaly detection: `Trainer(detect_anomaly=True)`\n", + " - Profiling: `Trainer(profiler=...)`\n", + "INFO: Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n", + "INFO:lightning.pytorch.utilities.rank_zero:Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. 
Your mileage may vary.\n", + "INFO: GPU available: True (cuda), used: True\n", + "INFO:lightning.pytorch.utilities.rank_zero:GPU available: True (cuda), used: True\n", + "INFO: TPU available: False, using: 0 TPU cores\n", + "INFO:lightning.pytorch.utilities.rank_zero:TPU available: False, using: 0 TPU cores\n", + "INFO: HPU available: False, using: 0 HPUs\n", + "INFO:lightning.pytorch.utilities.rank_zero:HPU available: False, using: 0 HPUs\n", + "INFO: `Trainer(barebones=True)` started running. The progress bar is disabled so you might want to manually print the progress in your model.\n", + "INFO:lightning.pytorch.utilities.rank_zero:`Trainer(barebones=True)` started running. The progress bar is disabled so you might want to manually print the progress in your model.\n", + "INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n", + "INFO:lightning.pytorch.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ase.Atoms 1 energy: tensor([-11.2461])\n", + "ase.Atoms 1 forces: tensor([[-0.0000, -1.0471, -3.0894],\n", + " [-0.0000, 3.0241, 0.0653],\n", + " [-0.0000, -1.9770, 3.0241]])\n", + "ase.Atoms 2 energy: tensor([-11.2461])\n", + "ase.Atoms 2 forces: tensor([[-0.0000, -1.0471, -3.0894],\n", + " [-0.0000, 3.0241, 0.0653],\n", + " [-0.0000, -1.9770, 3.0241]])\n" + ] + } + ], + "source": [ + "property_predictor = model.property_predictor()\n", + "atoms_1 = ase.Atoms(\n", + " \"H2O\", positions=[[0, 0, 0], [0, 0, 1], [0, 1, 0]], cell=[10, 10, 10], pbc=True\n", + ")\n", + "atoms_2 = ase.Atoms(\n", + " \"H2O\", positions=[[0, 0, 0], [0, 0, 1], [0, 1, 0]], cell=[10, 10, 10], pbc=True\n", + ")\n", + "atoms = [atoms_1, atoms_2]\n", + "predictions = property_predictor.predict(atoms, [\"energy\", \"forces\"])\n", + "print(\"ase.Atoms 1 energy:\", predictions[0][\"energy\"])\n", + "print(\"ase.Atoms 1 forces:\", predictions[0][\"forces\"])\n", + "print(\"ase.Atoms 2 energy:\", predictions[1][\"energy\"])\n", + "print(\"ase.Atoms 2 forces:\", predictions[1][\"forces\"])" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "mattersim-tune", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.16" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/notebooks/orb-omat.ipynb b/notebooks/orb-omat.ipynb index 2216a59..046cabd 100644 --- a/notebooks/orb-omat.ipynb +++ b/notebooks/orb-omat.ipynb @@ -776,8 +776,8 @@ "\n", "\n", "hp = hparams()\n", - "model = MatterTuner(hp).tune()\n", - "model" + "tune_output = MatterTuner(hp).tune()\n", + "model, trainer = tune_output.model, tune_output.trainer" ] }, { diff --git a/src/mattertune/backbones/mattersim/model.py b/src/mattertune/backbones/mattersim/model.py index 4dacad5..2189383 100644 --- a/src/mattertune/backbones/mattersim/model.py +++ b/src/mattertune/backbones/mattersim/model.py @@ -145,7 +145,7 @@ def create_model(self): self.energy_prop_name = "energy" self.forces_prop_name = "forces" - self.stress_prop_name = "stress" + self.stress_prop_name = "stresses" self.calc_forces = False self.calc_stress = False for prop in self.hparams.properties: @@ -200,7 +200,7 @@ def model_forward(self, batch: Batch, return_backbone_output: bool = False): if self.calc_forces: output_pred[self.forces_prop_name] = 
output_pred.get("forces")
         if self.calc_stress:
-            output_pred[self.stress_prop_name] = output_pred.get("stress")
+            output_pred[self.stress_prop_name] = output_pred.get("stresses")
         pred: ModelOutput = {"predicted_properties": output_pred}
         if return_backbone_output:
             raise NotImplementedError(
@@ -256,10 +256,13 @@ def atoms_to_data(self, atoms, has_labels):
             energy = labels.get(self.energy_prop_name, None)
             forces = labels.get(self.forces_prop_name, None)
             stress = labels.get(self.stress_prop_name, None)
-        graph = self.graph_convertor.convert(atoms, energy, forces, stress)
+        graph = self.graph_convertor.convert(atoms)
         graph.atomic_numbers = torch.tensor(
             atoms.get_atomic_numbers(), dtype=torch.long
         )
+        setattr(graph, self.energy_prop_name, energy)
+        setattr(graph, self.forces_prop_name, forces)
+        setattr(graph, self.stress_prop_name, stress)
         return graph
 
 @override
diff --git a/src/mattertune/configs/backbones/jmp/model/CutoffsConfig.schema.json b/src/mattertune/configs/backbones/jmp/model/CutoffsConfig.schema.json
deleted file mode 100644
index 41ab555..0000000
--- a/src/mattertune/configs/backbones/jmp/model/CutoffsConfig.schema.json
+++ /dev/null
@@ -1,28 +0,0 @@
-{
-  "properties": {
-    "aeaint": {
-      "title": "Aeaint",
-      "type": "number"
-    },
-    "aint": {
-      "title": "Aint",
-      "type": "number"
-    },
-    "main": {
-      "title": "Main",
-      "type": "number"
-    },
-    "qint": {
-      "title": "Qint",
-      "type": "number"
-    }
-  },
-  "required": [
-    "main",
-    "aeaint",
-    "qint",
-    "aint"
-  ],
-  "title": "CutoffsConfig",
-  "type": "object"
-}
\ No newline at end of file
diff --git a/src/mattertune/configs/backbones/jmp/model/__init__.py b/src/mattertune/configs/backbones/jmp/model/__init__.py
index 87e9351..89fbe01 100644
--- a/src/mattertune/configs/backbones/jmp/model/__init__.py
+++ b/src/mattertune/configs/backbones/jmp/model/__init__.py
@@ -1,21 +1,25 @@
-from __future__ import annotations
-
 __codegen__ = True
 
 from mattertune.backbones.jmp.model import CutoffsConfig as CutoffsConfig
-from mattertune.backbones.jmp.model import (
-    FinetuneModuleBaseConfig as FinetuneModuleBaseConfig,
-)
+from mattertune.backbones.jmp.model import FinetuneModuleBaseConfig as FinetuneModuleBaseConfig
 from mattertune.backbones.jmp.model import JMPBackboneConfig as JMPBackboneConfig
-from mattertune.backbones.jmp.model import (
-    JMPGraphComputerConfig as JMPGraphComputerConfig,
-)
+from mattertune.backbones.jmp.model import JMPGraphComputerConfig as JMPGraphComputerConfig
 from mattertune.backbones.jmp.model import MaxNeighborsConfig as MaxNeighborsConfig
+
+from mattertune.backbones.jmp.model import backbone_registry as backbone_registry
+
 __all__ = [
     "CutoffsConfig",
     "FinetuneModuleBaseConfig",
     "JMPBackboneConfig",
     "JMPGraphComputerConfig",
     "MaxNeighborsConfig",
+    "backbone_registry",
 ]