Home
The easiest way to get GRETEL up and running with all the dependencies is to pull the development Docker image available on Docker Hub:
docker pull gretelxai/gretel:dev-latest
or
docker pull gretelxai/gretel:gpu-latest
Then run the Docker image:
docker run --rm -it gretelxai/gretel:tag-latest
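Here, tag-latest is a placeholder for one of the tags pulled above; for example, to run the CPU development image:
docker run --rm -it gretelxai/gretel:dev-latest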
For a full development setup, we recommend first cloning the GRETEL repository and then binding one of the Docker images to the local copy of the repository:
docker run --rm -it -v full_path_to_local_repo_folder:/home/scientist/gretel gretelxai/gretel:tag-latest
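For instance, assuming the repository was cloned to /home/user/gretel (a hypothetical path; substitute the location of your own clone), the command would look like:
docker run --rm -it -v /home/user/gretel:/home/scientist/gretel gretelxai/gretel:dev-latest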
As an alternative to the quick Docker-based solution, we suggest creating your own local environment and using the latest version available in the repository to access GRETEL's full potential. You can proceed by creating a conda environment as follows (see also the script launchers/env_install.sh):
conda update -n base -c defaults conda -y
conda create -n GRTL python=3.9 -y
Activate the newly created environment:
conda activate GRTL
First install PyTorch with pip, then the remaining dependencies (MPS support is not yet tested; CUDA support is tested, though not extensively; thus the CPU build remains the safest choice):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install picologging==0.9.2 exmol gensim joblib jsonpickle matplotlib networkx numpy pandas rdkit scikit-learn scipy selfies sqlalchemy black typing-extensions torch_geometric==2.4.0 dgl IPython ipykernel flufl.lock jsonc-parser
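As an optional sanity check (a minimal sketch, assuming only the packages installed above), you can verify that PyTorch and PyTorch Geometric import correctly inside the activated environment:
python -c "import torch, torch_geometric; print(torch.__version__, torch_geometric.__version__)"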
The initialization mechanism in GRETEL 2.0 was completely refactored: a slightly different logic is now in place, providing a more robust and flexible mechanism.
Thus, new configuration files are needed. Although they are not yet final, you can take a look at the config folder for examples.
To run a configuration (after activating the environment described above):
python main.py <CONFIG_FILE>
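For example, assuming a configuration file named config/my_experiment.jsonc (a hypothetical name; the actual files live in the config folder mentioned above), the call would be:
python main.py config/my_experiment.jsonc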