A PyTorch implementation of ALIGNet, originally developed by Prof. Rana Hanocka.
ALIGNet is a partial-shape agnostic deep-learning model that "aligns" source images to randomly masked target images, as shown below. It learns both data-driven priors that predict the partially masked target shapes and warp deformation fields that transform source images to resemble the features of the fully recovered target shapes.
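To illustrate the deformation-field idea, here is a minimal, framework-agnostic sketch: a predicted per-pixel displacement field resamples the source image, which is what ALIGNet's warp layer does differentiably (the real model uses a learned grid and bilinear sampling; this toy uses nearest-neighbour sampling and is not the repository's code).

```python
import numpy as np

def warp_image(src, flow):
    """Warp src (H, W) by a per-pixel displacement field flow (H, W, 2)
    using nearest-neighbour sampling with a replicated border; a toy
    stand-in for the differentiable grid sampling in the warp layer."""
    h, w = src.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # sample coordinates = identity grid + predicted displacement
    sy = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, w - 1)
    return src[sy, sx]

img = np.arange(16, dtype=float).reshape(4, 4)
# an all-zero flow is the identity warp
identity = warp_image(img, np.zeros((4, 4, 2)))
# a constant flow of +1 row shifts content up by one row
shifted = warp_image(img, np.full((4, 4, 2), [1.0, 0.0]))
```

The zero-flow case leaving the image unchanged is exactly the regularization target ALIGNet starts from: deformations are learned as offsets from the identity grid.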
For example, given a source 3d mesh structure (left column) and a target point cloud with missing data points (middle column), ALIGNet warps the source mesh into the target estimate (right column), which, in plain terms, can be seen as a "mixture" of the two input shapes.
To get started, move to the ALIGNet or ALIGNet_3d directory:
cd ALIGNet
or
cd ALIGNet_3d
Install dependencies by running:
conda env create --file alignet.yml
conda activate alignet
-Define the URL for the data in init_config => data => url_data (or use the predefined URL in the default init_config).
-Run python main.py data -d to download the data from the URL source.
-Define all the necessary parameters in init_config => data (read the description for each parameter).
-Run python main.py data to augment the data, split it into train/validation sets, and store it at the defined filepath.
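The augmentation and split step can be sketched as follows; the function names, the rectangular mask shape, and the valid_frac/seed defaults are illustrative stand-ins, not ALIGNet's actual parameters (those live in init_config => data):

```python
import random

def mask_rectangle(grid, top, left, height, width):
    """Zero out a rectangular region of a 2-D list; a stand-in for the
    random target masking applied during augmentation."""
    out = [row[:] for row in grid]
    for i in range(top, top + height):
        for j in range(left, left + width):
            out[i][j] = 0
    return out

def split_dataset(samples, valid_frac=0.2, seed=0):
    """Shuffle and split samples into (train, valid) lists."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_frac)
    return shuffled[n_valid:], shuffled[:n_valid]

train, valid = split_dataset(range(100))
masked = mask_rectangle([[1] * 4 for _ in range(4)], top=1, left=1, height=2, width=2)
```

Masking targets at augmentation time is what makes the model partial-shape agnostic: it never assumes the target is fully observed.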
-Define all the necessary parameters in init_config => model (read the description for each parameter).
-Run python main.py new to create a new model, train it as defined in init_config => train, and save it at the defined filepath.
-Define all the necessary parameters in init_config => train (read the description for each parameter).
-Run python main.py train to train the model according to the configuration.
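Structurally, the train subcommand runs a standard epoch/batch loop driven by the config. The skeleton below is a generic sketch of that shape, not the repository's code; step_fn stands in for one optimizer step returning a loss:

```python
def train_model(step_fn, loader, epochs):
    """Minimal epoch/batch loop mirroring what a `train` subcommand
    typically does: iterate batches, accumulate the per-batch loss,
    and record the per-epoch average."""
    history = []
    for epoch in range(epochs):
        total = 0.0
        for batch in loader:
            total += step_fn(batch)
        history.append(total / len(loader))
    return history

# toy stand-in: the "loss" is just the batch mean, so history is constant
losses = train_model(lambda b: sum(b) / len(b), [[1, 2], [3, 5]], epochs=2)
```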
-Define all the necessary parameters in init_config => valid (read the description for each parameter).
-Run python main.py valid to run validation as specified in init_config.yaml.
-Check [model_path]/outputs/images/ for results (each run creates .png files named with the next integer in sequence).
-If you are running on multiple GPUs, you can train the model in parallel: set the num_gpu parameter in init_config to the number of GPUs you want to use, and run the model.
-Distributed data parallel training of a light model with small batches will be slower than a single GPU, because copying each batch of tensors to multiple devices adds significant overhead.
-Use parallel mode only if you have a significantly heavy network, a very large dataset, very large batch inputs, or if a single device runs out of memory during execution.
-The 2d model generally does well on a single GPU, so data parallel mode is usually unnecessary for it.
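The trade-off above can be made concrete with a crude cost model: per-batch compute divides across GPUs, but every batch pays a fixed copy overhead. All the numbers below are illustrative, not measurements of ALIGNet:

```python
def parallel_speedup(compute_s, copy_overhead_s, num_gpus):
    """Back-of-envelope model of data-parallel training: returns
    single-GPU time over multi-GPU time, so values > 1 mean the
    parallel run wins and values < 1 mean it is actually slower."""
    single = compute_s
    multi = compute_s / num_gpus + copy_overhead_s
    return single / multi

# light model: overhead dominates, so 4 GPUs are slower than one
light = parallel_speedup(compute_s=0.01, copy_overhead_s=0.02, num_gpus=4)
# heavy model: compute dominates, so 4 GPUs pay off
heavy = parallel_speedup(compute_s=1.0, copy_overhead_s=0.02, num_gpus=4)
```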
@article{hanocka2018alignet,
author = {Hanocka, Rana and Fish, Noa and Wang, Zhenhua and Giryes, Raja and Fleishman, Shachar and Cohen-Or, Daniel},
title = {ALIGNet: Partial-Shape Agnostic Alignment via Unsupervised Learning},
journal = {ACM Trans. Graph.},
year = {2018}}