Source code accompanying the paper "Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs" by M. Tatarchenko, A. Dosovitskiy and T. Brox. The implementation is based on Caffe and extends the basic framework with layers for octree-specific operations.
For compilation instructions, please refer to the official or unofficial CMake build guidelines for Caffe; the Makefile build is not supported.
Octrees are stored as text-based serialized std::map containers. The provided utility (tools/ogn_converter) can be used to convert binvox voxel grids into octrees. Three of the datasets used in the paper (ShapeNet-cars, FAUST and BlendSwap) can be downloaded from here. For ShapeNet-all, we used the voxelizations (ftp://cs.stanford.edu/cs/cvgl/ShapeNetVox32.tgz) and the renderings (ftp://cs.stanford.edu/cs/cvgl/ShapeNetRendering.tgz) provided by Choy et al. for their 3D-R2N2 framework.
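The idea behind the voxel-grid-to-octree conversion can be sketched as follows. This is only a conceptual illustration: the function name, the (x, y, z, level) keying and the plain-dict storage are assumptions for clarity, not the serialization format ogn_converter actually emits.

```python
# Conceptual sketch: collapse uniform blocks of a cubic boolean voxel
# grid into octree cells. NOT the repo's actual key/serialization
# format -- the (x, y, z, level) keys and dict storage are assumptions.

def build_octree(grid, x=0, y=0, z=0, size=None, level=0, cells=None):
    """Store a uniform block as one cell; otherwise split it into
    eight octants and recurse one level deeper."""
    if size is None:
        size = len(grid)          # grid is assumed cubic, side = 2^k
    if cells is None:
        cells = {}
    block = [grid[i][j][k]
             for i in range(x, x + size)
             for j in range(y, y + size)
             for k in range(z, z + size)]
    if all(block) or not any(block) or size == 1:
        cells[(x, y, z, level)] = block[0]   # uniform: one cell suffices
    else:
        h = size // 2
        for dx in (0, h):
            for dy in (0, h):
                for dz in (0, h):
                    build_octree(grid, x + dx, y + dy, z + dz,
                                 h, level + 1, cells)
    return cells

# Example: a 4x4x4 grid with a single filled corner voxel produces a
# deep cell for the occupied corner and coarse cells elsewhere.
grid = [[[False] * 4 for _ in range(4)] for _ in range(4)]
grid[0][0][0] = True
cells = build_octree(grid)
```

Only mixed blocks are subdivided, which is why the representation stays compact for surfaces that occupy a small fraction of the volume.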
Example models can be downloaded from here. Run one of the scripts (train_known.sh, train_pred.sh or test.sh) from the corresponding experiment folder; the caffe executable must be in your $PATH.
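Putting caffe on the PATH and launching a script might look like the following sketch; CAFFE_ROOT and the experiment folder name are placeholders for your local checkout, not paths shipped with the repository.

```shell
# Sketch of the intended workflow. CAFFE_ROOT and the experiment
# folder name below are assumptions about your local layout.
CAFFE_ROOT="${CAFFE_ROOT:-$HOME/ogn}"        # assumed checkout location
export PATH="$CAFFE_ROOT/build/tools:$PATH"  # makes `caffe` visible

# Run a training script only if the (hypothetical) folder exists.
if [ -d "$CAFFE_ROOT/experiments/shapenet_cars" ]; then
    cd "$CAFFE_ROOT/experiments/shapenet_cars"
    ./train_known.sh
else
    echo "experiment folder not found; adjust CAFFE_ROOT"
fi
```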
There is a Python script for visualizing .ot files in Blender. To use it, run:
$ blender -P $CAFFE_ROOT/python/rendering/render_model.py your_model.ot
All code is provided for research purposes only and without any warranty. Any commercial use requires our consent. When using the code in your research work, please cite the following paper:
@InProceedings{ogn2017,
  author    = "M. Tatarchenko and A. Dosovitskiy and T. Brox",
  title     = "Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs",
  booktitle = "IEEE International Conference on Computer Vision (ICCV)",
  year      = "2017",
  url       = "http://lmb.informatik.uni-freiburg.de/Publications/2017/TDB17b"
}