This project is a fork of Infinigen: Infinite Photorealistic Worlds using Procedural Generation, an open-source research project developed by researchers at Princeton University.
This fork extends Infinigen with the ability to generate individual assets for research purposes. These changes support experiments in using synthetic data to train computer vision models.
- New file `generate_two_assets.py`, located in `infinigen/infinigen_examples/`, that generates 3 renders in total (3 renders, 2 assets):
  - Two of these renders contain the same asset, viewed from different camera angles
  - The third render contains a completely different asset
  - The seed of the second (unique) asset is the seed of the first asset + 1
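The seed relationship above can be sketched as follows. This is an illustrative sketch only; the variable names are hypothetical and not taken from `generate_two_assets.py`:

```python
# Hypothetical sketch of the seed logic described above
# (names are illustrative, not from the actual script).
initial_seed = 500                     # value passed via --initial-seed
first_asset_seed = initial_seed        # renders 1 and 2: same asset, two camera angles
second_asset_seed = initial_seed + 1   # render 3: a different asset

print(first_asset_seed, second_asset_seed)
```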
To run this file, first follow the original Installation Instructions for the Python Module (the default option). Then use the following template to run the file from the command line:

```bash
python -m infinigen_examples.generate_two_assets -f {FactoryName} -n 2 --save_blend --initial-seed {seed}
```

For example, to run this command for Pinecones with a seed of 500:

```bash
python -m infinigen_examples.generate_two_assets -f PineconeFactory -n 2 --save_blend --initial-seed 500
```
To view the original project, please visit Infinigen's website.

Below is the original README.md.
If you use Infinigen in your work, please cite our academic paper:
Alexander Raistrick*, Lahav Lipson*, Zeyu Ma* (*equal contribution, alphabetical order)
Lingjie Mei, Mingzhe Wang, Yiming Zuo, Karhan Kayan, Hongyu Wen, Beining Han,
Yihan Wang, Alejandro Newell, Hei Law, Ankit Goyal, Kaiyu Yang, Jia Deng
Conference on Computer Vision and Pattern Recognition (CVPR) 2023
```bibtex
@inproceedings{infinigen2023infinite,
  title={Infinite Photorealistic Worlds Using Procedural Generation},
  author={Raistrick, Alexander and Lipson, Lahav and Ma, Zeyu and Mei, Lingjie and Wang, Mingzhe and Zuo, Yiming and Kayan, Karhan and Wen, Hongyu and Han, Beining and Wang, Yihan and Newell, Alejandro and Law, Hei and Goyal, Ankit and Yang, Kaiyu and Deng, Jia},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={12630--12641},
  year={2023}
}
```
First, follow our Installation Instructions.
Next, see our "Hello World" example to generate an image & ground truth similar to those shown below.
- Installation Guide
- "Hello World": Generate your first Infinigen scene
- Configuring Infinigen
- Downloading pre-generated data
- Extended ground-truth
- Generating individual assets
- Implementing new materials & assets
- Generating fluid simulations
Please see our project roadmap and follow us at https://twitter.com/PrincetonVL for updates.
We welcome contributions! You can contribute in many ways:
- Contribute code to this repository - We welcome code contributions. More guidelines coming soon.
- Contribute procedural generators - `infinigen/nodes/node_transpiler/dev_script.py` provides tools to convert artist-friendly Blender Nodes into Python code. Tutorials and guidelines coming soon.
- Contribute pre-generated data - Anyone can contribute their computing power to create data and share it with the community. Please stay tuned for a repository of pre-generated data.
Please post on this repository's GitHub Issues page for help. Please run your command with `--debug`, and let us know:
- What is your computing setup, including OS version, CPU, RAM, GPU(s) and any drivers?
- What version of the code are you using (link a commit hash), and what modifications, if any, have you made (new configs, code edits)?
- What exact command did you run?
- What were the output logs of the command you ran?
- If using `manage_jobs`, look in `outputs/MYJOB/MYSEED/logs/` to find the right one.
- What was the exact Python error and stacktrace, if applicable?
Infinigen wouldn't be possible without the fantastic work of the Blender Foundation and its open-source contributors. Infinigen uses many open-source projects, with special thanks to Land-Lab, BlenderProc, Blender-FLIP-Fluids and Blender-Differential-Growth.
We thank Thomas Kole for providing procedural clouds and Pedro P. Lopes for the autoexposure nodegraph.
We learned tremendously from online tutorials of Andrew Price, Artisans of Vaul, Bad Normals, Blender Tutorial Channel, blenderbitesize, Blendini, Bradley Animation, CGCookie, CGRogue, Creative Shrimp, CrowdRender, Dr. Blender, HEY Pictures, Ian Hubert, Kev Binge, Lance Phan, MaxEdge, Mr. Cheebs, PixelicaCG, Polyfjord, Robbie Tilton, Ryan King Art, Sam Bowman and yogigraphics. These tutorials provided procedural generators for our early experimentation and served as inspiration for our own implementations in the official release of Infinigen. They are acknowledged in file header comments where applicable.
Infinigen has evolved significantly since the version described in our CVPR paper. It now features some procedural code obtained from the internet under CC-0 licenses, which is marked with code comments where applicable - no such code was present in the system for the CVPR version.