- Clone this repository:

```bash
git clone git@github.com:polkalian/InterNav.git
```
- Install `xorg` if the machine does not have it.

Note: This codebase should be executed on a GPU, so we need an `xserver` for GPU rendering.

```bash
# Need sudo permission to install xserver
sudo apt-get install xorg
```

Then configure the xserver for GPU rendering:

```bash
sudo python startx.py
```
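If you want to verify that the X server is up and GPU rendering works before moving on, a check along these lines can help (`glxinfo` comes from the `mesa-utils` package, and the display number `:0` is an assumption that may differ depending on how `startx.py` was launched):

```bash
# Check that the NVIDIA driver sees the GPU
nvidia-smi

# Check that the X server accepts connections and exposes an OpenGL renderer
DISPLAY=:0 glxinfo | grep "OpenGL renderer"
```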
- Create a conda environment and install the required packages.

Note: The `python` version needs to be 3.10 or above, since `python 2.x` may have issues with some required packages.

```bash
conda create -n camp python=3.10
conda activate camp
pip install -r requirement.txt
```
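As a quick sanity check that the environment is usable (this assumes PyTorch is among the installed requirements, as AllenAct depends on it), you can verify that the GPU is visible from Python:

```bash
# Activate the environment if it is not already active
conda activate camp

# Print the installed torch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```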
Our work is developed based on the physics-enabled, visually rich AI2-THOR environment and the AllenAct framework.
Download our dataset here and unzip it into the `datasets` folder.
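For example, assuming the downloaded archive is named `dataset.zip` (the actual filename may differ):

```bash
# Create the target folder and extract the archive into it
mkdir -p datasets
unzip dataset.zip -d datasets/
```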
Before running training or inference you'll first have to add the InterNav directory to your `PYTHONPATH` (so that `python` and AllenAct know where to look for various modules). To do this you can run the following:

```bash
cd YOUR/PATH/TO/InterNav
export PYTHONPATH=$PYTHONPATH:$PWD
```
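This only affects the current shell session; if you want the setting to persist, one option is to append the export to your shell startup file (the path below is a placeholder for your clone location):

```bash
# Make the PYTHONPATH change persistent across shells (adjust the path to your clone)
echo 'export PYTHONPATH=$PYTHONPATH:/YOUR/PATH/TO/InterNav' >> ~/.bashrc
```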
To train a CaMP model, run:

```bash
allenact -s 23456 -o out -b . configs/proc10k_ObsNav/obstacles_nav_rgbd_proc.py
```
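In the AllenAct CLI, `-s` sets the random seed, `-o` the output directory for checkpoints and logs, and `-b` the base directory relative to which the experiment config path is resolved. Training metrics are typically written as TensorBoard event files under the output directory, so progress can be monitored with:

```bash
# TensorBoard finds the event files recursively under the -o output directory
tensorboard --logdir out
```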
The PPO+intent model mentioned in the paper is also available in `ivn_proc/models_baseline.py` (corresponding to `tasks.py`).

To evaluate a trained model, run:

```bash
allenact -s 23456 -b . configs/proc10k_ObsNav/obstacles_nav_rgbd_proc.py -c PATH/TO/YOUR/MODEL --eval
```
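`PATH/TO/YOUR/MODEL` should point at a `.pt` checkpoint produced by the training run above; AllenAct saves checkpoints under the `-o` output directory. For example (the checkpoint path below is a placeholder):

```bash
# Hypothetical checkpoint path; substitute the .pt file written during your training run
allenact -s 23456 -b . configs/proc10k_ObsNav/obstacles_nav_rgbd_proc.py \
    -c out/checkpoints/YOUR_CHECKPOINT.pt --eval
```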
If you find this project useful in your research, please consider citing our paper:
```bibtex
@inproceedings{wang2023CaMP,
  author = {Wang, Xiaohan and Liu, Yuehu and Song, Xinhang and Wang, Beibei and Jiang, Shuqiang},
  booktitle = {Neurips},
  title = {CaMP: Causal Multi-policy Planning for Interactive Navigation in Multi-room Scenes},
  url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/333581887bf483296118a97773cab0c1-Paper-Conference.pdf},
  year = {2023}
}
```