Commit
Merge branch 'og-develop' into save-restore-ag
cgokmen authored Oct 26, 2023
2 parents 14340f1 + 83cef8c commit 2feb290
Showing 14 changed files with 548 additions and 186 deletions.
57 changes: 48 additions & 9 deletions .github/workflows/build-push-containers.yml
@@ -10,28 +10,67 @@ on:

jobs:
docker:
runs-on: [self-hosted, linux, gpu]
runs-on: ubuntu-latest
steps:
-
name: Check disk space
run: df . -h
-
name: Free disk space
run: |
sudo docker rmi $(docker image ls -aq) >/dev/null 2>&1 || true
sudo rm -rf \
/usr/share/dotnet /usr/local/lib/android /opt/ghc \
/usr/local/share/powershell /usr/share/swift /usr/local/.ghcup \
/usr/lib/jvm || true
echo "some directories deleted"
sudo apt install aptitude -y >/dev/null 2>&1
sudo aptitude purge aria2 ansible azure-cli shellcheck rpm xorriso zsync \
esl-erlang firefox gfortran-8 gfortran-9 google-chrome-stable \
google-cloud-sdk imagemagick \
libmagickcore-dev libmagickwand-dev libmagic-dev ant ant-optional kubectl \
mercurial apt-transport-https mono-complete libmysqlclient \
unixodbc-dev yarn chrpath libssl-dev libxft-dev \
libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev \
snmp pollinate libpq-dev postgresql-client powershell ruby-full \
sphinxsearch subversion mongodb-org azure-cli microsoft-edge-stable \
-y -f >/dev/null 2>&1
sudo aptitude purge google-cloud-sdk -f -y >/dev/null 2>&1
sudo aptitude purge microsoft-edge-stable -f -y >/dev/null 2>&1 || true
sudo apt purge microsoft-edge-stable -f -y >/dev/null 2>&1 || true
sudo aptitude purge '~n ^mysql' -f -y >/dev/null 2>&1
sudo aptitude purge '~n ^php' -f -y >/dev/null 2>&1
sudo aptitude purge '~n ^dotnet' -f -y >/dev/null 2>&1
sudo apt-get autoremove -y >/dev/null 2>&1
sudo apt-get autoclean -y >/dev/null 2>&1
echo "some packages purged"
-
name: Check disk space
run: |
df . -h
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Login to NVCR
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: nvcr.io
username: ${{ secrets.NVCR_USERNAME }}
password: ${{ secrets.NVCR_PASSWORD }}
-
name: Login to Docker Hub
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
-
name: Metadata for dev Image
id: meta-dev
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
images: |
stanfordvl/omnigibson-dev
@@ -41,7 +80,7 @@ jobs:
-
name: Metadata for prod Image
id: meta-prod
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
images: |
stanfordvl/omnigibson
@@ -50,7 +89,7 @@
type=semver,pattern={{version}}
-
name: Build and push dev image
uses: docker/build-push-action@v4
uses: docker/build-push-action@v5
with:
push: true
tags: ${{ steps.meta-dev.outputs.tags }}
@@ -60,11 +99,11 @@ jobs:
cache-to: type=gha,mode=max
-
name: Build and push prod image
uses: docker/build-push-action@v4
uses: docker/build-push-action@v5
with:
push: true
tags: ${{ steps.meta-prod.outputs.tags }}
labels: ${{ steps.meta-prod.outputs.labels }}
file: docker/prod.Dockerfile
cache-from: type=gha
cache-to: type=gha,mode=max
cache-to: type=gha,mode=max
77 changes: 0 additions & 77 deletions .github/workflows/docs.yml

This file was deleted.

21 changes: 1 addition & 20 deletions .github/workflows/tests.yml
@@ -8,26 +8,7 @@ concurrency:

jobs:
test:
runs-on: [self-hosted, linux, gpu]
container:
image: stanfordvl/omnigibson-dev:latest
options: --gpus=all --privileged --user=root
env:
DISPLAY: ""
OMNIGIBSON_HEADLESS: 1
volumes:
- /scr/omni-data/datasets:/data
- /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json
- /usr/share/vulkan/icd.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json
- /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json
- /scr/omni-data/isaac-sim/cache/ov:/root/.cache/ov:rw
- /scr/omni-data/isaac-sim/cache/pip:/root/.cache/pip:rw
- /scr/omni-data/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw
- /scr/omni-data/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw
- /scr/omni-data/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw
- /scr/omni-data/isaac-sim/config:/root/.nvidia-omniverse/config:rw
- /scr/omni-data/isaac-sim/data:/root/.local/share/ov/data:rw
- /scr/omni-data/isaac-sim/documents:/root/Documents:rw
runs-on: [self-hosted, linux, gpu, dataset-enabled]

defaults:
run:
2 changes: 1 addition & 1 deletion README.md
@@ -22,7 +22,7 @@
* 🤖 Mobile Manipulator Robots with Modular ⚙️ Controllers
* 🌎 OpenAI Gym Interface

Check out [**`OmniGibson`**'s documentation](https://stanfordvl.github.io/OmniGibson/getting_started/installation.html) to get started!
Check out [**`OmniGibson`**'s documentation](https://behavior.stanford.edu/omnigibson/getting_started/installation.html) to get started!

### Citation
If you use **`OmniGibson`** or its assets and models, please cite:
8 changes: 4 additions & 4 deletions docker/dev.Dockerfile
@@ -1,4 +1,4 @@
FROM nvcr.io/nvidia/isaac-sim:2022.2.0
FROM nvcr.io/nvidia/isaac-sim:2023.1.0

# Set up all the prerequisites.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
@@ -19,7 +19,7 @@ ENV OMNIGIBSON_KEY_PATH /data/omnigibson.key
# Install Mamba (light conda alternative)
RUN curl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj -C / bin/micromamba
ENV MAMBA_ROOT_PREFIX /micromamba
RUN micromamba create -n omnigibson -c conda-forge python=3.7
RUN micromamba create -n omnigibson -c conda-forge python=3.10
RUN micromamba shell init --shell=bash --prefix=/micromamba

# Make sure isaac gets properly sourced every time omnigibson gets called
@@ -45,8 +45,8 @@ RUN cd /ompl/build/Release && \
micromamba run -n omnigibson cmake ../.. \
-DCMAKE_INSTALL_PREFIX="$CONDA_PREFIX" \
-DBOOST_ROOT="$CONDA_PREFIX" \
-DPYTHON_EXEC=/micromamba/envs/omnigibson/bin/python3.7 \
-DPYTHONPATH=/micromamba/envs/omnigibson/lib/python3.7/site-packages && \
-DPYTHON_EXEC=/micromamba/envs/omnigibson/bin/python3.10 \
-DPYTHONPATH=/micromamba/envs/omnigibson/lib/python3.10/site-packages && \
micromamba run -n omnigibson make -j 4 update_bindings && \
micromamba run -n omnigibson make -j 4 && \
cd py-bindings && \
52 changes: 0 additions & 52 deletions docker/run_docker.sh
@@ -26,55 +26,6 @@ do
esac
done

ICD_PATH_1="/usr/share/vulkan/icd.d/nvidia_icd.json"
ICD_PATH_2="/etc/vulkan/icd.d/nvidia_icd.json"
LAYERS_PATH_1="/usr/share/vulkan/icd.d/nvidia_layers.json"
LAYERS_PATH_2="/usr/share/vulkan/implicit_layer.d/nvidia_layers.json"
LAYERS_PATH_3="/etc/vulkan/implicit_layer.d/nvidia_layers.json"
EGL_VENDOR_PATH="/usr/share/glvnd/egl_vendor.d/10_nvidia.json"

# Find the ICD file
if [ -e "$ICD_PATH_1" ]; then
ICD_PATH=$ICD_PATH_1
elif [ -e "$ICD_PATH_2" ]; then
ICD_PATH=$ICD_PATH_2
else
echo "Missing nvidia_icd.json file.";
echo "Typical paths:";
echo "- /usr/share/vulkan/icd.d/nvidia_icd.json or";
echo "- /etc/vulkan/icd.d/nvidia_icd.json";
echo "You can google nvidia_icd.json for your distro to find the correct path.";
echo "Consider updating your driver to 525 if you cannot find the file.";
echo "To continue update the ICD_PATH_1 at the top of the run_docker.sh file and retry";
exit;
fi

# Find the layers file
if [ -e "$LAYERS_PATH_1" ]; then
LAYERS_PATH=$LAYERS_PATH_1
elif [ -e "$LAYERS_PATH_2" ]; then
LAYERS_PATH=$LAYERS_PATH_2
elif [ -e "$LAYERS_PATH_3" ]; then
LAYERS_PATH=$LAYERS_PATH_3
else
echo "Missing nvidia_layers.json file."
echo "Typical paths:";
echo "- /usr/share/vulkan/icd.d/nvidia_layers.json";
echo "- /usr/share/vulkan/implicit_layer.d/nvidia_layers.json";
echo "- /etc/vulkan/implicit_layer.d/nvidia_layers.json";
echo "You can google nvidia_layers.json for your distro to find the correct path.";
echo "Consider updating your driver to 525 if you cannot find the file.";
echo "To continue update the LAYERS_PATH_1 at the top of the run_docker.sh file and retry";
exit;
fi

if [ ! -e "$EGL_VENDOR_PATH" ]; then
echo "Missing ${EGL_VENDOR_PATH} file."
echo "(default path: /usr/share/vulkan/icd.d/nvidia_icd.json)";
echo "To continue update the EGL_VENDOR_PATH at the top of the run_docker.sh file and retry";
exit;
fi

# Move directories from their legacy paths.
if [ -e "${DATA_PATH}/og_dataset" ]; then
mv "${DATA_PATH}/og_dataset" "${DATA_PATH}/datasets/og_dataset"
@@ -117,9 +68,6 @@ docker run \
-e DISPLAY=${DOCKER_DISPLAY} \
-e OMNIGIBSON_HEADLESS=${OMNIGIBSON_HEADLESS} \
-v $DATA_PATH/datasets:/data \
-v ${ICD_PATH}:/etc/vulkan/icd.d/nvidia_icd.json \
-v ${LAYERS_PATH}:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
-v ${EGL_VENDOR_PATH}:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
-v $DATA_PATH/isaac-sim/cache/kit:/isaac-sim/kit/cache/Kit:rw \
-v $DATA_PATH/isaac-sim/cache/ov:/root/.cache/ov:rw \
-v $DATA_PATH/isaac-sim/cache/pip:/root/.cache/pip:rw \
4 changes: 2 additions & 2 deletions docs/getting_started/building_blocks.md
@@ -11,7 +11,7 @@ icon: octicons/package-16

??? question annotate "Why macros?"

Macros enforce global behavior that is consistent within an individual python process but can differ between processes. This is useful because globally enabling all of **`OmniGibson`**'s features can cause unecessary slowdowns, and so configuring the macros for your specific use case can optimize performance.
Macros enforce global behavior that is consistent within an individual python process but can differ between processes. This is useful because globally enabling all of **`OmniGibson`**'s features can cause unnecessary slowdowns, and so configuring the macros for your specific use case can optimize performance.

For example, Omniverse provides a so-called `flatcache` feature which provides significant performance boosts, but cannot be used when fluids or soft bodies are present. So, we ideally should always have `gm.USE_FLATCACHE=True` unless we have fluids or soft bodies in our environment.
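
As a quick illustration, here is a minimal sketch of toggling this macro before an environment is created. The `gm.USE_FLATCACHE` attribute name comes from the paragraph above; the `from omnigibson.macros import gm` import path is an assumption.

```python
# Minimal sketch: configure macros once, before any environment is built.
# Assumption: macros live in `omnigibson.macros` and are plain attributes on `gm`.
from omnigibson.macros import gm

# Flatcache gives a large speedup but cannot be used with fluids or soft
# bodies, so enable it only for rigid-body-only scenes.
gm.USE_FLATCACHE = True
```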

Expand Down Expand Up @@ -407,7 +407,7 @@ python -m omnigibson.examples.object_states.particle_source_sink_demo

This demo loads in a sink, which is enabled with both the ParticleSource and ParticleSink states. The sink's particle source is located at the faucet spout and spawns a continuous stream of water particles, which is then destroyed ("sunk") by the sink's particle sink located at the drain.

??? note "Difference bewteen `ParticleApplier/Removers` and `ParticleSource/Sinks`"
??? note "Difference between `ParticleApplier/Removers` and `ParticleSource/Sinks`"
The key difference between `ParticleApplier/Removers` and `ParticleSource/Sinks` is that `Applier/Removers`
requires contact (if using `ParticleProjectionMethod.ADJACENCY`) or overlap
(if using `ParticleProjectionMethod.PROJECTION`) in order to spawn / remove particles, and generally only spawn
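
For illustration, a hedged sketch of how the two families of states might be inspected on a loaded object; the `omnigibson.object_states` import and the per-object `states` dictionary are assumptions about the object-state API, not part of this diff.

```python
# Sketch only, under the assumptions stated above.
from omnigibson import object_states

def report_particle_states(obj):
    """Print which particle-related states a loaded object exposes."""
    # Source/Sink spawn or destroy particles at a fixed location on the object;
    # Applier/Remover additionally need contact or overlap with particles.
    for state_cls in (object_states.ParticleSource, object_states.ParticleSink,
                      object_states.ParticleApplier, object_states.ParticleRemover):
        if state_cls in obj.states:
            print(f"{obj.name}: has {state_cls.__name__}")
```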
2 changes: 2 additions & 0 deletions omnigibson/robots/__init__.py
@@ -9,3 +9,5 @@
from omnigibson.robots.fetch import Fetch
from omnigibson.robots.tiago import Tiago
from omnigibson.robots.two_wheel_robot import TwoWheelRobot
from omnigibson.robots.franka import FrankaPanda
from omnigibson.robots.franka_allegro import FrankaAllegro
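
With these two lines, both Franka variants are re-exported from the package root alongside the existing robots, so `from omnigibson.robots import FrankaPanda, FrankaAllegro` now resolves without reaching into the submodules. A minimal usage sketch follows; the robot class names come from the diff above, while the config keys and `og.Environment` call are assumptions about OmniGibson's environment-config pattern.

```python
# Sketch: select one of the newly exported robots via an environment config.
# Assumption: the "scene"/"robots"/"type" config keys and og.Environment(configs=...)
# follow OmniGibson's documented config pattern.
import omnigibson as og

cfg = {
    "scene": {"type": "Scene"},
    "robots": [{"type": "FrankaPanda", "obs_modalities": ["rgb"]}],
}
env = og.Environment(configs=cfg)
```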
