# Working with the simulator
Tutorial on how to work with the simulator.
Prerequisite: Follow the instructions on the Environment Setup Page.
The simulator is implemented in `husky_ur5.py`, which interfaces with the web app `app.py` to create a crowd-sourcing platform; this wiki covers only how to work with the simulator itself. The goal specifications for `$DOMAIN` are encoded in `jsons/$DOMAIN_goals/*.json`, and the scene specifications for `$DOMAIN` in `jsons/$DOMAIN_worlds/*.json`. Goal specifications are declarative: a list of constraints that must hold. Scene specifications are a list of object names with initial position, orientation and possible states (as a list of discrete positions and orientations).

Sample plan inputs for the simulator are encoded in `jsons/input_$DOMAIN.json` as a list of symbolic actions. The possible symbolic actions and their decomposition into sub-symbolic actions are defined in `src/actions.py`. A sample plan file is shown below.
```json
{
  "actions": [
    {"name": "changeState",    "args": ["cupboard", "open"]},
    {"name": "pickNplaceAonB", "args": ["apple", "box"]},
    {"name": "pickNplaceAonB", "args": ["banana", "box"]},
    {"name": "pickNplaceAonB", "args": ["orange", "box"]},
    {"name": "pickNplaceAonB", "args": ["box", "cupboard"]}
  ]
}
```
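A plan file of this form can be parsed and dispatched with standard JSON tooling. The sketch below illustrates the idea; the handler functions are hypothetical stand-ins, since the real sub-symbolic decompositions live in `src/actions.py`.

```python
import json

# Hypothetical stand-ins for the sub-symbolic decompositions that
# src/actions.py actually implements.
def change_state(obj, state):
    return f"changeState({obj}, {state})"

def pick_n_place_a_on_b(a, b):
    return f"pickNplaceAonB({a}, {b})"

HANDLERS = {
    "changeState": change_state,
    "pickNplaceAonB": pick_n_place_a_on_b,
}

def run_plan(plan_text):
    """Parse a plan file and dispatch each symbolic action in order."""
    plan = json.loads(plan_text)
    return [HANDLERS[a["name"]](*a["args"]) for a in plan["actions"]]

plan_text = '{"actions": [{"name": "changeState", "args": ["cupboard", "open"]}]}'
print(run_plan(plan_text))  # → ['changeState(cupboard, open)']
```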
To run the simulator with a world scene specification `$SCENE_PATH`, goal specification `$GOAL_PATH` and input plan `$INPUT_PATH`, use the following command:

```
python3 husky_ur5.py --world $SCENE_PATH --goal $GOAL_PATH --input $INPUT_PATH --display tp
```
For example, to run the simulator with the custom plan in `jsons/input_home.json` on `world_home0` with the goal of transporting fruits to the cupboard, use the following command:

```
python3 husky_ur5.py --world jsons/home_worlds/world_home0.json --goal jsons/home_goals/goal2-fruits-cupboard.json --input jsons/input_home.json --display tp
```
This command opens an OpenGL simulator window. Click on the window and press G to minimize the debugging tabs. Use the mouse scroll wheel to zoom in or out, Ctrl+scroll-click to pan, and Ctrl+left-click to rotate the camera.

Note: without the `--display tp` option, the simulator instead saves a sequence of first-person images in the `logs/` directory.
When plan execution finishes, the simulator closes and the datapoint is saved as a `test.datapoint` file.
To add a new object to the scene, follow the steps below:
- Create or download an object (OBJ) file and save it, along with its corresponding material (MTL) texture file, in `models/meshes/$OBJECT_NAME/`. For example, to create an apple object, add the two files `models/meshes/apple/10162_Apple_v01_l3.obj` and `models/meshes/apple/10162_Apple_v01_l3.mtl`. Make sure the MTL file is referenced inside the OBJ file with a relative path, not an absolute one. Free3D and CGTrader are examples of online repositories of 3D objects.
- Create a Unified Robot Description Format (URDF) file for the object. For example, for the apple object, create a new file `models/urdf/apple.urdf`. The URDF file specifies the contact, inertial, visual and collision details, including the OBJ file path and scale. The URDF file for the apple object is shown below:
```xml
<?xml version="1.0" encoding="utf-8"?>
<robot name="apple">
  <link name="base_link">
    <contact>
      <lateral_friction value="0.5"/>
      <rolling_friction value="0.0"/>
      <contact_cfm value="0.0"/>
      <contact_erp value="1.0"/>
    </contact>
    <inertial>
      <origin rpy="0 0 0" xyz="0 0 0"/>
      <mass value="5"/>
      <inertia ixx="0" ixy="0" ixz="0" iyy="0" iyz="0" izz="0"/>
    </inertial>
    <visual>
      <origin rpy="1.57 0 0" xyz="0 0 0"/>
      <geometry>
        <mesh filename="models/meshes/apple/10162_Apple_v01_l3.obj" scale="0.0007 0.0007 0.0007"/>
      </geometry>
      <material name="tray_material">
        <color rgba="1 0.03 0 1"/>
      </material>
    </visual>
    <collision>
      <origin rpy="1.57 0 0" xyz="0 0 0"/>
      <geometry>
        <mesh filename="models/meshes/apple/10162_Apple_v01_l3.obj" scale="0.0006 0.0007 0.0007"/>
      </geometry>
    </collision>
  </link>
</robot>
```
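Since mesh references must be relative, a quick sanity check can parse a URDF and flag absolute mesh paths before adding the object. This is a minimal standard-library sketch, not part of the simulator:

```python
import xml.etree.ElementTree as ET

def mesh_entries(urdf_text):
    """Return (filename, scale) for every <mesh> element in a URDF string,
    rejecting absolute mesh paths, which break when the repository moves."""
    root = ET.fromstring(urdf_text)
    entries = []
    for mesh in root.iter("mesh"):
        filename = mesh.get("filename", "")
        if filename.startswith("/"):
            raise ValueError(f"absolute mesh path: {filename}")
        entries.append((filename, mesh.get("scale")))
    return entries

urdf = """<robot name="apple"><link name="base_link"><visual><geometry>
<mesh filename="models/meshes/apple/10162_Apple_v01_l3.obj" scale="0.0007 0.0007 0.0007"/>
</geometry></visual></link></robot>"""
print(mesh_entries(urdf))
```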
- Add the specification of the object to the `jsons/objects.json` file. Specify the name, the URDF file path, and a list of constraints, which may include "noise" (Gaussian noise is added to the position when placing the object in the scene) and "horizontal" (the object must be kept horizontal at all times). Also specify: the distance tolerance; the constraint link (default -1); `constraint_cpos`, the position relative to this object's center where another object rests when placed on it; `constraint_pos`, the position relative to this object's center when it is placed on another object; `ur5_dist`, the distance kept from the UR5 gripper when the object is constrained to it; and object properties such as "Movable", "Surface", "Printable", "Switchable", etc. The specification for the apple object is given below.
```json
{
  "name": "apple",
  "urdf": "models/urdf/apple.urdf",
  "constraints": ["noise"],
  "tolerance": 1.2,
  "constraint_link": -1,
  "constraint_cpos": [[0, 0, -0.05]],
  "constraint_pos": [],
  "ur5_dist": [0.3, 0, 0],
  "properties": ["Movable"],
  "size": [0.2, 0.2, 0.2]
},
```
- Add the object to the scene. For example, the description of the apple object in `world_home0` is given below.
```json
{
  "name": "apple",
  "position": [-1, -4.5, 0.7],
  "orientation": [],
  "states": []
},
```
- Add the ConceptNet embedding of this object in `jsons/embeddings/conceptnet.vectors` and its FastText embedding in `jsons/embeddings/fasttext.vectors`.
To add a goal specification, create a goal file, say `jsons/home_goals/goal1_milk_fridge.json`, and specify the declarative goal predicates as a list. An example is shown below. The implementation of goal checking can be found in the `checkGoal()` function in `src/utils.py`.
```json
{
  "goals": [
    {
      "object": "milk",
      "target": "fridge",
      "state": "",
      "position": "",
      "tolerance": 0
    },
    {
      "object": "fridge",
      "target": "",
      "state": "close",
      "position": "",
      "tolerance": 0
    }
  ],
  "text": "Put the milk carton inside the fridge.",
  "goal-objects": ["milk", "fridge"],
  "goal-vector": [float list of size 300]
}
```
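These declarative predicates are evaluated against a symbolic world state. The sketch below is a simplified stand-in for `checkGoal()` in `src/utils.py`, using a hypothetical world-state representation (a set of containment pairs plus a state map):

```python
def goals_satisfied(goals, world):
    """Check declarative goal predicates against a world state.
    world: {"inside": set of (object, container) pairs,
            "state":  mapping object -> current state}."""
    for g in goals:
        # A non-empty "target" requires the object to be inside the target.
        if g.get("target") and (g["object"], g["target"]) not in world["inside"]:
            return False
        # A non-empty "state" requires the object to be in that state.
        if g.get("state") and world["state"].get(g["object"]) != g["state"]:
            return False
    return True

world = {"inside": {("milk", "fridge")}, "state": {"fridge": "close"}}
goals = [
    {"object": "milk", "target": "fridge", "state": ""},
    {"object": "fridge", "target": "", "state": "close"},
]
print(goals_satisfied(goals, world))  # → True
```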
The goal vector is calculated as the average of the embeddings of all words in the goal "text"; we use ConceptNet embeddings for this. To compute it, follow the steps below (here `fname` is the path of the extracted embeddings file and `text` is the text of the new goal):
```
$ python3
>>> from src.extract_vectors import *
>>> data = load_all_vectors(fname)
>>> list(form_goal_vec(data, text))
```
For example, for the extracted file `numberbatch-en.txt` and the new goal text "put milk in cupboard", use the following:
```
$ python3
>>> from src.extract_vectors import *
>>> data = load_all_vectors("numberbatch-en.txt")
>>> list(form_goal_vec(data, "put milk in cupboard"))
```
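Assuming `form_goal_vec` averages the per-word embeddings as described above, the computation reduces to the following sketch (toy 2-dimensional vectors here in place of the real 300-dimensional ConceptNet embeddings):

```python
def average_embedding(vectors, text):
    """Average the embeddings of all words in `text`; words missing from
    the vector table (e.g. words not in the extracted file) are skipped."""
    words = [w for w in text.lower().split() if w in vectors]
    if not words:
        raise ValueError("no known words in goal text")
    dim = len(vectors[words[0]])
    return [sum(vectors[w][i] for w in words) / len(words) for i in range(dim)]

# Toy table: "in" is deliberately absent and gets skipped.
toy = {"put": [1.0, 0.0], "milk": [0.0, 1.0], "cupboard": [1.0, 1.0]}
print(average_embedding(toy, "put milk in cupboard"))  # → [0.6666666666666666, 0.6666666666666666]
```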
To visualize datapoints, use `analyze.py`, which supports several kinds of visualization:
- To visualize complete datapoints, use the `runDatapoint()` method, as in the example below. This method works only when the datapoint's plan has no errors. Note that, due to stochasticity in the simulator, a plan that worked earlier may not work again, since object positions and constraints can behave differently across runs.

  ```
  python analyze.py "runDatapoint('dataset/home/goal1-milk-fridge/world_home0/7')"
  ```
- To visualize object-centric graphs of a datapoint, use the `drawGraph()` method, as in the example below. This method saves a sequence of graphs: the initial state, and the world state after every action.

  ```
  python analyze.py "drawGraph('dataset/home/goal1-milk-fridge/world_home0/7')"
  ```
A sample graph is shown below. The agent node is shown in green, tool nodes in black, and object nodes with states in red. Close relations are shown in green (only populated for the agent), On in black, Inside in red, and Stuck-to in blue. The legend mapping node IDs to objects is given below:
Home: 0: floor, 1: walls, 2: door, 3: fridge, 4: cupboard, 5: husky, 6: table, 7: table2, 8: couch, 9: big-tray, 10: book, 11: paper, 12: paper2, 13: cube gray, 14: cube green, 15: cube red, 16: tray, 17: tray2, 18: light, 19: bottle blue, 20: bottle gray, 21: bottle red, 22: box, 23: apple, 24: orange, 25: banana, 26: chair, 27: ball, 28: stick, 29: dumpster, 30: milk, 31: shelf, 32: glue, 33: tape, 34: stool, 35: mop, 36: sponge, 37: vacuum, 38: dirt.
Factory: 0: floor warehouse, 1: 3D printer, 2: assembly station, 3: blow dryer, 4: board, 5: box, 6: brick, 7: coal, 8: crate green, 9: crate peach, 10: crate red, 11: cupboard, 12: drill, 13: gasoline, 14: generator, 15: glue, 16: hammer, 17: ladder, 18: lift, 19: long shelf, 20: mop, 21: nail, 22: oil, 23: paper, 24: part1, 25: part2, 26: part3, 27: platform, 28: screw, 29: screwdriver, 30: spraypaint, 31: stick, 32: stool, 33: table, 34: tape, 35: toolbox, 36: trolley, 37: wall warehouse, 38: water, 39: welder, 40: wood, 41: wood cutter, 42: worktable, 43: ramp, 44: husky, 45: tray.
- To print the complete datapoint, use the `printDatapoint()` method, as in the example below.

  ```
  python analyze.py "printDatapoint('dataset/home/goal1-milk-fridge/world_home0/7')"
  ```
BSD-2-Clause. Copyright (c) 2020, Shreshth Tuli, Rajas Basal, Rohan Paul and Mausam. All rights reserved.