Fixed save video printouts #328

Open · wants to merge 52 commits into base: master

Changes from 1 commit · 52 commits

Commits
c7c616b
modifies behavior_mp_env to allow planning with HLA's
NishanthJKumar Oct 10, 2021
3323fee
Merge pull request #1 from NishanthJKumar/master
NishanthJKumar Oct 10, 2021
1122a45
modifies behavior_mp_env to reveal action_exec_status
NishanthJKumar Oct 10, 2021
fb82c73
includes new tamp env for behavior to integrate with sample-based TAM…
NishanthJKumar Oct 11, 2021
f211023
add mac installation instructions
tomsilver Oct 13, 2021
71da6c9
add njk sorting books yaml
tomsilver Oct 16, 2021
b322e81
add reshelving books yaml
tomsilver Oct 18, 2021
a1c65ab
includes new grasp and navigation parameterized HLA's!
NishanthJKumar Oct 19, 2021
f1f6818
Merge pull request #2 from NishanthJKumar/behavior_planning_wrapper
NishanthJKumar Oct 20, 2021
5964228
changes to enable navigateTo option
NishanthJKumar Oct 26, 2021
d9c75b6
Merge pull request #3 from NishanthJKumar/behavior_planning_wrapper
NishanthJKumar Oct 26, 2021
6ad053b
updates to enable navigateTo option to work deterministically
NishanthJKumar Nov 4, 2021
6fb3169
Merge pull request #4 from NishanthJKumar/behavior_planning_wrapper
NishanthJKumar Nov 4, 2021
bee4e96
increases RRT search iters and removes ghost hands
NishanthJKumar Nov 9, 2021
714a624
enables passing random seed to sampling functions
NishanthJKumar Nov 9, 2021
fa9a776
minor changes to enforce seeded, predictable randomness
NishanthJKumar Nov 17, 2021
cdacb11
minor changes to try to get grasp option working
NishanthJKumar Dec 21, 2021
36f86f3
minor deletion of comment
NishanthJKumar Dec 31, 2021
7a388d4
adds minor comment to code that gets grasped objects
NishanthJKumar Dec 31, 2021
7a7d667
small change to get basic pick and place working with predicators!
NishanthJKumar Jan 1, 2022
3e6b995
adds changes that fix state reloading bug
NishanthJKumar Jan 18, 2022
7f8165d
Merge branch 'master' of https://github.com/StanfordVL/iGibson into S…
NishanthJKumar Jan 18, 2022
f182c07
Merge branch 'StanfordVL-master'. Necessary for QoL and to fix state …
NishanthJKumar Jan 18, 2022
9ad900f
Revert "Checking and maybe updating to latest changes on SVL Master"
NishanthJKumar Jan 18, 2022
493e795
Merge pull request #6 from Learning-and-Intelligent-Systems/revert-5-…
NishanthJKumar Jan 18, 2022
404a36f
updates to get load_checkpoint to correctly update visually
NishanthJKumar Jan 19, 2022
1bb2aad
minor changes to get grasping with option models working!
NishanthJKumar Jan 19, 2022
a417ff0
added behavior task and house command line arguments
Jan 22, 2022
7e18fd1
changes necessary for online_sampling to produce randomized scenes
NishanthJKumar Jan 26, 2022
74d97aa
adds random seed for placeOnTop sampler
NishanthJKumar Jan 27, 2022
6cd759f
changes to dynamically edit online_sampling
NishanthJKumar Jan 28, 2022
a6c4fda
removed unnecessary print
NishanthJKumar Jan 28, 2022
31122fa
minor changes to get online sampling working!
NishanthJKumar Jan 29, 2022
08365c0
minor changes for behavior hacking/debugging
wmcclinton May 30, 2022
fa2326c
removes lots of unnecessary prints
NishanthJKumar Jul 15, 2022
e9f3d6f
Modifies onTop classifier to work correctly for 'shelf' category
NishanthJKumar Sep 11, 2022
9fec253
modify opengl variable name in shader
NishanthJKumar Sep 16, 2022
cdc5377
gitignore update
NishanthJKumar Sep 16, 2022
ce75e2d
commented out lines that cause bug
NishanthJKumar Sep 16, 2022
599a566
change the way modify_config_file works
NishanthJKumar Sep 28, 2022
9b8cb60
changes naming to actually be unique
NishanthJKumar Sep 28, 2022
1c9d6e8
uncomments lines necessary for viz
NishanthJKumar Oct 7, 2022
ad30a13
prevents error from setting simulator viewer
NishanthJKumar Oct 7, 2022
7a59eb4
Fix isinstance check for multiobject state data load
jorge-a-mendez Oct 7, 2022
768b4ea
Fix OpenGL shading language version
jorge-a-mendez Oct 7, 2022
dc92386
remove sampling rejection printouts
NishanthJKumar Oct 14, 2022
eb7461c
added save-video functionality
kathrynle20 Mar 30, 2023
77bd079
save video works
kathrynle20 Apr 5, 2023
e602598
feedback
NishanthJKumar Apr 18, 2023
171eafd
feedback
NishanthJKumar Apr 18, 2023
7c39e57
fixes
NishanthJKumar Apr 18, 2023
6b4b54f
deleted filename printing
kathrynle20 Apr 26, 2023
includes new grasp and navigation parameterized HLA's!
* also includes utility functions to save state compatible with RetroPlan!
NishanthJKumar committed Oct 19, 2021
commit a1c65abdecb93527c7e24d6827f2299c03185efe
153 changes: 88 additions & 65 deletions igibson/envs/behavior_tamp_env.py
@@ -6,6 +6,7 @@
import gym.spaces
import numpy as np
import pybullet as p
import scipy.spatial.transform

from igibson import object_states
from igibson.envs.behavior_env import BehaviorEnv
@@ -17,7 +18,7 @@
from igibson.utils.behavior_robot_planning_utils import dry_run_base_plan, plan_base_motion_br, plan_hand_motion_br

NUM_ACTIONS = 6
MAX_ACTION_CONT_PARAMS = 2
MAX_ACTION_CONT_PARAMS = 4


class ActionPrimitives(IntEnum):
@@ -133,47 +134,47 @@ def reset_and_release_hand(self):
self.robots[0].parts["right_hand"].trigger_fraction = 0
p.stepSimulation()

def step(self, action, continuous_params):
"""
:param action: an integer such that action % self.num_objects yields the object id, and
action // self.num_objects yields the correct action enum
:param continuous_params: a numpy vector of length MAX_ACTION_CONT_PARAMS. This represents
values derived from the continuous parameters of the action.
"""

obj_list_id = int(action) % self.num_objects
action_primitive = int(action) // self.num_objects
obj = self.task_relevant_objects[obj_list_id]

assert continuous_params.shape == (MAX_ACTION_CONT_PARAMS,)

if not (isinstance(obj, BRBody) or isinstance(obj, BRHand) or isinstance(obj, BREye)):
if action_primitive == ActionPrimitives.NAVIGATE_TO:
if self.navigate_to_obj_pos(obj, continuous_params[0:2], use_motion_planning=self.use_motion_planning):
logging.debug("PRIMITIVE: navigate to {} success".format(obj.name))
else:
logging.debug("PRIMITIVE: navigate to {} fail".format(obj.name))

elif action_primitive == ActionPrimitives.GRASP:
if self.obj_in_hand is None:
if isinstance(obj, URDFObject) and hasattr(obj, "states") and object_states.AABB in obj.states:
lo, hi = obj.states[object_states.AABB].get_value()
volume = get_aabb_volume(lo, hi)
if (
volume < 0.2 * 0.2 * 0.2 and not obj.main_body_is_fixed
): # say we can only grasp small objects
if (
np.linalg.norm(np.array(obj.get_position()) - np.array(self.robots[0].get_position()))
< 2
):
self.grasp_obj(obj, use_motion_planning=self.use_motion_planning)
logging.debug(
"PRIMITIVE: grasp {} success, obj in hand {}".format(obj.name, self.obj_in_hand)
)
else:
logging.debug("PRIMITIVE: grasp {} fail, too far".format(obj.name))
else:
logging.debug("PRIMITIVE: grasp {} fail, too big or fixed".format(obj.name))
# def step(self, action, continuous_params):
# """
# :param action: an integer such that action % self.num_objects yields the object id, and
# action // self.num_objects yields the correct action enum
# :param continuous_params: a numpy vector of length MAX_ACTION_CONT_PARAMS. This represents
# values derived from the continuous parameters of the action.
# """

# obj_list_id = int(action) % self.num_objects
# action_primitive = int(action) // self.num_objects
# obj = self.task_relevant_objects[obj_list_id]

# assert continuous_params.shape == (MAX_ACTION_CONT_PARAMS,)

# if not (isinstance(obj, BRBody) or isinstance(obj, BRHand) or isinstance(obj, BREye)):
# if action_primitive == ActionPrimitives.NAVIGATE_TO:
# if self.navigate_to_obj_pos(obj, continuous_params[0:2], use_motion_planning=self.use_motion_planning):
# logging.debug("PRIMITIVE: navigate to {} success".format(obj.name))
# else:
# logging.debug("PRIMITIVE: navigate to {} fail".format(obj.name))

# elif action_primitive == ActionPrimitives.GRASP:
# if self.obj_in_hand is None:
# if isinstance(obj, URDFObject) and hasattr(obj, "states") and object_states.AABB in obj.states:
# lo, hi = obj.states[object_states.AABB].get_value()
# volume = get_aabb_volume(lo, hi)
# if (
# volume < 0.2 * 0.2 * 0.2 and not obj.main_body_is_fixed
# ): # say we can only grasp small objects
# if (
# np.linalg.norm(np.array(obj.get_position()) - np.array(self.robots[0].get_position()))
# < 2
# ):
# self.grasp_obj(obj, use_motion_planning=self.use_motion_planning)
# logging.debug(
# "PRIMITIVE: grasp {} success, obj in hand {}".format(obj.name, self.obj_in_hand)
# )
# else:
# logging.debug("PRIMITIVE: grasp {} fail, too far".format(obj.name))
# else:
# logging.debug("PRIMITIVE: grasp {} fail, too big or fixed".format(obj.name))

# elif action_primitive == ActionPrimitives.PLACE_ONTOP:
# if self.obj_in_hand is not None and self.obj_in_hand != obj:
@@ -297,9 +298,9 @@ def step(self, action, continuous_params):
# else:
# logging.debug("PRIMITIVE close failed, too far")

state, reward, done, info = super(BehaviorTAMPEnv, self).step(np.zeros(17))
logging.debug("PRIMITIVE satisfied predicates:", info["satisfied_predicates"])
return state, reward, done, info
# state, reward, done, info = super(BehaviorTAMPEnv, self).step(np.zeros(17))
# logging.debug("PRIMITIVE satisfied predicates:", info["satisfied_predicates"])
# return state, reward, done, info

def step_with_exec_info(self, action, continuous_params):
"""
@@ -319,7 +320,7 @@ def step_with_exec_info(self, action, continuous_params):
print("PRIMITIVE: navigate to {} success".format(obj.name))
action_exec_status = True
else:
print("PRIMITIVE: navigate to {} fail".format(obj.name))
print(f"PRIMITIVE: navigate to {obj.name} with params {continuous_params[0:2]} fail")

elif action_primitive == ActionPrimitives.GRASP:
if self.obj_in_hand is None:
@@ -333,7 +334,7 @@ def step_with_exec_info(self, action, continuous_params):
np.linalg.norm(np.array(obj.get_position()) - np.array(self.robots[0].get_position()))
< 2
):
self.grasp_obj(obj, use_motion_planning=self.use_motion_planning)
self.grasp_obj_at_pos(obj, continuous_params[0:4], use_motion_planning=self.use_motion_planning)
print(
"PRIMITIVE: grasp {} success, obj in hand {}".format(obj.name, self.obj_in_hand)
)
@@ -342,6 +343,8 @@ def step_with_exec_info(self, action, continuous_params):
print("PRIMITIVE: grasp {} fail, too far".format(obj.name))
else:
print("PRIMITIVE: grasp {} fail, too big or fixed".format(obj.name))
else:
print("PRIMITIVE: grasp {} fail, agent already has an object in hand!".format(obj.name))

# elif action_primitive == ActionPrimitives.PLACE_ONTOP:
# if self.obj_in_hand is not None and self.obj_in_hand != obj:
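
For orientation, a hedged sketch of how a caller might drive the new step_with_exec_info interface; env is assumed to be an already-constructed BehaviorTAMPEnv, the helper name try_grasp is made up, and the numeric parameter values are purely illustrative. The five-value return matches the signature shown just below:

import numpy as np

def try_grasp(env, obj_list_id):
    # Illustrative caller: pack (primitive, object index) into one integer and
    # pass the 4-entry continuous parameter vector alongside it.
    num_objects = len(env.task_relevant_objects)
    action = int(ActionPrimitives.GRASP) * num_objects + obj_list_id
    # (dx, dy, dz) grasp offset from the object's position, plus an extra
    # rotation of the hand about its z-axis.
    continuous_params = np.array([0.05, 0.0, 0.10, np.pi / 6])
    state, reward, done, info, action_exec_status = env.step_with_exec_info(action, continuous_params)
    if not action_exec_status:
        print("grasp primitive failed; resample the grasp offset and retry")
    return action_exec_status
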
@@ -479,16 +482,18 @@ def step_with_exec_info(self, action, continuous_params):
print("PRIMITIVE satisfied predicates:", info["satisfied_predicates"])
return state, reward, done, info, action_exec_status

def grasp_obj(self, obj, use_motion_planning=False):
def grasp_obj_at_pos(self, obj, grasp_offset_and_z_rot, use_motion_planning=False):
if use_motion_planning:
x, y, _ = obj.get_position()
z = obj.states[object_states.AABB].get_value()[1][2]
x, y, z = obj.get_position()
x += grasp_offset_and_z_rot[0]
y += grasp_offset_and_z_rot[1]
z += grasp_offset_and_z_rot[2]
hand_x, hand_y, hand_z = self.robots[0].parts["right_hand"].get_position()

# add a little randomness to avoid getting stuck
x += np.random.uniform(-0.025, 0.025)
y += np.random.uniform(-0.025, 0.025)
z += np.random.uniform(-0.025, 0.025)
# # add a little randomness to avoid getting stuck
# x += np.random.uniform(-0.025, 0.025)
# y += np.random.uniform(-0.025, 0.025)
# z += np.random.uniform(-0.025, 0.025)

minx = min(x, hand_x) - 0.5
miny = min(y, hand_y) - 0.5
@@ -497,12 +502,31 @@ def grasp_obj(self, obj, use_motion_planning=False):
maxy = max(y, hand_y) + 0.5
maxz = max(z, hand_z) + 0.5

# compute the angle the hand must be in such that it can grasp the object from its current offset position
# This involves aligning the z-axis (in the world frame) of the hand with the vector that goes from the hand
# to the object. We can find the rotation matrix that accomplishes this rotation by following:
# https://math.stackexchange.com/questions/180418/calculate-rotation-matrix-to-align-vector-a-to-vector-b-in-3d
hand_to_obj_vector = np.array([x - hand_x, y - hand_y, z - hand_z])
hand_to_obj_unit_vector = hand_to_obj_vector / np.linalg.norm(hand_to_obj_vector)
unit_z_vector = np.array([0, 0, 1])
c = np.dot(unit_z_vector, hand_to_obj_unit_vector)
if not c == -1.0:
v = np.cross(unit_z_vector, hand_to_obj_unit_vector)
s = np.linalg.norm(v)
v_x = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
R = np.eye(3) + v_x + np.linalg.matrix_power(v_x, 2) * ((1-c)/(s ** 2))
r = scipy.spatial.transform.Rotation.from_matrix(R)
euler_angles = r.as_euler('xyz')
euler_angles[2] += grasp_offset_and_z_rot[3]
else:
euler_angles = np.array([0.0, np.pi, 0.0])

state = p.saveState()
# plan a motion to above the object
plan = plan_hand_motion_br(
robot=self.robots[0],
obj_in_hand=None,
end_conf=[x, y, z + 0.05, 0, np.pi * 5 / 6.0, np.random.uniform(-np.pi, np.pi)],
end_conf=[x, y, z, euler_angles[0], euler_angles[1], euler_angles[2]],
hand_limits=((minx, miny, minz), (maxx, maxy, maxz)),
obstacles=self.get_body_ids(include_self=True),
)
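
The block above implements the align-z-axis-to-vector rotation from the linked StackExchange answer. Below is a self-contained sketch of that math with a scipy round-trip check; the function name align_z_to is hypothetical, and the antiparallel fallback mirrors the one in the code above:

import numpy as np
from scipy.spatial.transform import Rotation

def align_z_to(target):
    """Euler angles (xyz) that rotate the world z-axis onto `target`."""
    unit_z = np.array([0.0, 0.0, 1.0])
    t = np.asarray(target, dtype=float)
    t = t / np.linalg.norm(t)
    c = np.dot(unit_z, t)
    if np.isclose(c, 1.0):
        return np.zeros(3)                  # already aligned
    if np.isclose(c, -1.0):
        return np.array([0.0, np.pi, 0.0])  # antiparallel: same fallback as above
    v = np.cross(unit_z, t)
    s = np.linalg.norm(v)
    v_x = np.array([[0.0, -v[2], v[1]],
                    [v[2], 0.0, -v[0]],
                    [-v[1], v[0], 0.0]])
    R = np.eye(3) + v_x + v_x @ v_x * ((1.0 - c) / (s ** 2))
    return Rotation.from_matrix(R).as_euler("xyz")

# Sanity check: after the rotation, the z-axis points from the hand toward the object.
direction = np.array([0.3, -0.2, 0.5])
rotated_z = Rotation.from_euler("xyz", align_z_to(direction)).apply([0.0, 0.0, 1.0])
assert np.allclose(rotated_z, direction / np.linalg.norm(direction), atol=1e-6)
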
@@ -633,7 +657,7 @@ def sample_fn(self):
theta = np.random.uniform(*CIRCULAR_LIMITS)
return (x, y, theta)

def navigate_to_obj_pos(self, obj, to_pos, use_motion_planning=False):
def navigate_to_obj_pos(self, obj, pos_offset, use_motion_planning=False):
"""
navigates the robot to a certain x,y position and selects an orientation
such that the robot is facing the object. If the navigation is infeasible,
@@ -648,25 +672,23 @@ def navigate_to_obj_pos(self, obj, to_pos, use_motion_planning=False):
valid_position = None # ((x,y,z),(roll, pitch, yaw))
original_position = self.robots[0].get_position()
original_orientation = self.robots[0].get_orientation()
pos = [to_pos[0], to_pos[1], self.robots[0].initial_z_offset]

if isinstance(obj, URDFObject):
if isinstance(obj, URDFObject): # must be a URDFObject so we can get its position!
obj_pos = obj.get_position()
pos_diff = np.array(obj_pos) - np.array(pos)
yaw_angle = np.arctan2(pos_diff[1], pos_diff[0])
pos = [pos_offset[0] + obj_pos[0], pos_offset[1] + obj_pos[1], self.robots[0].initial_z_offset]
yaw_angle = np.arctan2(pos_offset[1], pos_offset[0])
orn = [0, 0, yaw_angle]
self.robots[0].set_position_orientation(pos, p.getQuaternionFromEuler(orn))
eye_pos = self.robots[0].parts["eye"].get_position()
ray_test_res = p.rayTest(eye_pos, obj_pos)
blocked = len(ray_test_res) > 0 and ray_test_res[0][0] != obj.get_body_id()
# Test whether the robot's line of sight to the object is obstructed by some body, but make
# sure that body is neither the robot's own body nor the object we want to pick up!
blocked = len(ray_test_res) > 0 and (ray_test_res[0][0] not in (self.robots[0].parts["body"].get_body_id(), obj.get_body_id()))
if not detect_robot_collision(self.robots[0]) and not blocked:
valid_position = (pos, orn)
else:
orn = [0, 0, np.pi]
if not detect_robot_collision(self.robots[0]):
valid_position = (pos, orn)

if valid_position is not None:
print("Position commanded is not in collision!")
if use_motion_planning:
self.robots[0].set_position_orientation(original_position, original_orientation)
plan = plan_base_motion_br(
@@ -692,6 +714,7 @@ def navigate_to_obj_pos(self, obj, to_pos, use_motion_planning=False):
self.robots[0].set_position_orientation(valid_position[0], p.getQuaternionFromEuler(valid_position[1]))
return True
else:
print("Position commanded is in collision!")
self.robots[0].set_position_orientation(original_position, original_orientation)
return False
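
A hedged sketch of the line-of-sight test added above; the helper name is hypothetical, and treating pybullet's -1 no-hit id as unblocked is an assumption on top of what the diff does:

import pybullet as p

def pose_is_blocked(eye_pos, obj_pos, robot_body_id, obj_body_id):
    # Cast a single ray from the robot's eye to the object; the commanded pose is
    # treated as blocked only if the first hit is some other body.
    ray_test_res = p.rayTest(eye_pos, obj_pos)
    if len(ray_test_res) == 0:
        return False
    first_hit_body = ray_test_res[0][0]  # pybullet reports -1 when the ray hits nothing
    return first_hit_body not in (-1, robot_body_id, obj_body_id)
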

43 changes: 43 additions & 0 deletions igibson/utils/checkpoint_utils.py
@@ -5,6 +5,20 @@
import pybullet as p


def save_task_relevant_state(env, root_directory, filename="behavior_dump"):
json_path = os.path.join(root_directory, f"{filename}.json")

# Save the dump in a file.
with open(json_path, "w") as f:
json.dump(save_task_relevant_object_and_robot_states(env), f)

def save_sim_urdf_object_state(sim, root_directory, filename="behavior_dump"):
json_path = os.path.join(root_directory, f"{filename}.json")

# Save the dump in a file.
with open(json_path, "w") as f:
json.dump(save_all_scene_object_and_robot_states(sim), f)

def save_checkpoint(simulator, root_directory):
bullet_path = os.path.join(root_directory, "%d.bullet" % simulator.frame_count)
json_path = os.path.join(root_directory, "%d.json" % simulator.frame_count)
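
Hypothetical usage of the two new save helpers; env is assumed to be an already-constructed behavior environment with a .simulator attribute, and the directory and filenames are illustrative:

import os

def dump_states(env, dump_dir="checkpoints/run_0"):
    os.makedirs(dump_dir, exist_ok=True)
    # Only the task-relevant objects plus robots, keyed by bddl_object_scope:
    save_task_relevant_state(env, dump_dir, filename="pre_grasp")
    # Every URDFObject in the scene plus robots:
    save_sim_urdf_object_state(env.simulator, dump_dir, filename="full_scene")
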
@@ -54,3 +68,32 @@ def load_internal_states(simulator, dump):
robot_dumps = dump["robots"]
for robot, robot_dump in zip(simulator.robots, robot_dumps):
robot.load_state(robot_dump)


def save_task_relevant_object_and_robot_states(env):
# Dump the object state.
object_dump = {}
for obj in env.task_relevant_objects:
object_dump[obj.bddl_object_scope] = {'metadata': obj.metadata, 'asset_path': obj.model_path, 'pose': tuple(obj.get_position_orientation()), 'scale': tuple(obj.scale)}

# Dump the robot state.
robot_dump = []
for robot in env.simulator.robots:
robot_dump.append(robot.dump_state())

return {"objects": object_dump, "robots": robot_dump}

def save_all_scene_object_and_robot_states(sim):
# Dump the object state, but only for objects of type URDFObject
# that are in the sim.
object_dump = {}
for obj in sim.scene.get_objects():
if 'URDFObject' in str(type(obj)):
object_dump[obj.bddl_object_scope] = {'metadata': obj.metadata, 'asset_path': obj.model_path, 'pose': tuple(obj.get_position_orientation()), 'scale': tuple(obj.scale)}

# Dump the robot state.
robot_dump = []
for robot in sim.robots:
robot_dump.append(robot.dump_state())

return {"objects": object_dump, "robots": robot_dump}