
Does the action precede the observation? #54

Open
GuoPingPan opened this issue Dec 24, 2024 · 0 comments
I found that, when collecting data, the observation is the result returned by env.step(action). Does that mean every pair in the dataset is (a_t, o_{t+1})? @zhuyifengzju

obs, reward, done, info = env.step(action)

for j, action in enumerate(actions):

    obs, reward, done, info = env.step(action)

    if j < num_actions - 1:
        # ensure that the actions deterministically lead to the same recorded states
        state_playback = env.sim.get_state().flatten()
        # assert(np.all(np.equal(states[j + 1], state_playback)))
        err = np.linalg.norm(states[j + 1] - state_playback)

        if err > 0.01:
            print(
                f"[warning] playback diverged by {err:.2f} for ep {ep} at step {j}"
            )

    # Skip recording because the force sensor is not stable in
    # the beginning
    if j < cap_index:
        continue

    valid_index.append(j)

    if not args.no_proprio:
        if "robot0_gripper_qpos" in obs:
            gripper_states.append(obs["robot0_gripper_qpos"])

        joint_states.append(obs["robot0_joint_pos"])

        ee_states.append(
            np.hstack(
                (
                    obs["robot0_eef_pos"],
                    T.quat2axisangle(obs["robot0_eef_quat"]),
                )
            )
        )

    robot_states.append(env.get_robot_state_vector(obs))

    if args.use_camera_obs:
        if args.use_depth:
            agentview_depths.append(obs["agentview_depth"])
            eye_in_hand_depths.append(obs["robot0_eye_in_hand_depth"])

        agentview_images.append(obs["agentview_image"])
        eye_in_hand_images.append(obs["robot0_eye_in_hand_image"])
    else:
        env.render()
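
For context, here is a minimal sketch of how the recorded sequences could be realigned if the answer is yes, i.e. if the obs stored at step j is the one returned after env.step(actions[j]). The function name realign_pairs and the array layout are assumptions for illustration only, not the repo's actual storage format:

import numpy as np

def realign_pairs(observations: np.ndarray, actions: np.ndarray):
    """Recover (o_t, a_t) training pairs from post-step recordings.

    Assumes observations[j] is the obs returned by env.step(actions[j]),
    i.e. observations[j] == o_{j+1} and actions[j] == a_j.

    observations: (T, obs_dim) array of post-step observations.
    actions:      (T, act_dim) array of executed actions.
    """
    # o_t for t = 1..T-1 is observations[:-1] (the result of step t-1),
    # and the action taken *from* that state is actions[1:].
    obs_t = observations[:-1]
    act_t = actions[1:]
    return obs_t, act_t

Note that under this scheme the initial observation o_0 (from env.reset()) is not recoverable from the loop above unless it was stored separately before the first env.step call.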