About input depth image #86
Comments
Hi, why can't the depth be used to correctly determine the grasp action? There are some grasp detection methods based on RGB, but they are not that accurate.
Hi, thanks for the reply, I have solved this problem. However, I suspect the detected grasp translation and rotation have undergone mirror flipping. I used the camera's view matrix to transform the translation and rotation from the camera coordinate system into the world coordinate system, but it did not work. Do you have any suggestions?
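
For reference, a minimal sketch of the camera-to-world conversion described above, assuming the 4x4 matrix from `gym.get_camera_view_matrix` maps world coordinates to camera coordinates in the column-vector convention (some IsaacGym builds return it transposed, with the translation in the last row, in which case transpose it first):

```python
import numpy as np

def cam_to_world(t_cam, R_cam, view_matrix):
    """Transform a grasp pose from the camera frame to the world frame.

    t_cam:       (3,) translation in the camera frame
    R_cam:       (3, 3) rotation in the camera frame
    view_matrix: 4x4 world-to-camera matrix (e.g. from gym.get_camera_view_matrix),
                 assumed column-vector convention here
    """
    T_wc = np.asarray(view_matrix, dtype=np.float64)   # world -> camera
    T_cw = np.linalg.inv(T_wc)                         # camera -> world
    R_cw, t_cw = T_cw[:3, :3], T_cw[:3, 3]

    t_world = R_cw @ np.asarray(t_cam) + t_cw
    R_world = R_cw @ np.asarray(R_cam)
    return t_world, R_world
```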
Have you solved this mirror problem? I have the same confusion.
By taking the opposite values ([-x, -y, -z]) of the translation and rotation I obtained, I can achieve relatively accurate results (in IsaacGym).
On what basis did you decide to take the opposite values of the translation and rotation?
Exhaustive search. =_= |
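
A minimal sketch of the sign-flip workaround described in the exchange above. The thread does not specify exactly how the rotation was "flipped", so negating Euler angles is only one plausible reading; treat this as an empirical fix, not a principled frame conversion:

```python
import numpy as np

def apply_mirror_fix(t, euler_rpy):
    """Empirical workaround from the thread: take the opposite values of
    the grasp translation and rotation before executing in IsaacGym.

    t:         (3,) grasp translation, negated element-wise to [-x, -y, -z]
    euler_rpy: (3,) grasp rotation as Euler angles, negated the same way
               (one plausible reading of "opposite values of the rotation")
    """
    return -np.asarray(t), -np.asarray(euler_rpy)
```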
Hello,
I obtain RGB and depth images in IsaacGym, where the depth images are stored in uint16 format. The depth data obtained from IsaacGym cannot be used to correctly determine the grasp action. Is it possible to achieve a correct grasp without using the depth images from a RealSense camera?
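
For anyone hitting the same issue, a minimal sketch of one way to preprocess the IsaacGym depth buffer before handing it to the detector. It assumes the detector expects RealSense-style metric depth (uint16 millimeters) and that the raw `gymapi.IMAGE_DEPTH` buffer is float32 with negative distances and `-inf` where no geometry was hit; both are assumptions about this particular setup, not guarantees:

```python
import numpy as np

def isaacgym_depth_to_uint16_mm(depth_raw, max_depth_m=2.0):
    """Convert a raw IsaacGym depth buffer to RealSense-style uint16 millimeters."""
    depth_m = -np.asarray(depth_raw, dtype=np.float32)   # make distances positive
    depth_m[~np.isfinite(depth_m)] = 0.0                 # drop -inf (no geometry hit)
    depth_m = np.clip(depth_m, 0.0, max_depth_m)         # clamp far background
    return (depth_m * 1000.0).astype(np.uint16)          # meters -> millimeters
```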

