Reconstruct Zivid frame from numpy ndarray #72

Open
matthijsramlab opened this issue Jun 15, 2020 · 8 comments

@matthijsramlab commented Jun 15, 2020

I need to convert a numpy.ndarray to a Zivid frame. The reason is that I am using ROS (with the Zivid driver), which publishes point clouds in a non-Zivid format (sensor_msgs/PointCloud2 messages), and I want to perform Zivid eye-in-hand calibration, which requires a Zivid point cloud object.

I am able to convert a PointCloud2 message to a numpy.ndarray, with the same result as running Zivid's PointCloud.to_array function. However, going the other way around is not possible at the moment, or at least I have not managed to do it.
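For reference, my conversion looks roughly like this (a sketch using ROS1's sensor_msgs.point_cloud2 helper; reshaping by height/width assumes the driver publishes an organized cloud):

```python
import numpy as np
from sensor_msgs import point_cloud2

def pointcloud2_to_xyz(msg):
    """Convert a sensor_msgs/PointCloud2 into an organized HxWx3 float32 array."""
    points = point_cloud2.read_points(msg, field_names=("x", "y", "z"), skip_nans=False)
    xyz = np.array(list(points), dtype=np.float32)
    return xyz.reshape(msg.height, msg.width, 3)
```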

Would it be possible to implement this? In the meantime, is there a workaround I can use?

@eskaur (Member) commented Jun 15, 2020

There are currently only two ways to create a Zivid::PointCloud (sketched below):

  • Capturing with a Zivid camera.
  • Loading a ZDF file.
    (And a Zivid::Frame is merely a PointCloud plus some meta-data.)
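In zivid-python those two paths look roughly like this (a minimal sketch; capture settings are omitted and method names can differ slightly between SDK versions):

```python
import zivid

app = zivid.Application()

# Path 1: capture with a connected Zivid camera
camera = app.connect_camera()
frame = camera.capture()  # settings configuration omitted for brevity

# Path 2: load a previously saved ZDF file from disk
frame_from_disk = zivid.Frame("capture.zdf")

# Either way, the point cloud is obtained from the frame
point_cloud = frame_from_disk.get_point_cloud()
```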

To me this sounds like a deficiency in the ROS wrapper, i.e. it should be possible to do HandEye calibration in ROS with whatever data types ROS provides. @apartridge will likely know more.

@matthijsramlab (Author) commented:

> it should be possible to do HandEye calibration in ROS with whatever data types ROS provides

Yes, I agree. Maybe a solution would be to let the detect_feature_points function accept an np.ndarray instead of a zivid.PointCloud?

@apartridge (Contributor) commented:

Unfortunately, the Zivid ROS driver itself does not have hand-eye calibration. That would take some work and is not on the short-term roadmap.

In this case it sounds like @matthijsramlab is using both the ROS and Python APIs, and wants to do hand-eye calibration via the Python API using data captured via ROS. That is not currently possible. To enable it, we would need to let the user construct a PointCloud from their own data. This is possible in C++ in API 1.x, but not in API 2.x, and the feature is not available in the Python wrapper in API 1.x anyway. This is also not on our short-term roadmap as of now.

As a workaround, you could do the hand-eye calibration in pure Python (not involving the ROS driver at all), then disconnect from the camera in Python and re-connect to it from the ROS driver once the calibration is completed. It is a little messy, but probably doable; see the sketch below.
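A rough sketch of that flow, assuming the 1.x Python wrapper's zivid.hand_eye module (the exact class and method names, and how you obtain robot poses, are assumptions here):

```python
import zivid
import zivid.hand_eye

app = zivid.Application()
camera = app.connect_camera()

robot_poses = []  # fill with 4x4 numpy pose matrices read from your robot
inputs = []
for robot_pose in robot_poses:
    frame = camera.capture()  # move the robot to a new pose between captures
    detection = zivid.hand_eye.detect_feature_points(frame.get_point_cloud())
    inputs.append(
        zivid.hand_eye.CalibrationInput(zivid.hand_eye.Pose(robot_pose), detection)
    )

calibration = zivid.hand_eye.calibrate_eye_in_hand(inputs)
print(calibration)

# Release the camera so the ROS driver can connect to it afterwards
camera.disconnect()  # or camera.release(), depending on wrapper version
```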

A hackier workaround is to write your own ZDF file with the point cloud data and load that from disk. Keep in mind that the ZDF format may change without notice in the future (and is already changing somewhat from API 1.x to 2.x), so it is not a good long-term solution.
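If you want to try it anyway, ZDF files in SDK 1.x are HDF5-based containers, so you can inspect an existing capture with, for example, h5py to see the layout you would need to reproduce (the internal structure is undocumented and, as said, subject to change):

```python
import h5py

# Inspect an existing ZDF file captured with SDK 1.x.
# The internal structure is undocumented and may change between SDK versions.
with h5py.File("capture.zdf", "r") as zdf:
    zdf.visit(print)  # print every group/dataset path in the file
```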

@eskaur (Member) commented Jun 16, 2020

Note that calibrateEyeInHand() does not require a PointCloud as input; it merely requires DetectionResults and Poses. As long as the feature point detection (what normally happens in detectFeaturePoints()) can be done in Python with OpenCV, we only need to add a way for the user to create their own DetectionResult from a list of 3D points. That is much lower-hanging fruit in terms of SDK changes.
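For illustration, the OpenCV side could look roughly like this (a sketch assuming a standard checkerboard target and an organized HxWx3 xyz array from the point cloud; the real detectFeaturePoints() is more involved):

```python
import cv2
import numpy as np

def detect_feature_points_3d(rgb_image, xyz, pattern_size=(7, 6)):
    """Find checkerboard corners in the 2D image and look up their 3D coordinates.

    rgb_image: HxWx3 uint8 color image
    xyz: HxWx3 float array of point coordinates (NaN where there is no data)
    """
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # Round sub-pixel corners to the nearest pixel and look up their 3D points.
    pixels = np.round(corners.reshape(-1, 2)).astype(int)
    points_3d = xyz[pixels[:, 1], pixels[:, 0]]  # index by (row, col) = (y, x)
    return points_3d[~np.isnan(points_3d).any(axis=1)]
```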

@matthijsramlab (Author) commented:

> In this case it sounds like @matthijsramlab is using both the ROS and Python APIs, and wants to do hand-eye calibration via the Python API using data captured via ROS.

This is indeed correct.

> Note that calibrateEyeInHand() does not require a PointCloud as input; it merely requires DetectionResults and Poses. As long as the feature point detection (what normally happens in detectFeaturePoints()) can be done in Python with OpenCV, we only need to add a way for the user to create their own DetectionResult from a list of 3D points. That is much lower-hanging fruit in terms of SDK changes.

This would be a nice solution.

@matthijsramlab (Author) commented:

Any idea when this will be implemented?

@eskaur (Member) commented Jun 22, 2020

This is currently not on our shortlist and we are fully booked for a while, sorry.
I can bring it up internally to see what we can do to change that.

@matthijsramlab (Author) commented:

Yes, that would be great. There must be more users who are using ROS to control the Zivid camera (and who need eye-in-hand calibration).
