Reconstruct Zivid frame from numpy ndarray #72
Comments
There are currently only two ways to create a
To me this sounds like a deficiency in the ROS wrapper, i.e. it should be possible to do hand-eye calibration in ROS with whatever data types ROS provides. @apartridge will likely know more.
> it should be possible to do HandEye calibration in ROS with whatever data types ROS provides

Yes, I agree. Maybe a solution would be to let the detect_feature_points function accept a np.ndarray instead of the zivid.PointCloud?
Unfortunately the Zivid ROS driver itself does not have hand-eye calibration. This would be some work to do and is not on the short-term roadmap.

In this case it sounds like @matthijsramlab is using both the ROS and Python API, and wants to do hand-eye calibration via the Python API, using data captured via ROS. That is not possible currently. To be able to do it, we would need to let the user make a PointCloud based on their own data. This is possible in C++ in API 1.x, but not in API 2.x. That feature is not available in the Python wrapper in API 1.x anyway. This is also not on our short-term roadmap as of now.

As a workaround, you could try to do hand-eye calibration using just Python (not involving the ROS driver at all), then disconnect from the camera in Python and re-connect to it in the ROS driver after the calibration is completed. It is a little messy, but probably doable.

A more hacky workaround is to make your own ZDF file with the point cloud data and load that from disk. Keep in mind that the ZDF format may change without notice in the future (and is changing some from API 1.x to 2.x), so it's not a good long-term solution.
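The pure-Python workaround above could look roughly like the sketch below. The `zivid.calibration` names (`detect_feature_points`, `HandEyeInput`, `Pose`, `calibrate_eye_in_hand`) are assumed from the zivid-python 2.x API and may differ in your installed version; the `robot_poses` argument is a hypothetical input you would fill from your robot controller. This requires the Zivid SDK and a connected camera, so it is a sketch rather than a tested implementation.

```python
import numpy as np

def calibrate_eye_in_hand_pure_python(robot_poses):
    """Sketch: hand-eye calibration entirely in Python (no ROS driver).

    robot_poses: list of 4x4 numpy arrays (base-to-flange transforms),
    one per robot capture position. API names assume zivid-python 2.x.
    """
    import zivid  # deferred import: needs the Zivid SDK and a camera

    app = zivid.Application()
    camera = app.connect_camera()
    settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])

    inputs = []
    for pose_matrix in robot_poses:
        # Capture at this robot pose and detect the calibration board
        with camera.capture(settings) as frame:
            detection = zivid.calibration.detect_feature_points(frame.point_cloud())
            if detection.valid():
                inputs.append(zivid.calibration.HandEyeInput(
                    zivid.calibration.Pose(np.asarray(pose_matrix)), detection))

    result = zivid.calibration.calibrate_eye_in_hand(inputs)
    camera.disconnect()  # free the camera so the ROS driver can reconnect
    return result.transform()  # 4x4 camera-to-flange transform
```

After this returns, the Python process no longer holds the camera, so starting the Zivid ROS driver afterwards should work as the comment suggests.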
Note that
> In this case it sounds like @matthijsramlab is using both the ROS and Python API, and wants to do hand-eye calibration via the Python API, using data captured via ROS.

This is indeed correct.
This would be a nice solution.
Any idea when this will be implemented?
This is currently not on our shortlist and we are fully booked for a while, sorry.
Yes, that would be great. There must be more users who are using ROS to control the Zivid (and need hand-eye calibration).
I need to convert a numpy.ndarray to a Zivid frame. The reason for this is that I am using ROS (Zivid driver), which publishes point clouds in a non-Zivid format (a PointCloud2 message), and I want to perform Zivid eye-in-hand calibration, which requires a Zivid point cloud object.
I am able to convert a PointCloud2 message to a numpy.ndarray, which gives the same result as running the Zivid PointCloud.to_array function. However, doing it the other way around is not possible at the moment. Or at least I am not able to do it...
Is there a possibility to implement this? In the meantime, is there a workaround I can use?