Vision System Depth #124

Closed
RobaczeQ opened this issue Sep 12, 2022 · 3 comments

Comments

@RobaczeQ

Summary

As stated in #88, it is possible to get RGB data but not depth data using OpenCV and GStreamer, and #49 suggests using ROS because there are no Python bindings for the depth data. Is there a C++ binding for accessing depth data? It doesn't need to go through OpenCV; if MATLAB and ROS can get depth data, it shouldn't be a problem to get it in C++, right?
Also, there is an issue on ros_kortex_vision (Kinovarobotics/ros_kortex_vision#1) where @VitaliyKhomko shows really nice examples of point-cloud generation, distance-based thresholding, and distance measurement, probably using Intel® RealSense™ and the Kinova camera (or maybe not the Kinova camera? I can see a Kinova camera in the pictures, so did he have two of them, or did he unplug the Kinova camera and connect it with a USB cable?). I tried the Intel® RealSense™ D400 Series Custom Calibration tool, but it doesn't find the camera over Ethernet. Is it possible to make that code (he works for Kinova) available to the public or by email?

Use case

Spreading use cases of Kinova; the C++ examples don't even show how to get data from the camera, only how to read its parameters.

Also, hi @felixmaisonneuve, I would appreciate another look at #77; I reopened it two weeks ago and haven't received an answer.

Regards,
Matt

@felixmaisonneuve
Contributor

Hi @RobaczeQ,

The color and depth streams are served at rtsp://<robot_ip>/color and rtsp://<robot_ip>/depth. As per the "Working with camera streams using GStreamer" section in the user guide, it is recommended to use GStreamer to handle the camera streams.
Everywhere I look (e.g. ros_kortex_vision), the depth image always comes from the depth stream fetched with GStreamer.
I can point you to the ros_kortex_vision code (in C++) that does all of that: it fetches the data from the depth stream using GStreamer, then converts it and publishes it in a format RViz recognizes so the data can be displayed. This is all done in this file: https://github.com/Kinovarobotics/ros_kortex_vision/blob/master/src/vision.cpp
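If it helps, here is a rough sketch of what a ROS-free version could look like in plain C++ with GStreamer's appsink. This is not the ros_kortex_vision code; the robot IP and the depayloader in the pipeline string are assumptions you would need to verify against vision.cpp:

```cpp
// Minimal sketch (not the ros_kortex_vision code): pulls frames from the depth
// RTSP stream with GStreamer's appsink, without any ROS dependency.
// The robot IP and the rtpgstdepay element are assumptions -- check vision.cpp
// for the exact pipeline and caps the vision module actually uses.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    const char *pipeline_desc =
        "rtspsrc location=rtsp://192.168.1.10/depth latency=30 "
        "! rtpgstdepay ! appsink name=depth_sink max-buffers=1 drop=true";

    GError *error = nullptr;
    GstElement *pipeline = gst_parse_launch(pipeline_desc, &error);
    if (error) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "depth_sink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Pull a handful of samples and print their size; real code would parse
    // the caps and reinterpret the buffer as 16-bit depth values.
    for (int i = 0; i < 10; ++i) {
        GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample) break;  // EOS or error
        GstBuffer *buffer = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
            g_print("depth frame %d: %" G_GSIZE_FORMAT " bytes\n", i, map.size);
            gst_buffer_unmap(buffer, &map);
        }
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}
```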

Unfortunately, I have very limited knowledge of the vision module; I have hardly spent any time on it. At some point (when time allows) I will probably dive into this topic and go through the ros_kortex_vision issues.

Vitaliy's examples use a Kinova camera (which is an Intel RealSense camera). He used the same depth sensor stream. For his point cloud example, he also used the librealsense library.
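To give you an idea of what that part looks like, here is a minimal librealsense2 sketch (not Vitaliy's actual code) for distance measurement and point-cloud generation. It assumes the camera is reachable as a regular RealSense device, e.g. over USB; it will not work as-is through the arm's Ethernet link:

```cpp
// Minimal librealsense2 sketch: read a depth frame, measure the distance at the
// centre pixel, and build a point cloud with rs2::pointcloud.
// Assumes a RealSense device is directly visible to the host (e.g. USB).
#include <librealsense2/rs.hpp>
#include <iostream>

int main() try {
    rs2::pipeline pipe;
    pipe.start();  // default streaming profile

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Distance measurement: depth at the centre pixel, in metres.
    float d = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);
    std::cout << "centre distance: " << d << " m\n";

    // Point cloud: one 3D vertex per depth pixel.
    rs2::pointcloud pc;
    rs2::points points = pc.calculate(depth);
    const rs2::vertex *v = points.get_vertices();
    std::cout << "point cloud has " << points.size() << " vertices, first vertex: ("
              << v[0].x << ", " << v[0].y << ", " << v[0].z << ")\n";
    return 0;
} catch (const rs2::error &e) {
    std::cerr << "RealSense error: " << e.what() << std::endl;
    return 1;
}
```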

I have access to his code; I will find out whether I can share his work and get back to you on this.

I will also get back to you on the other issue. I do not have the background to help you with the CoggingFeedforward algorithm; I went through the code but didn't understand much. The resource person for this is on vacation and will be back next week. I will update the issue as soon as I have an answer.

Regards,
Felix

@RobaczeQ
Author

https://github.com/Kinovarobotics/ros_kortex_vision/blob/master/src/vision.cpp
Yes, I saw that file; it was the source of my confidence that this should work from C++ (MATLAB only ships a compiled imaq adapter, so ROS was where I could see what's actually inside). I was a little intimidated by how much there is to set up for the pipeline and the conversion to ROS data, but if I have to do most of those things either way, then I will try to adapt that code so it runs without ROS.

But did he use it while it was connected to the Kinova arm (via Ethernet), or just over USB to the vision module? If it was still connected to another arm that wasn't in those pictures, then I would probably try the Intel RealSense library first instead of GStreamer.

I have some idea of how CoggingFeedforward should work, but without access to the code or the paper it relies on, it would be really hard to recreate. Also, there isn't any command in the documentation besides setting the mode to the cogging mode that applies cogging compensation with calibrated parameters, so the parameters are probably hard-coded and not public.

Thanks for your help :)

@felixmaisonneuve
Contributor

Vitaliy used a camera mounted on a second arm that was not in the pictures, and he was connected to that arm via a regular Ethernet cable.
