
How to keep the depth information of a rotating object the same at all times #13535

Open
zhanyaoaaaaaa opened this issue Nov 22, 2024 · 3 comments


@zhanyaoaaaaaa

Required Info

  • Camera Model: D400
  • Firmware Version: (not provided)
  • Operating System & Version: Windows (8.1/10)
  • Kernel Version (Linux Only): n/a
  • Platform: PC
  • SDK Version: (not provided)
  • Language: Python
  • Segment: others

Issue Description

When filming a rotating wind turbine, I found that the depth values of marked points on the rotating blades were different at every moment and varied periodically with the rotation speed. How should I capture the scene to eliminate this time-varying depth as much as possible? In theory, the depth should remain constant at every moment. What causes the depth to change? Is it because the camera's optical axis (i.e. its z-axis) is not parallel to the rotation axis of the turbine?
[image attachment: 1732284782342]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 22, 2024

Hi @zhanyaoaaaaaa It is normal for depth values to continuously fluctuate to a certain extent, even when the fluctuations are very small in value, because the depth image is being continuously updated.

You can reduce the fluctuations and stabilize the depth values by applying a Temporal post-processing filter and setting its filter smooth alpha value to '0.1' instead of its default value of '0.4'. This causes the depth image to update less frequently and so the depth values change less often.

In the RealSense Viewer tool the Temporal filter is already enabled by default, so you can easily test whether changing the Filter Smooth Alpha setting makes a difference:

  1. Go to Stereo Module > Post-Processing Filters

  2. Expand open the list of post-processing filters with the small arrow icon beside 'Post-Processing Filters'. Then expand open the sub-options of the Temporal filter.

  3. Go to the Filter Smooth Alpha sub-option and either use the slider to set it to 0.1 or click on the pencil icon to manually type in 0.1 with the keyboard.


It is unlikely though that it will be possible to have a depth value that is completely fixed and non-fluctuating.
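The effect of the Filter Smooth Alpha setting can be illustrated with a small simulation. This is not the SDK's exact implementation (the real temporal filter also has a delta threshold and a persistency control); it is a minimal exponential-moving-average sketch showing why alpha = 0.1 tracks new readings more slowly, and so fluctuates less, than the default 0.4:

```python
def temporal_smooth(readings, alpha):
    """Exponential moving average, the core idea of the temporal filter:
    smoothed = alpha * new_value + (1 - alpha) * previous_smoothed."""
    smoothed = readings[0]
    out = [smoothed]
    for depth in readings[1:]:
        smoothed = alpha * depth + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

# A noisy depth stream jittering around 2.00 m
noisy = [2.00, 2.05, 1.95, 2.06, 1.94, 2.04]

default = temporal_smooth(noisy, alpha=0.4)  # default alpha: follows the jitter
stable = temporal_smooth(noisy, alpha=0.1)   # 0.1: much flatter output

spread = lambda values: max(values) - min(values)
print(spread(default) > spread(stable))  # the lower alpha gives a smaller spread
```

In a pyrealsense2 script, the corresponding change would be creating a `rs.temporal_filter()`, calling `set_option(rs.option.filter_smooth_alpha, 0.1)` on it, and passing each depth frame through its `process()` method.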

@zhanyaoaaaaaa
Author

First of all, thank you very much for your reply. After receiving it, I tried to resolve this situation by applying a Temporal post-processing filter and setting its filter smooth alpha value to '0.1'.

However, after applying your method, I found that the results still had a significant deviation. Could you please explain specifically what caused this change in depth values?

Alternatively, when capturing the depth of a wall, I found that although I tried my best to make the optical axis of the D455 camera (i.e. the z-axis of the camera coordinate system) perpendicular to the wall, the depth values across the wall's plane are still not uniform, and the differences are significant. What could be the reason for this?

As shown in the figure, the background of the depth map is the wall, and it can be clearly seen that the color of the wall depth values is different. May I ask which post-processing method can be used to eliminate the error of the same plane but different depth values as much as possible?

Looking forward to your reply, thank you again.
[image attachment: 306c41e90cd53e7d3d91eff93af2185]

@MartyG-RealSense
Collaborator

You said that "depth information should remain constant at every moment". If the object remains at the same distance from the camera as it moves then in theory the distance should remain constant. In real-world conditions it may change though. This is because accuracy can be affected by lighting and reflections.

If the wind turbine has a reflective surface then as it moves, the way that light is reflected off the turbine blade could change as the blade alters position. If a reflection intensifies when the blade is at a particular angle then that will make it more difficult for the camera to accurately read depth information from it.

If the accuracy is being affected by reflections then placing a thin-film linear polarizing filter over the sensors on the front of the camera could greatly reduce glare from reflections and make reflective surfaces much more readable. Because any polarizing filter will work as long as it is linear, the filters can be purchased inexpensively from stores such as Amazon by searching for the term linear polarizing filter sheet.

If you are unable to place a filter over the camera then defining a Region of Interest (ROI) in the lower half of the depth image may help, as advised at the link below.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#use-sunlight-but-avoid-glare
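The link above concerns the camera's auto-exposure ROI, set in the Viewer or (in pyrealsense2) via `sensor.as_roi_sensor()`. As a rough software-side illustration of the same idea, assuming the depth frame has already been converted to a 2D list of metre values, restricting your analysis to the lower half of the image looks like this (a sketch, not the SDK's hardware ROI):

```python
def lower_half_roi(depth_image):
    """Return only the bottom half of a 2D depth image (rows x cols),
    so glare-corrupted pixels near the top do not skew statistics."""
    rows = len(depth_image)
    return depth_image[rows // 2:]

frame = [
    [0.0, 0.0, 0.0],   # glare near the top: invalid zero-depth pixels
    [0.0, 2.1, 2.0],
    [2.0, 2.0, 2.1],   # clean lower rows
    [2.1, 2.0, 2.0],
]

roi = lower_half_roi(frame)
valid = [d for row in roi for d in row if d > 0]  # drop zero (invalid) depth
print(sum(valid) / len(valid))  # mean depth computed over the lower half only
```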

Regarding the different colors on the depth image, these may not be wrong. By default the depth will be colored from light blue nearest to the camera and darker blue a little further away, then transition through green to yellow to orange to red (representing the furthest distance from the camera). So if the detail nearest the camera is light blue, mid-way to the wall is yellow-orange and the back wall is red, this colorization seems appropriate for the distances being represented.
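The colorization described above amounts to mapping each distance onto a color scale. Assuming a simple linear normalization (the Viewer's actual scheme is more elaborate, and its range is configurable), even a few centimetres of difference across a wall shifts the color index, which is why a flat wall can show visibly different shades without anything being wrong:

```python
def color_index(depth_m, min_m=0.3, max_m=4.0):
    """Map a depth in metres to an index 0..255 on the color scale
    (0 = nearest / light-blue end, 255 = farthest / red end)."""
    clamped = min(max(depth_m, min_m), max_m)
    return round(255 * (clamped - min_m) / (max_m - min_m))

# Two wall pixels differing by only 5 cm land on different color indices
print(color_index(2.00), color_index(2.05))
```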
