
Eigen Matrix3d issue in #4

Open · mpottinger opened this issue Jun 23, 2019 · 5 comments

mpottinger commented Jun 23, 2019

Hello, I am trying to adapt the register_convert(depth_frame, color_frame) code to my own program to register depth to color, without ROS.

However, I am running into the following error when the Matrix3d rot is being initialized:
/usr/include/eigen3/Eigen/src/Core/DenseCoeffsBase.h:365: Eigen::DenseCoeffsBase<Derived, 1>::Scalar& Eigen::DenseCoeffsBase<Derived, 1>::operator()(Eigen::Index, Eigen::Index) [with Derived = Eigen::Matrix<double, 3, 3>; Eigen::DenseCoeffsBase<Derived, 1>::Scalar = double; Eigen::Index = long int]: Assertion `row >= 0 && row < rows() && col >= 0 && col < cols()' failed.

The error occurs at this point in register.hpp when i or j reaches 3 (snippet below): Eigen::Matrix3d is only 3x3, so it isn't large enough to hold all 16 values of the ST::Matrix4 pose.

I am very new to this, and I am very confused about how this code has worked for you and not for me. If I change the conditions to i < 3 and j < 3 instead, I don't get an error. I haven't tried viewing the resulting depth data from that yet, but I will. It is still puzzling why it does not work as originally written.

Eigen::Matrix3d rot;
const ST::Matrix4 pose = depth_frame.visibleCameraPoseInDepthCoordinateFrame();
for (size_t j = 0; j < 4; j++) {
    std::cout << "   j:" << j << "\n";
    for (size_t i = 0; i < 4; i++) {
        rot(j, i) = pose.atRowCol(j, i);  // Assertion fires here once i or j reaches 3
    }
}
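For reference, a minimal sketch of the fix described above, assuming the same types as in the snippet: only the top-left 3x3 block of the 4x4 pose (presumably the rotation part of the homogeneous transform) fits in an Eigen::Matrix3d, so both loops must stop at index 2:

Eigen::Matrix3d rot;
const ST::Matrix4 pose = depth_frame.visibleCameraPoseInDepthCoordinateFrame();
for (size_t j = 0; j < 3; j++) {          // rows 0..2 only
    for (size_t i = 0; i < 3; i++) {      // cols 0..2 only
        rot(j, i) = pose.atRowCol(j, i);  // stays inside the 3x3 matrix
    }
}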
mpottinger (Author) commented

Changing the loops to stop at 3 instead of 4 worked for me. Depth is aligned nicely to RGB.

Don't know why I had to make the change, but it works anyway.

It is a little slow, though. I will try caching the results of the calculations for a given xyz value in a large array to get a higher framerate.

Thanks for publishing this great work! I would have been banging my head against the wall otherwise!!

mpottinger (Author) commented
FYI, caching works. I just initialize a very large static array with precalculated values and get very nice real-time framerates. It is a huge lookup table, though, gigabytes in size, but it works!

I will be investigating whether a neural net can do better by learning from the lookup table data.

mpottinger (Author) commented
In case anyone needs the same thing in the future: since I had a bit of a struggle, here is how I solved it for my use case (real-time, low-latency alignment).

There is no need to cache the transformed Z values; the difference is not that large anyway. It is the x/y coordinates that change a lot, so I just store those in an array to look up for each frame.

No more gigabyte-sized arrays, just two uint16 arrays: one mapping depth x to rgb x and one mapping depth y to rgb y. It takes a few MB, and voilà: real-time depth-to-RGB alignment.
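For anyone replicating this, a rough sketch of what that two-array scheme might look like (variable names and the 640x480 depth size are hypothetical; the maps would be filled once from the full register_convert projection):

#include <cstdint>
#include <vector>

// Hypothetical depth resolution; substitute your sensor's actual size.
constexpr int DEPTH_W = 640, DEPTH_H = 480;

// Precomputed once: for each depth pixel, its target RGB coordinate.
std::vector<uint16_t> map_x(DEPTH_W * DEPTH_H);  // depth (x, y) -> rgb x
std::vector<uint16_t> map_y(DEPTH_W * DEPTH_H);  // depth (x, y) -> rgb y

// Per frame: move each depth sample to its cached RGB location.
// Z values are copied unchanged, per the observation above; 'registered'
// must be zeroed and sized rgb_w * rgb_h by the caller.
void remap_depth(const uint16_t* depth, uint16_t* registered,
                 int rgb_w, int rgb_h) {
    for (int y = 0; y < DEPTH_H; ++y) {
        for (int x = 0; x < DEPTH_W; ++x) {
            const int idx = y * DEPTH_W + x;
            const uint16_t d = depth[idx];
            if (d == 0) continue;                 // skip invalid depth
            const int rx = map_x[idx], ry = map_y[idx];
            if (rx < rgb_w && ry < rgb_h)
                registered[ry * rgb_w + rx] = d;  // uncorrected Z, per above
        }
    }
}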

mrakkumar commented Jul 29, 2019

Hi! I've been trying to adapt the code to work without ROS myself for a while now, and I came across this repo via the Occipital forums. When I pass my depth and visible frames into the register_convert function, however, I only get junk values when I print the output vector using cout. Are the arguments to the function simply the depth and visible frames from the CaptureSessionSample object? Or is there something else I'm missing? Thanks!

Update: It turns out the issue was my use of uint8_t for the running variable in the for loop while printing. Replacing it with a plain int fixed it. I do need some help understanding the output, though. I noticed that the output vector is twice the size of the input. If I have a pixel at (x, y) in the visible frame, how do I find the corresponding location in the depth frame using this output? Also, the depth image resulting from this has pixel intensity values going from 0 to 255, as expected from an image. Is there any way to extract the depth values? Thanks!
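For context, the uint8_t symptom is a standard iostream pitfall, not anything specific to this code: uint8_t is a typedef of unsigned char, so std::cout prints it as a character rather than a number. A minimal illustration:

#include <cstdint>
#include <iostream>

int main() {
    for (uint8_t i = 65; i < 68; ++i)
        std::cout << i << ' ';   // prints "A B C" -- chars, not numbers
    std::cout << '\n';
    for (int i = 65; i < 68; ++i)
        std::cout << i << ' ';   // prints "65 66 67" as intended
    std::cout << '\n';
    // A uint8_t index also wraps at 255, so it can loop forever
    // over containers larger than 256 elements.
}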

mpottinger (Author) commented Jul 31, 2019

I will have to take a look at my code when I have time.

The pixel dimensions of the depth and RGB images are going to be different because the cameras' fields of view differ, so when the depth is overlaid on the RGB it covers a smaller area centered on the middle of the RGB image. You don't get depth for the entire RGB image.

The new depth image will look like a picture frame with invalid/zero depth values surrounding a smaller area in the centre.

I modified the code to keep track of the min and max x/y values, which will give you the dimensions of the depth region within the RGB image.
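A sketch of what that min/max bookkeeping might look like (hypothetical names; rgb_x/rgb_y stand for the RGB target pixel computed during registration, not the author's actual variables):

#include <algorithm>
#include <limits>

// Bounding box of valid projected depth pixels inside the RGB image.
int min_x = std::numeric_limits<int>::max(), max_x = -1;
int min_y = std::numeric_limits<int>::max(), max_y = -1;

// Call once per valid depth sample, after computing its RGB target.
void track_extent(int rgb_x, int rgb_y) {
    min_x = std::min(min_x, rgb_x);  max_x = std::max(max_x, rgb_x);
    min_y = std::min(min_y, rgb_y);  max_y = std::max(max_y, rgb_y);
}

// After a frame, (max_x - min_x + 1) x (max_y - min_y + 1) is the size of
// the depth-covered rectangle centered in the RGB image.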

When I have time, I will share details.
