
Update to use the LightGlue-ONNX fused model #23

Open
panda7281 opened this issue Oct 31, 2023 · 7 comments
Comments

@panda7281

https://github.com/fabio-sim/LightGlue-ONNX/releases/tag/v1.0.0
Recently LightGlue-ONNX released their fused model, which improves performance substantially (~2x faster in my tests) but changes the format of the output tensors, breaking the current code:

- `output_tensors[2]` now has shape `(matchnum, 2)`, and the pairs are already computed, so you no longer need to compute them yourself.
- `output_tensors[3]` now has shape `(matchnum,)` and holds the matching score of each match.

The following simple hack should do the trick:

std::vector<int64_t> kpts0_Shape = output_tensors[0].GetTensorTypeAndShapeInfo().GetShape();
int64_t* kpts0 = output_tensors[0].GetTensorMutableData<int64_t>();
// In Python this is an array of shape (batch = 1, kpts_num, 2), so the flattened length in C++ is kpts_num * 2
printf("[RESULT INFO] kpts0 Shape : (%lld , %lld , %lld)\n", kpts0_Shape[0], kpts0_Shape[1], kpts0_Shape[2]);

std::vector<int64_t> kpts1_Shape = output_tensors[1].GetTensorTypeAndShapeInfo().GetShape();
int64_t* kpts1 = output_tensors[1].GetTensorMutableData<int64_t>();
printf("[RESULT INFO] kpts1 Shape : (%lld , %lld , %lld)\n", kpts1_Shape[0], kpts1_Shape[1], kpts1_Shape[2]);

// matches: (match, [0, 1])
std::vector<int64_t> matches_Shape = output_tensors[2].GetTensorTypeAndShapeInfo().GetShape();
int64_t* matches0 = output_tensors[2].GetTensorMutableData<int64_t>();
int match_Counts = static_cast<int>(matches_Shape[0]);
printf("[RESULT INFO] matches0 Shape : (%lld , %lld)\n", matches_Shape[0], matches_Shape[1]);

// match scores: (score)
std::vector<int64_t> mscore_Shape = output_tensors[3].GetTensorTypeAndShapeInfo().GetShape();
float* mscores = output_tensors[3].GetTensorMutableData<float>();

// Process kpts0 and kpts1
std::vector<cv::Point2f> kpts0_f, kpts1_f;
for (int i = 0; i < kpts0_Shape[1] * 2; i += 2)
{
    kpts0_f.emplace_back(cv::Point2f(
        (kpts0[i] + 0.5) / scales[0] - 0.5, (kpts0[i + 1] + 0.5) / scales[0] - 0.5));
}
for (int i = 0; i < kpts1_Shape[1] * 2; i += 2)
{
    kpts1_f.emplace_back(cv::Point2f(
        (kpts1[i] + 0.5) / scales[1] - 0.5, (kpts1[i + 1] + 0.5) / scales[1] - 0.5)
    );
}

std::set<std::pair<int, int> > matches;
std::vector<cv::Point2f> m_kpts0, m_kpts1;
for (int i = 0; i < matches_Shape[0] * 2; i += 2) {
    // matches0 is row-major: each pair is (index into kpts0, index into kpts1)
    if (mscores[i / 2] > cfg.threshold)
        matches.insert(std::make_pair(matches0[i], matches0[i + 1]));
}

std::cout << "[RESULT INFO] matches Size : " << matches.size() << std::endl;

for (const auto& match : matches) {
    m_kpts0.emplace_back(kpts0_f[match.first]);
    m_kpts1.emplace_back(kpts1_f[match.second]);
}

keypoints_result.first = m_kpts0;
keypoints_result.second = m_kpts1;

std::cout << "[INFO] Postprocessing operation completed successfully" << std::endl;

Maybe you could add a flag for the new fused model. Thanks!

@OroChippw
Owner

OroChippw commented Nov 6, 2023

This sounds very good. Since I haven't paid attention to LightGlue-related developments for a while, I will upgrade and maintain the code in this repository going forward. ❤️

@letterso

I have modified the project to adapt to v1.0.0 (Fused LightGlue-ONNX) and tested it on Linux. I hope it helps anyone who wants to use LightGlue on Linux: LightGlue-OnnxRunner

@OroChippw
Owner

Thank you for your contribution. Would you be willing to open a pull request? After I review it and confirm there are no problems, I will merge it into the master branch and add you to the contributors. ❤️

@letterso

I don't have a Windows environment, so I deleted a lot of code that isn't used on Linux. To avoid making too many changes at once, I need some time to clean up the code before opening a pull request. I will do this soon.

@wooAo

wooAo commented Jan 5, 2024

> I have modified the project to adapt to v1.0.0 (Fused LightGlue-ONNX) and tested it on Linux. I hope it helps anyone who wants to use LightGlue on Linux: LightGlue-OnnxRunner

Really helpful, thanks!

@panda7281
Author

> Thank you for your contribution. Would you be willing to open a pull request? After I review it and confirm there are no problems, I will merge it into the master branch and add you to the contributors. ❤️

Sorry, I'm pretty new to GitHub and don't know how to properly submit a pull request; maybe you can merge this into your code? I'll mark this as closed.
By the way, I think you should add a virtual destructor to BaseFeatureMatchOnnxRunner, or derived classes won't be destructed properly.

@OroChippw
Owner

> Sorry, I'm pretty new to GitHub and don't know how to properly submit a pull request; maybe you can merge this into your code? I'll mark this as closed. By the way, I think you should add a virtual destructor to BaseFeatureMatchOnnxRunner, or derived classes won't be destructed properly.

Thank you for your contribution and for pointing out the bug with the destructor. I will review your code, merge it, and update the repository as soon as possible. I hope this issue can stay open so that more people who need help can see it. Thanks again ❤️

@OroChippw OroChippw reopened this Jan 8, 2024
@OroChippw OroChippw pinned this issue Jan 8, 2024

4 participants