
Find Matches #11

Open
IddoBotzer opened this issue Feb 26, 2019 · 11 comments
Comments

@IddoBotzer

Hi,
I was able to run the net test and extract features from my images; now I'm looking to see the matching. Is the only way to do so to run the demo in the notebook?

If so, what do I need to change in order to run it on my own data? (I have depth images as well)

Thank you

@framanni

Hi,
same question!

Thank you

@kmyi
Member

kmyi commented Apr 1, 2019

Hi, the notebook demos are there to give a detailed example of what the input and output structures look like.
For example, as per the README, you can run

python run_lfnet.py --in_dir=images --out_dir=outputs

which is actually how we use this codebase these days.
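For anyone wondering what to do with the results of that command, here is a minimal sketch of loading them. This assumes the script writes one NumPy archive per image containing at least a `kpts` array (N×2 keypoint coordinates; this key appears later in this thread) and a `descs` array (N×D descriptors; the key name and archive layout are assumptions, so check your own output files). The sketch builds a dummy archive in memory so it runs standalone:

```python
import io
import numpy as np

# Simulate one saved output so the sketch is self-contained.
# In practice you would np.load() a file from your --out_dir instead.
buf = io.BytesIO()
np.savez(buf,
         kpts=np.random.rand(5, 2).astype(np.float32),    # x, y per keypoint
         descs=np.random.rand(5, 256).astype(np.float32)) # one descriptor per keypoint
buf.seek(0)

outs = np.load(buf)
kpts, descs = outs["kpts"], outs["descs"]
print(kpts.shape, descs.shape)  # one row per detected keypoint
```

From there, matching two images means comparing the `descs` arrays of their two output files.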

@framanni

framanni commented Apr 2, 2019

Hi, how is it possible to see the matching on my own data (after running the command above)?
Thanks

@framanni

In LIFT you provide a function to convert keypoints into OpenCV cv2.KeyPoint objects.
Is there any support for that here?

@abduallahmohamed

I have the same issue: after running the command above, how do I show the correspondences between the images?

@framanni

Has anyone found a way to solve this?

@GulyaevMaxim

> In LIFT you provide a function to convert keypoints into OpenCV cv2.KeyPoint objects.
> Is there any support for that here?

pts = outs['kpts']
pts = [cv2.KeyPoint(float(p[0]), float(p[1]), 1) for p in pts]
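Once you have keypoints and descriptors for two images, a simple way to get correspondences is mutual nearest-neighbour matching in descriptor space. The sketch below uses plain NumPy and assumes each image's output provides an N×D `descs` array (the key name follows this thread's `outs['kpts']` convention but is not confirmed by the source); with OpenCV you would typically use cv2.BFMatcher and visualize with cv2.drawMatches instead:

```python
import numpy as np

def mutual_nn_matches(descs1, descs2):
    """Return index pairs (i, j) where keypoint i in image 1 and keypoint j
    in image 2 are each other's nearest neighbour in descriptor space."""
    # Pairwise Euclidean distances, shape (N1, N2).
    d = np.linalg.norm(descs1[:, None, :] - descs2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)  # best match in image 2 for each keypoint in image 1
    nn21 = d.argmin(axis=0)  # best match in image 1 for each keypoint in image 2
    # Keep only pairs that agree in both directions.
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
```

The matched index pairs can then be drawn by connecting `kpts1[i]` to `kpts2[j]` on a side-by-side image.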

@zvadaszi

Hi,
I would like to compute the homography between two images using the keypoints returned by LF-Net (for example with OpenCV's findHomography).
So far I was able to extract the detected keypoints, extract the keypoint patches, and get the kpts2_corr list.
How do I filter for the best matches in order to discard unwanted correspondences?

Thank you.
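One common way to prune correspondences before estimating a homography is Lowe's ratio test: keep a match only when its best descriptor distance is clearly smaller than the second-best. This is a generic sketch, not an LF-Net-specific API; the function name and the `descs` arrays are illustrative:

```python
import numpy as np

def ratio_test_matches(descs1, descs2, ratio=0.8):
    """Return (i, j) pairs passing Lowe's ratio test from image 1 to image 2."""
    # Pairwise Euclidean distances, shape (N1, N2).
    d = np.linalg.norm(descs1[:, None, :] - descs2[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(descs1))
    # Keep matches whose best distance is well below the second best.
    keep = d[rows, best] < ratio * d[rows, second]
    return [(i, best[i]) for i in np.flatnonzero(keep)]

# The surviving pairs can then be passed to OpenCV, which rejects the
# remaining outliers with RANSAC (reprojection threshold in pixels):
#   H, mask = cv2.findHomography(kpts1[idx1], kpts2[idx2], cv2.RANSAC, 3.0)
```

Ambiguous keypoints (two near-equal candidates) are dropped by the test, which is usually what you want before RANSAC.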

@ApeCoding

ApeCoding commented Oct 28, 2019

> Hi,
> I was able to run the net test and extract features from my images; now I'm looking to see the matching. Is the only way to do so to run the demo in the notebook?
>
> If so, what do I need to change in order to run it on my own data? (I have depth images as well)

Hi,
I have the same question. Did you solve it, i.e. run the matching on your own data without the Jupyter demo?
Thank you!

@BruceWANGDi

@IddoBotzer
Hi,
I prepared to run the feature-extraction demo; my command is written below:
python run_lfnet.py --in_dir=1_1.jpg --out_dir=output
but an error occurred: "ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory". Could you give me your command example?
And another question:
Have you found any solution to run the matching demo "demo.ipynb" on your own data?

Looking forward to your reply,
Thank you.

@kmyi
Member

kmyi commented Dec 16, 2020

Hi. The error just means that cuBLAS is not installed properly. This is an environment configuration issue, not a problem with the code.
