
Training MagicPoint on MS-COCO & how did you design Homographic Adaptation ? #90

Open
ghost opened this issue Sep 30, 2022 · 9 comments

Comments

@ghost

ghost commented Sep 30, 2022

Hi, @eric-yyjau @saunair
Thank you for your great work.

Question 1:
I was not able to find a step in your repository for training MagicPoint on MS-COCO, like the one in rpautrat's code (https://github.com/rpautrat/SuperPoint).

If you forgot to mention it and it is missing from your GitHub, could you reply with the procedure, or with the PyTorch code for it?

Question 2:
I was not able to understand the following process.

def combine_heatmap(heatmap, inv_homographies, mask_2D, device="cpu"):
    ## zero out invalid regions with mask_2D
    heatmap = heatmap * mask_2D

    ## warp each homography view back to the original image frame
    heatmap = inv_warp_image_batch(
        heatmap, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )
    mask_2D = inv_warp_image_batch(
        mask_2D, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )

    ## average over views, normalized by the per-pixel number of valid views
    heatmap = torch.sum(heatmap, dim=0)
    mask_2D = torch.sum(mask_2D, dim=0)
    return heatmap / mask_2D

This is an average over the warped heatmaps, not a logical OR (union).
I believe it would be better to combine them with an OR-style operation, e.g., a per-pixel maximum.
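A toy illustration of the difference (a minimal NumPy sketch, not this repo's code): a keypoint detected strongly in only one of several homography views gets diluted by the mean, while a per-pixel max keeps it at full strength.

```python
import numpy as np

# Three "warped-back" heatmaps for the same 1x4 image row.
# A keypoint at index 2 is detected strongly in only one of the
# three homography views; the other two views miss it.
heatmaps = np.array([
    [0.0, 0.0, 0.9, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
masks = np.ones_like(heatmaps)  # every pixel valid in every view here

# Averaging (what combine_heatmap does): the response is diluted
# to about 0.3, which may fall below a typical detection threshold.
avg = (heatmaps * masks).sum(axis=0) / masks.sum(axis=0)

# Max / OR-style aggregation (what the comment proposes): the
# response survives at its original strength of 0.9.
mx = heatmaps.max(axis=0)
```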

@Taogonglin

Hello, I think you should read the README.md carefully; it has the step you mentioned. And I have a question: for step 1, did you spend 2 days training? My GPU load is only about 20%, and training takes a long time.

@ghost
Author

ghost commented Oct 13, 2022

@Taogonglin

> Hello, I think you should read the README.md carefully; it has the step you mentioned.

Thank you for replying to us.
But where does the README.md mention training MagicPoint on MS-COCO? I could not find that.
Could you quote the relevant text from README.md?

> And I have a question: for step 1, did you spend 2 days training? My GPU load is only about 20%, and training takes a long time.

In my case, it took almost one day.

@ghost
Author

ghost commented Oct 13, 2022

@Taogonglin
If you don't mind, could you tell me how to train MagicPoint on MS-COCO, or share the code?

@Taogonglin

> @Taogonglin If you don't mind, could you tell me how to train MagicPoint on MS-COCO, or share the code?

Sorry, I misunderstood. It's true that there is no procedure or code here for training MagicPoint on MS-COCO. But I think that step is not necessary; maybe you can read the paper to see what MagicPoint's role is. I think it is just used to generate pseudo ground-truth keypoints on COCO, which are then used to train SuperPoint. I'm just an undergraduate student, so I may be wrong.

@ghost
Author

ghost commented Oct 13, 2022

> Sorry, I misunderstood. It's true that there is no procedure or code here for training MagicPoint on MS-COCO. But I think that step is not necessary; maybe you can read the paper to see what MagicPoint's role is. I think it is just used to generate pseudo ground-truth keypoints on COCO, which are then used to train SuperPoint. I'm just an undergraduate student, so I may be wrong.

Never mind. Thank you for the quick reply!
On page 6 and in Figure 7 of the paper, the authors say: "We repeat the Homographic Adaptation a second time, using the resulting model trained from the first round of Homographic Adaptation" (I think this corresponds to step 2 in this GitHub repository).

Namely, repeating Homographic Adaptation is important. The distribution of MagicPoint detections becomes wider by repeating Homographic Adaptation and training MagicPoint on new images. And Figure 7 shows that the repeatability of the detected points improves empirically.
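The two-round procedure described in the paper can be sketched as a simple loop. This is a hypothetical outline with placeholder function names, not this repository's actual API:

```python
# Placeholder: aggregate detections over many random homographies of
# each image to produce pseudo ground-truth labels (Homographic
# Adaptation). Real implementations warp, detect, and combine heatmaps.
def homographic_adaptation(model, images):
    return [f"labels_for_{img}" for img in images]

# Placeholder: retrain the detector on the images with the new labels.
def train(model, images, labels):
    return f"{model}+round"

model = "magicpoint_synthetic"   # MagicPoint trained on synthetic shapes
images = ["coco_0", "coco_1"]

# The paper repeats this loop twice: each round's model generates
# labels for the next round of training, widening the detection
# distribution on real images.
for round_idx in range(2):
    labels = homographic_adaptation(model, images)
    model = train(model, images, labels)
```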

@Taogonglin

> Sorry, I misunderstood. It's true that there is no procedure or code here for training MagicPoint on MS-COCO. But I think that step is not necessary; maybe you can read the paper to see what MagicPoint's role is. I think it is just used to generate pseudo ground-truth keypoints on COCO, which are then used to train SuperPoint. I'm just an undergraduate student, so I may be wrong.

> Never mind. Thank you for the quick reply! On page 6 and in Figure 7 of the paper, the authors say: "We repeat the Homographic Adaptation a second time, using the resulting model trained from the first round of Homographic Adaptation" (I think this corresponds to step 2 in this GitHub repository).

> Namely, repeating Homographic Adaptation is important. The distribution of MagicPoint detections becomes wider by repeating Homographic Adaptation and training MagicPoint on new images. And Figure 7 shows that the repeatability of the detected points improves empirically.

I think you can just run this code and inspect the data used for evaluation. And I have a problem with the training time: the README.md says it takes about 8 hours, but at step 1 I have already spent 20 hours, and I use a 2080 Ti. Could you tell me why?

@ghost
Author

ghost commented Oct 13, 2022

@Taogonglin

> I think you can just run this code and inspect the data used for evaluation. And I have a problem with the training time: the README.md says it takes about 8 hours, but at step 1 I have already spent 20 hours, and I use a 2080 Ti. Could you tell me why?

When I tried step 1, I also spent more than approximately 20 hours, but I used a GTX 1070 ^_^.
To be honest, I do not know the reason, sorry.
If you are running something else at the same time, it may be slower.

@ghost ghost closed this as completed Oct 13, 2022
@ghost ghost reopened this Oct 13, 2022
@Taogonglin

> @Taogonglin
>
> I think you can just run this code and inspect the data used for evaluation. And I have a problem with the training time: the README.md says it takes about 8 hours, but at step 1 I have already spent 20 hours, and I use a 2080 Ti. Could you tell me why?

> When I tried step 1, I also spent more than approximately 20 hours, but I used a GTX 1070 ^_^. To be honest, I do not know the reason, sorry. If you are running something else at the same time, it may be slower.

Thank you!

@FFFOCUS

FFFOCUS commented Apr 4, 2023

In the original paper, Homographic Adaptation is used to generate more points on real images. My question is whether

def combine_heatmap(heatmap, inv_homographies, mask_2D, device="cpu"):
    ## zero out invalid regions with mask_2D
    heatmap = heatmap * mask_2D

    ## warp each homography view back to the original image frame
    heatmap = inv_warp_image_batch(
        heatmap, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )
    mask_2D = inv_warp_image_batch(
        mask_2D, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )

    ## average over views, normalized by the per-pixel number of valid views
    heatmap = torch.sum(heatmap, dim=0)
    mask_2D = torch.sum(mask_2D, dim=0)
    return heatmap / mask_2D

should instead be as follows:

def combine_heatmap_new(heatmap, inv_homographies, mask_2D, device="cpu"):
    ## zero out invalid regions with mask_2D
    heatmap = heatmap * mask_2D

    ## warp each homography view back to the original image frame
    heatmap = inv_warp_image_batch(
        heatmap, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )
    ## OR-style aggregation: per-pixel maximum over the views
    heatmap = torch.max(heatmap, dim=0)[0]
    return heatmap
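One subtlety worth noting, shown with a minimal NumPy sketch (the helper names below are illustrative, not this repo's API): the original function divides by the warped mask sum so that pixels covered by only a few homography views are not unfairly downweighted by the mean. With an OR-style max, that normalization becomes unnecessary, since masked-out pixels simply contribute 0.

```python
import numpy as np

def combine_mean(heatmaps, masks):
    # Masked average: divide by how many warped views actually cover
    # each pixel, mirroring what combine_heatmap does with torch.sum.
    num = (heatmaps * masks).sum(axis=0)
    den = masks.sum(axis=0)
    return num / np.clip(den, 1e-6, None)  # avoid 0/0 where no view covers

def combine_max(heatmaps, masks):
    # OR-style max: no view-count normalization needed, since
    # masked-out pixels contribute 0.
    return (heatmaps * masks).max(axis=0)

# Pixel 0 is covered by only one of three views (mask = 0 elsewhere);
# pixel 1 is covered by all three views but detected in just one.
heatmaps = np.array([[0.9, 0.9],
                     [0.0, 0.0],
                     [0.0, 0.0]])
masks    = np.array([[1.0, 1.0],
                     [0.0, 1.0],
                     [0.0, 1.0]])

mean_out = combine_mean(heatmaps, masks)  # mask division rescues pixel 0
max_out = combine_max(heatmaps, masks)    # both detections survive at 0.9
```

Here `mean_out` is approximately `[0.9, 0.3]` (the fully covered pixel 1 still gets diluted), while `max_out` is `[0.9, 0.9]`.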
