Thank you for your contribution to this amazing work.
When I trained with the provided code, the base-training mAP was 74.1, but the novel AP after 1-shot fine-tuning was only 30, which falls short of the results reported in the paper.
I did not modify any configuration other than changing warmup_iters to 500 (see the sketch below).
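For reference, this is roughly what that change looks like, assuming a detectron2-style config in which warmup_iters corresponds to SOLVER.WARMUP_ITERS (the repo's actual config files may use different keys):

```python
# Hedged sketch: assumes a detectron2-style config. SOLVER.WARMUP_ITERS is
# detectron2's key for the warmup length; this repo's configs may differ.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.SOLVER.WARMUP_ITERS = 500  # the only value changed from the defaults
```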
Has anyone encountered this situation? Thank you for any responses or solutions.
When I use the provided pretrained weights, the novel AP reaches 58.6. Is there a problem with my pretraining process? Which configurations need to be modified? I trained on a single machine with a single GPU (Tesla V100), targeting only split1.
Hi @gladdduck, I think there are two reasons:
(a) One-shot results are more sensitive to random seeds or other training factors.
(b) Single-GPU training may differ from 8-GPU training because of the model's BatchNorm layers.
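If the gap does stem from (b), two common PyTorch mitigations are freezing BatchNorm statistics or (on multi-GPU setups) synchronizing them across processes. The sketch below shows both; it assumes a generic torchvision ResNet-50 backbone, not this repo's exact model.

```python
# Minimal sketch of common BatchNorm workarounds for small per-GPU batches.
# Assumes a generic torchvision ResNet-50, not this repo's exact model.
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50()

# Option 1: freeze BatchNorm so its statistics no longer depend on the much
# smaller per-GPU batch seen in single-GPU training. Note that a later
# model.train() call flips the layers back to train mode, so re-apply this
# afterwards (detectron2 avoids the issue with a dedicated FrozenBatchNorm2d).
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()                      # use the stored running statistics
        for p in m.parameters():
            p.requires_grad = False   # also freeze the affine parameters

# Option 2 (multi-GPU only): synchronize BN statistics across processes so
# normalization sees the same effective batch as the 8-GPU setup.
# model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```

Separately, when dropping from 8 GPUs to 1, detectron2-style configs are usually rescaled together under the linear scaling rule, e.g. dividing SOLVER.IMS_PER_BATCH and SOLVER.BASE_LR by 8 while stretching SOLVER.MAX_ITER and SOLVER.STEPS by 8; the exact numbers depend on this repo's defaults.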