
Why train for 6 epochs? #32

Open
lisanu opened this issue Jul 23, 2024 · 2 comments

lisanu commented Jul 23, 2024

Hello,

I noticed in your paper and code that you trained both Far3D and StreamPETR for 6 epochs before comparing them. However, the StreamPETR paper mentions training the network for 60 epochs on nuScenes. Do you believe Far3D and StreamPETR can converge well in just 6 epochs on the Argoverse 2 dataset? Could you explain why you chose to train for only 6 epochs?

exiawsh commented Jul 28, 2024

The number of images in Argoverse 2 is about 5x that of nuScenes, so the total number of training iterations is similar.
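
In other words, fewer epochs over a larger dataset can give roughly the same number of optimizer steps. A minimal sketch of that back-of-the-envelope check (the sample counts and batch size below are illustrative placeholders, not the exact values used in the paper or configs):

```python
# Illustrative iteration-count comparison; dataset sizes and batch size
# are assumptions, not the actual training configuration.

def total_iterations(num_samples: int, batch_size: int, epochs: int) -> int:
    """Total optimizer steps = (steps per epoch) * (number of epochs)."""
    steps_per_epoch = num_samples // batch_size
    return steps_per_epoch * epochs

nuscenes_samples = 28_000                      # placeholder frame count
argoverse2_samples = 5 * nuscenes_samples      # assume ~5x the data
batch_size = 16                                # placeholder

nusc_iters = total_iterations(nuscenes_samples, batch_size, epochs=60)
av2_iters = total_iterations(argoverse2_samples, batch_size, epochs=6)

# With 5x the data, 6 epochs on Argoverse 2 lands in the same order of
# magnitude of optimizer steps as 60 epochs on nuScenes.
print(f"nuScenes (60 epochs): {nusc_iters} steps")
print(f"Argoverse 2 (6 epochs): {av2_iters} steps")
```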

lisanu commented Jul 29, 2024

Thanks for the response, and great work; the results look impressive!
