
About the pretrain #17

Open
andyaloha opened this issue Jul 18, 2024 · 7 comments

@andyaloha

Hi, thank you for this great work! We ran pre-training with the code and data provided in the current repository, but the resulting model's downstream performance was not as strong as the released VoCo_10k.pt checkpoint. We compared against VoCo_10k.pt on several MSD segmentation tasks. Since the released code uses a teacher-student training approach, we evaluated the teacher and student weights separately, and neither performed as well as VoCo_10k.pt. Do the default training configurations differ from the ones you used? How can we reproduce a model as strong as VoCo_10k.pt?
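
For reference, this is roughly how we split the checkpoint for the separate evaluations (a minimal sketch; the key names `"teacher"`/`"student"` and the `module.` prefix handling are assumptions about the save format, not confirmed against the repo):

```python
import torch

# Split a teacher-student checkpoint into separate state dicts for
# downstream evaluation. The "teacher"/"student" keys are assumptions;
# if the checkpoint is a flat state dict, we fall back to it directly.
ckpt = torch.load("voco_pretrained.pt", map_location="cpu")
teacher_sd = ckpt.get("teacher", ckpt)
student_sd = ckpt.get("student", ckpt)

def strip_module_prefix(sd):
    # Remove a possible DistributedDataParallel "module." prefix so the
    # weights load into a plain (non-DDP) backbone.
    return {k.removeprefix("module."): v for k, v in sd.items()}

torch.save(strip_module_prefix(teacher_sd), "teacher_only.pt")
torch.save(strip_module_prefix(student_sd), "student_only.pt")
```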

@Luffy03
Owner

Luffy03 commented Jul 18, 2024

Many thanks for your attention to our work. Could you please share more details, such as your pre-training logs and settings?

@andyaloha
Author

> Many thanks for your attention to our work. Could you please share more details, such as your pre-training logs and settings?

We just used the command `sh train.sh` in this repo.

@Luffy03
Owner

Luffy03 commented Jul 18, 2024

Hi, could you please share your pre-training logs and downstream results?

@andyaloha
Author

> Hi, could you please share your pre-training logs and downstream results?

Here are the logs and MSD results: https://www.dropbox.com/scl/fo/do7vnjaxxi9vidhus7gbz/AGH6SCISNnszLzMp5ieDACY?rlkey=ofhh43b4qnm5b4ae80c19rgyh&st=m189nq8m&dl=0

@Luffy03
Owner

Luffy03 commented Jul 18, 2024

Many thanks for sharing! These results are very valuable. It seems there is no problem with the pre-training, and our pre-training checkpoint can indeed improve performance.
For the downstream tasks, which framework are you using, MONAI or nnU-Net? The results are not consistent with ours. I have not yet released my implementations of these downstream tasks; I will release a more powerful version soon, together with all the downstream implementation code. I suspect the downstream implementation may make a difference.

@andyaloha
Author

I use the MONAI framework as in https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV. Looking forward to your more powerful version, much appreciated.
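
Concretely, we load the pretrained weights into SwinUNETR before fine-tuning roughly as below (a minimal sketch following that BTCV recipe; the `"state_dict"` checkpoint key and `strict=False` handling are assumptions on our side):

```python
import torch
from monai.networks.nets import SwinUNETR

# Build the backbone with the settings from the SwinUNETR/BTCV recipe:
# 96^3 patches, 1 input channel, 14 output classes (13 organs + background).
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=14,
    feature_size=48,
)

ckpt = torch.load("VoCo_10k.pt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # assumed layout; falls back to flat dict

# strict=False tolerates missing/extra keys (e.g. the randomly
# initialized segmentation head has no pretrained counterpart).
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```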

@Luffy03
Owner

Luffy03 commented Oct 14, 2024

Dear researchers, our work is now available at Large-Scale-Medical, in case you are still interested in this topic. Thank you very much for your attention to our work; it encourages me a lot!
