
An issue while training on my own liver tumor dataset #8

Open
wangbaoyuanGUET opened this issue Mar 23, 2024 · 1 comment

Comments

@wangbaoyuanGUET

Hello, developers of SelfMedMAE.
Your work is valuable and has enlightened me a lot, and I am trying to make some improvements to get better results on my liver tumor dataset.
Since my dataset has only 2 classes while BTCV has 14, I made some changes to the code to use 2 output channels. I also set training to 1000 epochs, because training on my 200 volumes takes a lot of time. However, the test Dice was just under 0.6, whereas I was able to reach around 0.66 by training a plain UNETR without any pre-training.
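For reference, the channel change is roughly of this form (a minimal sketch assuming MONAI's UNETR, as in the BTCV baseline; the exact constructor arguments in the SelfMedMAE code may differ):

```python
# Minimal sketch, assuming MONAI's UNETR (the actual SelfMedMAE code may differ):
# the substantive change for my dataset is out_channels=2 instead of 14.
from monai.networks.nets import UNETR

model = UNETR(
    in_channels=1,          # single-channel CT volumes
    out_channels=2,         # 2 classes for liver tumor instead of BTCV's 14
    img_size=(96, 96, 96),  # must match the ROI crop size fed to the network
    feature_size=16,
    hidden_size=768,
    mlp_dim=3072,
    num_heads=12,
)
```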
Now I am trying some pre-training and hoping to see a change. If the accuracy increases, I can keep making other improvements on top of SelfMedMAE.
Thank you very much!

@wangbaoyuanGUET
Author

Hi, developers of SelfMedMAE.
I have solved the problem by changing the input_size used when embedding the input image into patches. It defaults to 224; after I changed it to the ROI size (96, 96, 64), the Dice increased to 0.67.
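For anyone hitting the same mismatch, this is roughly what the fix amounts to (a minimal sketch using MONAI's PatchEmbeddingBlock as a stand-in for the patch-embedding module in SelfMedMAE; exact names may differ):

```python
# Minimal sketch of the fix, using MONAI's PatchEmbeddingBlock as a stand-in
# for the patch-embedding module in SelfMedMAE (exact names may differ).
import torch
from monai.networks.blocks import PatchEmbeddingBlock

roi_size = (96, 96, 64)  # the actual crop size; the old default of 224 mismatched it

embed = PatchEmbeddingBlock(
    in_channels=1,
    img_size=roi_size,   # build the patch grid from the ROI crop, not 224
    patch_size=16,
    hidden_size=768,
    num_heads=12,
    spatial_dims=3,
)

x = torch.randn(1, 1, *roi_size)  # one cropped CT volume
tokens = embed(x)
print(tokens.shape)  # torch.Size([1, 144, 768]): (96/16)*(96/16)*(64/16) = 144 patches
```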
Now I am trying pre-training on my 400+ volumes. I have set the number of pre-training epochs to 10k, which will take a lot of time.
I am wondering: should the number of pre-training epochs be a fixed number, even with more data?
Thank you very much!
