
the validation data segmentation result #25

Qingyuncookie opened this issue Nov 12, 2018 · 3 comments

@Qingyuncookie

I have tested the models you provided under the model17 directory on the validation data by submitting the segmentation results to the CBICA Image Processing Portal. However, I did not get the same results you reported in the paper. The Dice values and Hausdorff distances I obtained are (0.7572, 0.8989, 0.8350) and (3.7947, 5.7439, 7.3037) respectively, while the values in your paper are (0.7859, 0.9050, 0.8378) and (3.2821, 3.8901, 6.4790). Could you tell me whether you applied any other methods, such as model ensembling, to obtain the results in the paper?
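For reference, model ensembling for segmentation is often done by averaging the softmax probability maps of several trained models before taking the argmax. A minimal sketch of that idea (not code from this repository; the function name and array shapes are illustrative assumptions):

```python
import numpy as np

def ensemble_segmentation(prob_maps):
    """Average softmax probability maps from several models, then take argmax.

    prob_maps: list of arrays, each shaped (D, H, W, num_classes),
               one per trained model.
    Returns a hard label map shaped (D, H, W).
    """
    # Stack along a new model axis and average the class probabilities.
    avg = np.mean(np.stack(prob_maps, axis=0), axis=0)
    # Final label = most probable class per voxel.
    return np.argmax(avg, axis=-1)
```

Whether the paper's numbers used such an averaging step is exactly the question raised here.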

@1160914483

I used the original configuration file with the corresponding parameters, but the scores on the validation set were (0.75315, 0.90392, 0.80955).
I'd like to ask the same question.

@taigw
Owner

taigw commented Dec 3, 2018

The pre-trained models released here are not exactly the ones used in the paper. Before releasing the repository, I re-organized the code to make it easier to understand and then re-trained the model. However, the re-training was done on another GPU with 6 GB of memory, so I had to reduce the batch size, which caused the performance decrease.

@Qingyuncookie
Author

@taigw Thank you for your reply! When I trained the model, I set the batch_size to 3 and kept all other options in the configuration files unchanged. The result I got after 20,000 iterations is (0.7376, 0.8997, 0.7912), which is still lower than what I get from the pre-trained model you provided. I don't know why the model I trained cannot reproduce your results. Is there anything I may have overlooked?
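For context, training configurations in this style typically expose the batch size alongside the other optimization options, so reproducing the paper would mean matching those values too. A hypothetical fragment (section and key names are illustrative, not copied from the repository's actual configuration files):

```ini
[training]
# The thread above uses 3; the paper's runs used a larger batch size
# (the exact value is not stated in this thread).
batch_size        = 3
maximal_iteration = 20000
```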
