
About the new code #12

Open
LeeX-Bruce opened this issue Mar 21, 2024 · 4 comments

@LeeX-Bruce

Dear author,

I noticed that your recently updated code has some parameters that differ from those in the paper. For example, self.fg_num = 10 # number of foreground partitions was changed to self.fg_num = 100 # number of foreground partitions, and the loss function has changed as well.

Could you please explain why these changes were made? Thanks!

@YazhouZhu19
Owner

YazhouZhu19 commented Mar 21, 2024

Setting self.fg_num = 100 may lead to better performance on some datasets. It depends on which dataset you choose, and it can also be adjusted according to your local conditions.

Hope this helps :)

@LeeX-Bruce
Author

Hi author, thank you for your previous reply.

I ran into a problem when testing the model. With ALL_EV = (1) and ALL_SUPP = (3), the test class is SPLEEN and it gets a very low Dice score, only about 8%. Is this normal, or is something wrong with my image data?

Also, the model from my own training still gives poor results when tested. Could you say more about how the data used for training should be prepared? Is it enough to download the following data directly, or is additional processing needed?
[image]

Looking forward to your reply!

@YazhouZhu19
Owner

YazhouZhu19 commented Apr 2, 2024

ALL_EV = (1) means you only test the second fold. This work uses a five-fold cross-validation strategy, and the final result is the mean of the five folds' results. Please keep ALL_SUPP equal to 2, as in the raw script.

All of the data and supervoxels we uploaded have already been processed; no additional operations are needed. Since I have no access to your local code files and environment, I can't tell why you obtain poor results at test time. Can you provide your code files and the test results? Also, what results do you get from my uploaded checkpoints under the raw scripts' settings?
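The five-fold setup described above can be sketched as a small shell loop. This is a minimal illustration, not the repo's actual test script: the inner test command is a placeholder comment and the per-fold Dice value is a dummy number; only the variable names ALL_EV, ALL_SUPP, EVAL_FOLD, and SUPP_IDX come from this thread.

```shell
#!/bin/bash
# Five-fold cross-validation: evaluate every fold, keeping the
# support index fixed at 2 as in the raw script.
ALL_EV=(0 1 2 3 4)   # one entry per evaluation fold
ALL_SUPP=(2)         # support index stays at 2

dice_scores=()
for EVAL_FOLD in "${ALL_EV[@]}"; do
    for SUPP_IDX in "${ALL_SUPP[@]}"; do
        # Placeholder for the real test command, which would be run
        # here with $EVAL_FOLD and $SUPP_IDX; 0.80 is a dummy
        # per-fold Dice score for illustration only.
        dice_scores+=(0.80)
    done
done

# The final reported number is the mean over the five folds' results.
mean=$(printf '%s\n' "${dice_scores[@]}" | awk '{s+=$1} END {printf "%.2f", s/NR}')
echo "mean Dice over ${#dice_scores[@]} folds: $mean"
```

Testing only ALL_EV=(1) runs a single iteration of the outer loop, which is why one fold's low score is not by itself the final result.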

@LeeX-Bruce
Author

Do you mean that the value of ALL_SUPP should stay at 2 the whole time?

I set ALL_EV=(0 1 2 3 4) and ALL_SUPP=(0 1 2 3 4) when running the five-fold cross-validation, so I may have set ALL_SUPP wrong, which would explain the poor SPLEEN results with EVAL_FOLD=1 and SUPP_IDX=3 during testing.
[image]
[image]

Thank you for answering my doubts!
