
Add python script and quick start guide for MetricsReloaded #46

Merged
merged 28 commits into main from jv/MetricsReloaded
Apr 30, 2024

Conversation

valosekj
Copy link
Member

@valosekj valosekj commented Feb 27, 2024

Description

This PR adds:

Useful links:

@naga-karthik
Copy link
Member

Regarding 2:

Empty reference and non-empty prediction

The 100 is a bit weird given that the reference is empty, so a division by an empty array should be Infinity, but I understand that we need to show that this is oversegmentation, so you set 100 here.

@valosekj
Copy link
Member Author

The 100 is a bit weird given that the reference is empty, so a division by an empty array should be Infinity,

Yes, you are right; without this condition, the RVE would be Infinity, which would complicate aggregation across subjects.

but I understand that we need to show that this is oversegmentation so you set 100 here

Yeah, I chose 100 to be the opposite of -100 (which results from a non-empty reference and an empty prediction).
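The convention discussed above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the +100 special case is this PR's choice for an empty reference, while -100 falls out of the standard formula when the prediction is empty.

```python
import numpy as np

def relative_volume_error(reference: np.ndarray, prediction: np.ndarray) -> float:
    """Relative volume error (RVE) in percent, with the empty-reference
    special case discussed in this thread (illustrative sketch)."""
    ref_vol = np.count_nonzero(reference)
    pred_vol = np.count_nonzero(prediction)
    if ref_vol == 0 and pred_vol == 0:
        return 0.0    # both empty -> perfect agreement, no volume error
    if ref_vol == 0:
        return 100.0  # empty reference, non-empty prediction -> flag oversegmentation
    # Non-empty reference and empty prediction yields -100 from the formula itself.
    return 100.0 * (pred_vol - ref_vol) / ref_vol
```

Clamping the empty-reference case to a finite 100 (rather than Infinity) is what keeps the metric aggregatable across subjects.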

@valosekj
Copy link
Member Author

valosekj commented Feb 29, 2024

Hey @naga-karthik! I believe the PR can now be beta-tested.

I implemented the following points based on our discussions:

  • the script now accepts folders with multiple nii files as input (compatibility with a single nii file is still preserved!)
  • the output is saved as CSV (instead of JSON) --> easier to manipulate and plot
  • the script also outputs a second CSV with mean ± std metrics
  • the output CSV files use full metric names (instead of metric abbreviations)

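The CSV output described above can be sketched roughly as follows. The column names and file layout here are illustrative assumptions, not the script's actual schema:

```python
import pandas as pd

# Hypothetical per-subject metrics table (full metric names as columns,
# one row per subject); the real script's columns may differ.
df = pd.DataFrame({
    "subject": ["sub-01", "sub-02", "sub-03"],
    "DiceSimilarityCoefficient": [0.90, 0.85, 0.95],
    "RelativeVolumeError": [5.0, -3.0, 1.0],
})

# First CSV: per-subject metrics.
per_subject_csv = df.to_csv(index=False)

# Second CSV: mean and std of each metric across subjects.
summary = df.drop(columns=["subject"]).agg(["mean", "std"])
summary_csv = summary.to_csv()
```

A flat CSV like this is straightforward to load back into pandas for plotting, which is harder with nested per-subject JSON.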
I also updated the handling of the case when both the reference and prediction are empty in a23b81b.

…ion images are empty.

The original 'label = 0' meant the computed metrics corresponded to the background and were not easy to aggregate across subjects.
Now, with 'label = 1', even cases with both empty reference and prediction are considered when computing group mean and std.
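The effect of the fix described above can be sketched with a toy Dice computation. This is an assumption-laden illustration, not MetricsReloaded's implementation: the point is that evaluating label 1 directly lets the both-empty case yield a defined, perfect score instead of a background metric, so it can enter the group mean and std.

```python
import numpy as np

def dice(reference: np.ndarray, prediction: np.ndarray, label: int = 1) -> float:
    """Toy Dice score for one label, treating the both-empty case as
    perfect agreement (hypothetical helper for illustration)."""
    ref = reference == label
    pred = prediction == label
    if not ref.any() and not pred.any():
        return 1.0  # both empty for this label -> perfect agreement, aggregatable
    return 2.0 * np.logical_and(ref, pred).sum() / (ref.sum() + pred.sum())

empty = np.zeros((4, 4), dtype=int)
print(dice(empty, empty))  # defined value instead of an undefined 0/0
```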
@valosekj valosekj marked this pull request as ready for review March 4, 2024 18:54
@valosekj valosekj requested a review from hermancollin March 4, 2024 18:54
@naga-karthik
Copy link
Member

So I am trying out the installation of MetricsReloaded and followed the commands mentioned here. However, I get a pip dependency resolver error post-installation

Error

[screenshot: pip dependency resolver error, 2024-03-20]

Did you also run into this? How did you solve it? @valosekj

@valosekj
Copy link
Member Author

So I am trying out the installation of MetricsReloaded and followed the commands mentioned here. However, I get a pip dependency resolver error post-installation

I just tried the commands and the installation completed successfully on my end. We can check it together in person.

.github/workflows/ci.yml — outdated review thread, resolved
@valosekj valosekj merged commit 3377f3f into main Apr 30, 2024
1 check passed
@valosekj valosekj deleted the jv/MetricsReloaded branch April 30, 2024 16:58
@valosekj
Copy link
Member Author

CI is passing and I self-reviewed the code --> merging
