
Abdominal phantom #71

Merged: 7 commits merged into main on Aug 21, 2024
Conversation

@oliverchampion (Collaborator)

Describe the changes you have made in this PR

The abdominal phantom has been updated; we need to check whether this influences the automated tests from Eric

Checklist

- [x] Self-review of changed code
- [x] Added automated tests where applicable
- [x] Update Docs & Guides

@etpeterson (Contributor)

I think the remaining piece is to run sim_ivim_sig.py to get the new generic.json and update that. Hopefully the tests will work!

@@ -261,7 +273,7 @@ def XCAT_to_MR_DCE(XCAT, TR, TE, bvalue, D, f, Ds, b0=3, ivim_cont = True):
             73: 'Pancreas tumor',
         }
         ###############################################################################
-        np.random.seed(42)
+        np.random.seed(41)
Contributor:
Seed change?

Collaborator Author:

This is now fixed. Changed it to test something :).
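For context, np.random.seed pins down the phantom's random draws, so changing the seed changes the simulated data and therefore the regression-test targets. A minimal illustration of that reproducibility property:

```python
import numpy as np

np.random.seed(42)
a = np.random.randn(3)

np.random.seed(42)
b = np.random.randn(3)

assert (a == b).all()  # same seed, same draws: simulated phantom data stay reproducible
```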

@@ -2,7 +2,7 @@
 import os
 import subprocess
 import time

+import zenodo_get
Contributor:

Is this used?

Collaborator Author:

It is used here:

subprocess.check_call(["zenodo_get", 'https://zenodo.org/records/10696605'])

but because it is only invoked via subprocess, linters do not recognize that it is used. If you remove it, however, this crashes :).

Contributor:

I don't understand that, but I guess it's not a big deal.

Collaborator Author:

I think this starts a subprocess which then runs the command in the string (which contains zenodo_get). The zenodo_get call is not recognized as used by interpreters/linters because it is hidden inside a string.
The download happens outside PyCharm, and if I don't put this in a subprocess, the code somehow continues while the download is still running, which leads to errors (it uses data that still needs downloading).
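A condensed sketch of the pattern being described here; the Zenodo URL is the one from the snippet above, and the surrounding code is illustrative:

```python
import subprocess

import zenodo_get  # looks unused to linters, but ensures the zenodo_get CLI below is available

# check_call runs the download in a separate process and blocks until it
# finishes, so the code below never touches files that are still downloading.
subprocess.check_call(["zenodo_get", "https://zenodo.org/records/10696605"])

# Safe to read the downloaded files from this point on.
```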

@etpeterson (Contributor)

> I think the remaining piece is to run sim_ivim_sig.py to get the new generic.json and update that. Hopefully the tests will work!

I think this is the remaining piece?

@oliverchampion (Collaborator Author)

> I think the remaining piece is to run sim_ivim_sig.py to get the new generic.json and update that. Hopefully the tests will work!

Ah, I had run this, but the generic.json files are on the .gitignore list, so nothing was pushed. Is there a good way of pushing these results?
Also, it creates only four "generic_#.json" files (with # running from 1 to 4), but no "generic.json" file.
Are those files manually copied to tests/IVI-models/unit_tests?
Which files are used, and which of them is (renamed into?) "generic.json"?

Thanks!

@etpeterson (Contributor)

> I think the remaining piece is to run sim_ivim_sig.py to get the new generic.json and update that. Hopefully the tests will work!

> Ah, I had run this, but the generic.json files are on the .gitignore list, so nothing was pushed. Is there a good way of pushing these results? Also, it creates only four "generic_#.json" files (with # running from 1 to 4), but no "generic.json" file. Are those files manually copied to tests/IVI-models/unit_tests? Which files are used, and which of them is (renamed into?) "generic.json"?
>
> Thanks!

Ah, there was a change. I think there should also be a generic_original.json? I think it's being pulled from phantoms/MR_XCAT_qMRI/b_values.json now. I think you should already have that, so rename it to generic.json, overwrite the original, and run git add -f generic.json. The -f forces the add even though the file is gitignored.

Then hopefully it all works!

@etpeterson (Contributor) left a comment:

Just the generic.json overwrite remaining.

@oliverchampion (Collaborator Author)

Good news: the file is updated (as well as the other generic files).
Bad news: some tests fail... but a lot pass... I will need to check this in more detail tomorrow...

@oliverchampion (Collaborator Author) commented on Aug 14, 2024

@etpeterson It seems like only your algorithm is failing, and only in a limited set of tissues. In particular, it has trouble estimating an f of 1 in vessels. It also has some challenges in the pericardium...

I think it has to do with the fact that all algorithms use real tolerances, but yours uses dynamic tolerances. Hence, your tolerances are way stricter (and more realistic) than those for the other algorithms.

I think the dyn_tolerances are related to an issue/feature request I wrote earlier today :) and should be the way to go.

@etpeterson (Contributor) commented on Aug 20, 2024

> @etpeterson It seems like only your algorithm is failing, and only in a limited set of tissues. In particular, it has trouble estimating an f of 1 in vessels. It also has some challenges in the pericardium...
>
> I think it has to do with the fact that all algorithms use real tolerances, but yours uses dynamic tolerances. Hence, your tolerances are way stricter (and more realistic) than those for the other algorithms.
>
> I think the dyn_tolerances are related to an issue/feature request I wrote earlier today :) and should be the way to go.

I'm not too sure why it's failing now, but I'm not too worried. You could mark those tests as expected failures: add the regions to this area of algorithms.json. That's probably easiest, especially since it seems like more bounds changes are coming. Actually, it looks like Vein was failing and is now passing, so just swap out "Blood RV" for "Vein" and it should work.

"ETP_SRI_LinearFitting": {
        "xfail_names": {
            "Vein": true
        },
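As a rough illustration of how an xfail_names entry like this can be consumed, here is a hypothetical pytest parametrization helper (not the repository's actual test code):

```python
import json

import pytest

# Load per-algorithm test configuration, including expected failures.
with open("algorithms.json") as f:
    config = json.load(f)

def region_params(algorithm, regions):
    """Yield pytest params, marking regions listed in xfail_names as expected failures."""
    xfail_names = config.get(algorithm, {}).get("xfail_names", {})
    for region in regions:
        marks = [pytest.mark.xfail] if xfail_names.get(region) else []
        yield pytest.param(algorithm, region, marks=marks, id=f"{algorithm}-{region}")
```

Note that with pytest's strict xfail setting, a marked test that unexpectedly passes is reported as XPASS and fails the run, which is the "XPASS fail" mentioned further down.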

@oliverchampion (Collaborator Author)

Features updated:
1. The phantom is smaller (done previously).
2. The signal in the phantom can now optionally be T1/T2 weighted; the default has T1/T2 weighting turned off (see the sketch below).
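As a minimal sketch of what the optional T1/T2 weighting might look like (illustrative only, not the phantom's actual code):

```python
import numpy as np

def ivim_signal(bvalue, D, f, Ds, TR=None, TE=None, T1=None, T2=None):
    # Standard bi-exponential IVIM signal.
    S = f * np.exp(-bvalue * Ds) + (1 - f) * np.exp(-bvalue * D)
    # Optional T1/T2 weighting; off by default, matching the new phantom default.
    if TR is not None and T1 is not None:
        S *= 1 - np.exp(-TR / T1)  # T1 saturation weighting
    if TE is not None and T2 is not None:
        S *= np.exp(-TE / T2)      # T2 decay weighting
    return S
```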

> @etpeterson It seems like only your algorithm is failing, and only in a limited set of tissues. In particular, it has trouble estimating an f of 1 in vessels. It also has some challenges in the pericardium...
> I think it has to do with the fact that all algorithms use real tolerances, but yours uses dynamic tolerances. Hence, your tolerances are way stricter (and more realistic) than those for the other algorithms.
> I think the dyn_tolerances are related to an issue/feature request I wrote earlier today :) and should be the way to go.

> I'm not too sure why it's failing now, but I'm not too worried. You could mark those tests as expected failures: add the regions to this area of algorithms.json. That's probably easiest, especially since it seems like more bounds changes are coming. Actually, it looks like Vein was failing and is now passing, so just swap out "Blood RV" for "Vein" and it should work.
>
> "ETP_SRI_LinearFitting": {
>         "xfail_names": {
>             "Vein": true
>         },

Ah! Now I understand the XPASS fail, haha. I was wondering what that was doing :)

@@ -169,6 +169,8 @@ def test_ivim_fit_saved(name, bvals, data, algorithm, xfail, kwargs, tolerances,
     tolerances = tolerances_helper(tolerances, ratio, data["noise"])
     [f_fit, Dp_fit, D_fit] = fit.osipi_fit(signal, bvals)
     npt.assert_allclose(data['f'], f_fit, rtol=tolerances["rtol"]["f"], atol=tolerances["atol"]["f"])
     npt.assert_allclose(data['D'], D_fit, rtol=tolerances["rtol"]["D"], atol=tolerances["atol"]["D"])
     npt.assert_allclose(data['Dp'], Dp_fit, rtol=tolerances["rtol"]["Dp"], atol=tolerances["atol"]["Dp"])
+    if data['f']<0.80: # we need some signal for D to be detected
Contributor:

Just looking at the code in more detail: would this be a relative-tolerance thing rather than an if?

Collaborator Author:

Yes, I am doing that in my next project. But either way, it seems silly to quantify something that is undefined (D* if f=0 and D if f=1).

Contributor:

Understood. Looks good to me.
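As an illustration of what such an f-dependent relative tolerance could look like in place of the if-guard (purely a sketch, not the repository's code):

```python
def scaled_rtol(base_rtol, fraction, eps=1e-3):
    # The smaller the signal fraction a parameter explains, the looser its tolerance.
    return base_rtol / max(fraction, eps)

f = 0.9                           # example perfusion fraction
rtol_D = scaled_rtol(0.1, 1 - f)  # D is poorly determined as f approaches 1
rtol_Dp = scaled_rtol(0.1, f)     # D* is poorly determined as f approaches 0
```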

@etpeterson merged commit 44051e4 into main on Aug 21, 2024. 24 checks passed.