Failing chi-to-fieldmap subjects #11
Here's a list of OK, FAIL, and SKIPPED subjects:

OK (52): amuAL, amuALT, amuAM, amuAP, amuED, amuFL, amuFR, amuGB, amuHB, amuJW, amuMD, amuMLL, amuMT, amuTM, amuTR, amuTT, amuVC, amuVG, amuVP, unfErssm002, unfErssm003, unfErssm004, unfErssm005, unfErssm006, unfErssm008, unfErssm009, unfErssm010, unfErssm011, unfErssm013, unfErssm014, unfErssm015, unfErssm016*, unfErssm017*, unfErssm018, unfErssm019, unfErssm020, unfErssm022*, unfErssm024*, unfErssm026, unfErssm027, unfErssm028*, unfErssm029, unfErssm030*, unfErssm031*, unfPain001, unfPain002, unfPain003, unfPain004, unfPain005*, unfPain006, unfSCT001*, unfSCT002**

FAIL (8): amuCR, amuJD, amuLJ, amuPA, unfErssm007, unfErssm012, unfErssm023, unfErssm025

SKIPPED (2):
The chi-maps for the failing subjects can be loaded in FSLeyes, and appear OK. (Screenshot: a passing B0 subject.) So, I'm not quite sure yet why susceptibility-to-fieldmap-fft is returning NaNs for them. I will debug through the code.
Ok, so I've encountered my first NaN after this block of code: https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/blob/9a60de9e478b724576d390c63070b48951df5e23/functions/compute_fieldmap.py#L74-L76

Somehow, we go from kz and k2 having no NaNs to kernel having a NaN.
The source of the NaN comes from the fact that k2 contains a zero, so kz**2/k2 is a 0/0 at that voxel. Why for some subjects but not others, and does this imply an implementation bug that affects passing subjects but is just hidden?
k2 can only have a zero if kx, ky, and kz are all zero, because of the line k2 = kx**2 + ky**2 + kz**2.
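As a minimal illustration of that 0/0 (a toy sketch, not code from the repo):

```python
import numpy as np

# Toy 3x3x3 k-space grid whose axes contain an exact zero
# (analogous to the all-odd-dimensions case discussed below).
k = np.array([-0.5, 0.0, 0.5])
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')

k2 = kx**2 + ky**2 + kz**2
print(np.argwhere(k2 == 0))       # only [[1, 1, 1]], i.e. where kx = ky = kz = 0

with np.errstate(divide='ignore', invalid='ignore'):
    kernel = 1/3 - kz**2/k2       # 0/0 at that single voxel -> NaN
print(np.isnan(kernel).sum())     # 1
```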
Ok, I see why it's some subjects and not others now. If all three dims of a subject are odd-numbered (e.g. amuCR: [175, 273, 713]), then the linspaces will have zero exactly in the middle, since the grid is symmetric about k = 0. For passing subjects, at least one of the dimensions was even (e.g. amuAL: [173, 270, 713]). See my comment above where I got the dimensions in screenshots: #11 (comment)

Tagging @CharlesPageot and @evaalonsoortiz so you're aware of this bug in the python code, and that it should be verified in the MATLAB one. Also, @evaalonsoortiz, any idea on the best implementation strategy to fix the bug for these cases? And could it also be a problem for the "passing" subjects when one, but not all, of the three dimensions is odd-numbered?
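A small sketch of the parity effect (illustrative only, not code from either repo; kmax = 0.5 is an arbitrary example value): np.linspace(-kmax, kmax, N) only samples zero exactly when N is odd, which is why the all-odd-dimension subjects are the ones hitting the 0/0.

```python
import numpy as np

kmax = 0.5
for n in (270, 273, 713):          # example dimensions from the subjects above
    k = np.linspace(-kmax, kmax, n)
    print(n, "samples k = 0 exactly:", bool(np.any(k == 0.0)))
# 270 (even): False -> the two middle samples straddle zero
# 273, 713 (odd): True -> the middle sample lands exactly on zero
# (bit-exactness can in principle depend on floating-point rounding,
#  but it holds for these sizes)
```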
So I tested Eva's code, and in hers, this line removes the only NaN produced for the case of all odd dims. Charles's python implementation has a similar line. However, I found that even after this line, the problem persisted.

(Edit: I don't think Eva's line there removes a NaN, as it was never introduced. I think it replaces a 0, but there is also a zero at kernel(end, end, end) that is not replaced by 1/3. Regardless, from below, this wasn't the main issue.)
Ah ha, I see a major difference now. In Eva's code, the meshgrid is calculated using a value called interval (the k-space step size).
And then, the grid is calculated using this number as the spacing:

```matlab
% define k-space grid
[kx,ky,kz] = ndgrid(-k_max(1):interval(1):k_max(1) - interval(1), -k_max(2):interval(2):k_max(2) - interval(2), -k_max(3):interval(3):k_max(3) - interval(3));
```

However, note that she subtracts that interval from k_max; this is because, for a volume array dimension of 129, going from -k_max to +k_max in steps of interval would produce 130 points instead of 129. Whereas, in Charles's code, he implemented it differently. Instead of calculating the interval, he does:

```python
# creating the k-space grid with the buffer
new_dimensions = buffer*np.array(dimensions)
kmax = 1/(2*image_resolution)
[kx, ky, kz] = np.meshgrid(np.linspace(-kmax[0], kmax[0], new_dimensions[0]),
                           np.linspace(-kmax[1], kmax[1], new_dimensions[1]),
                           np.linspace(-kmax[2], kmax[2], new_dimensions[2]), indexing='ij')
```

However, note that in MATLAB, doing
```matlab
K>> A = linspace(-k_max(1), k_max(1), 129);
disp(min(A))
disp(max(A))
   -0.5000
    0.5000
```

doesn't result in the same array as

```matlab
K>> B = -k_max(1):interval(1):k_max(1)-interval(1);
disp(min(B))
disp(max(B))
   -0.5000
    0.4922
```

despite both final arrays having a length of 129:

```matlab
K>> size(A)

ans =

     1   129

K>> size(B)

ans =

     1   129
```
And Eva's grid is the B one above (which off-centers k-space and doesn't include 0), whereas Charles's is the A one (which is symmetric and does include 0 for odd dimensions). Now, I'm not sure if one (symmetric vs asymmetric) is the "right" way per se (FFTs can be weird); however, if we wanted Charles's code to mimic Eva's, the linspace calls would need to stop at kmax - interval instead of kmax. I think there might still be situations where there's a zero somewhere prior to the division, so probably adding a check in there would be safer. An alternative approach for Charles's code could be simply to ensure that there is always an even-numbered dimension after padding, and if that's not the case, simply pad that dimension by 1 more voxel on one side (which kind of shifts the asymmetry into physical space, instead of introducing it in k-space as Eva does; re: FFTs are weird).

tl;dr I'll try running Charles's code, but updated with Eva's kmax - interval gridding.
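For concreteness, here's a NumPy sketch of the two gridding conventions side by side (illustrative values only, not code from either repo):

```python
import numpy as np

# One odd-sized axis as an example.
N = 129
kmax = 0.5
interval = 2 * kmax / N

# A: Charles's convention -- symmetric end points, linspace picks the spacing
k_sym = np.linspace(-kmax, kmax, N)

# B: Eva's convention -- fixed spacing, stops one interval short of +kmax
k_asym = np.linspace(-kmax, kmax - interval, N)

print(k_sym.min(), k_sym.max(), np.any(k_sym == 0))     # -0.5  0.5      True
print(k_asym.min(), k_asym.max(), np.any(k_asym == 0))  # -0.5  ~0.4922  False
```

If one instead tried to reproduce Eva's grid with np.arange(-kmax, kmax, interval), the length of the result would depend on floating-point rounding of the step, which is exactly the issue that comes up a couple of comments below.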
amazing digging @mathieuboudreau! thank you so much!
I ran it on all subjects, and some subjects that passed previously now failed. There was a warning about a mismatch of dimensions at some point. After some digging, I found that the generated k-space grid could end up with one more element than the volume dimension along some axes.
This is because for some values, the floating-point round trip doesn't come back exactly:

```python
>>> interval = 2*0.5/857
>>> interval
0.0011668611435239206
>>> dim = 2*0.5/interval
>>> dim
857.0000000000001
```

whereas for most dimensions it worked as expected.
This small difference allowed one extra interval to be introduced in some arange outputs, e.g. 858 instead of 857. So to solve it I had to do somewhat of a hybrid approach between Eva's and Charles's, to force the output that Eva's would get:

```python
[kx, ky, kz] = np.meshgrid(np.linspace(-kmax[0], kmax[0] - interval[0], dimensions[0]),
                           np.linspace(-kmax[1], kmax[1] - interval[1], dimensions[1]),
                           np.linspace(-kmax[2], kmax[2] - interval[2], dimensions[2]), indexing='ij')
```

Now we're getting arrays that start at -kmax, are forced to end at kmax - interval, and are also forced to have the same lengths as the dimensions of the volume. Re-running on all subjects, they all produced well-behaved B0 maps, so I'll push a PR to that repo.
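A minimal sketch of the floating-point pitfall described above, assuming an arange-style grid like the intermediate attempt (857 is the example dimension from the session above):

```python
import numpy as np

kmax = 0.5
dim = 857                          # example dimension from the session above
interval = 2 * kmax / dim          # 0.0011668611435239206

# arange-style grid: the number of samples depends on floating-point rounding
k_arange = np.arange(-kmax, kmax, interval)
print(len(k_arange))               # 858 here -- one extra sample

# hybrid fix: force both the end point (kmax - interval) and the length (dim)
k_fixed = np.linspace(-kmax, kmax - interval, dim)
print(len(k_fixed), k_fixed[0], k_fixed[-1])   # 857  -0.5  kmax - interval
```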
Yesterday, Sebastien raised that the B0 maps were significantly different before and after the fix above. After a long discussion with Eva and Sebastien, we discovered that the MATLAB implementation handled the odd-parity cases incorrectly, i.e. the calculated k-space frequencies were not correct. Basically, gridding from -kmax:interval:kmax-interval only works for the case where the dimensions are even; in that case the strategy above will ensure that the center of k-space is sampled. If the dimensions are odd, then the strategy above won't sample the center of k-space (it'll sample at -interval/2 and +interval/2 instead, skipping 0). This also means two other things, notably concerning this line (which applies after the fftshift):

kernel[0,0,0] = 1/3
I implemented the necessary fixes in the python implementation; the code block is now:

```python
# dimensions needs to be a numpy.array
dimensions = np.array(susceptibility_distribution.shape)
kmax = 1/(2*image_resolution)
interval = 2 * kmax / dimensions

kx_min_shift = (dimensions[0]%2)*interval[0]/2
ky_min_shift = (dimensions[1]%2)*interval[1]/2
kz_min_shift = (dimensions[2]%2)*interval[2]/2

kx_max_shift = -interval[0] + (dimensions[0]%2)*interval[0]/2
ky_max_shift = -interval[1] + (dimensions[1]%2)*interval[1]/2
kz_max_shift = -interval[2] + (dimensions[2]%2)*interval[2]/2

[kx, ky, kz] = np.meshgrid(np.linspace(-kmax[0] + kx_min_shift, kmax[0] + kx_max_shift, dimensions[0]),
                           np.linspace(-kmax[1] + ky_min_shift, kmax[1] + ky_max_shift, dimensions[1]),
                           np.linspace(-kmax[2] + kz_min_shift, kmax[2] + kz_max_shift, dimensions[2]), indexing='ij')

# FFT procedure
# undetermined at the center of k-space
k2 = kx**2 + ky**2 + kz**2
with np.errstate(divide='ignore', invalid='ignore'):
    x_kernel = 1/3 - kz**2/k2
x_kernel[int(dimensions[0]/2-1/2*(dimensions[0]%2)), int(dimensions[1]/2-1/2*(dimensions[1]%2)), int(dimensions[2]/2-1/2*(dimensions[2]%2))] = 1/3
kernel = np.fft.fftshift(x_kernel)
```

Note that I switched to setting the center of k-space to 1/3 prior to the fftshift, since it's easier to interpret when reading the code (for me at least).

After these changes, the simulated B0 maps for all 60 subjects retained similar spatial patterns, but some of the inhomogeneities were reduced in amplitude. The previously higher inhomogeneities were likely due to setting a large value (1/3) at some k-space frequency near, but not exactly at, 0; that would have caused some sinusoidal amplification/reduction in the resulting simulated B0 map, I think.

I also tested the analytical vs Fourier-simulated fieldmap for a sphere for the case of odd-parity dimensions (e.g. 129x129x129) after the fix, as this appeared to not have been tested before in our python/matlab implementations (it was always 128x128x128 with a 2x padding factor, so 256x256x256).
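As a quick sanity check of the parity handling (a test sketch reusing the same shift expressions as the block above on a single axis; kmax = 0.5 and the sizes 128/129 are just assumed example values):

```python
import numpy as np

# Compare the old gridding (-kmax to kmax - interval) with the parity-aware
# gridding above, along one axis, for an even and an odd dimension.
kmax = 0.5
for n in (128, 129):
    interval = 2 * kmax / n
    center = int(n / 2 - 1 / 2 * (n % 2))      # same centre index as in the block above

    k_old = np.linspace(-kmax, kmax - interval, n)
    min_shift = (n % 2) * interval / 2
    max_shift = -interval + (n % 2) * interval / 2
    k_new = np.linspace(-kmax + min_shift, kmax + max_shift, n)

    print(n, "old centre sample:", k_old[center], "| new centre sample:", k_new[center])
# n = 128: both grids sample k = 0 at the centre index;
# n = 129: the old grid misses it by interval/2, the new one samples (numerically) 0.
```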
Nice work @mathieuboudreau! I just want to document the fact that we could also add the option (or default) of setting the center of k-space to zero, instead of 1/3. This should lead your simulated B0 maps to be "demodulated", in the sense that their average value would then be zero (which is something that I think you're manually doing afterwards now).
Thanks - right, that makes sense! Though I think that "demodulation" would be whole-volume, whereas afterwards I remove the mean of just the MR-signal tissues (i.e. excluding airways and bone). But yeah, that value in the center doesn't really matter, whereas changing a non-center k-space value to 1/3 (like we were doing before) did impact the resulting B0 map in a non-constant way that we couldn't clean up afterwards - so I'm glad we got this resolved!
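For reference, a tiny sketch of why zeroing the k-space centre yields a zero-mean (whole-volume) map; the array here is just a random stand-in for a simulated B0 map:

```python
import numpy as np

rng = np.random.default_rng(0)
b0 = rng.normal(loc=5.0, scale=1.0, size=(8, 9, 10))  # stand-in volume with non-zero mean

k = np.fft.fftn(b0)
k[0, 0, 0] = 0                 # zero the DC term (k-space centre before fftshift)
b0_demod = np.fft.ifftn(k).real

print(b0.mean())               # ~5
print(b0_demod.mean())         # ~0, i.e. the whole-volume mean has been removed
```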
Following up on this results comment, #4 (comment), I'm opening this thread to understand why some subjects result in NaN B0 maps.