Creating Motion Artifacts on Complex Image Space of MRI scans #1075
-
Hi everyone! Let me introduce myself: my name is Quentin and I am a Master's student at Columbia University. I am currently working on a project where I need to create random artifacts on MRI brain scans. Looking in the literature, I quickly found TorchIO and the RandomMotion class inspired by Shaw et al., 2019.

Let me explain what I am trying to do and what I have done so far. I am trying to create random artifacts on the raw complex image space of MRI scans. I have the raw k-space, I take its inverse Fourier transform to get the associated raw image space, and I want to create motion artifacts on this raw space without losing any information (neither phase nor amplitude). The goal is to create a corrupted raw k-space identical to what would be recorded if someone moved (rotated or translated) during an MRI scan. This is why I cannot take the magnitude: doing so would lose the phase of my image space, and I want to keep a complete bijection between my corrupted image space and my corrupted k-space.

By default, TorchIO does not handle complex-valued images. I started looking into the code to understand how the motion works: TorchIO uses SimpleITK to create the motion. SimpleITK handles complex images, but not for Euler3DTransform (which is what Shaw et al. use to create the motion, and therefore what TorchIO uses). Since Shaw et al. did not share their code, I kept looking for a way to work around this issue in TorchIO. I therefore thought of splitting my image into real and imaginary parts, applying the same motion to each, and recombining the image space after the transformation. I managed to do this after working further on the code (returning the full Fourier transform of the modified k-space instead of only the real part, removing some attributes passed to SimpleITK when the 3D transformation is made, ...). I can provide more details on the changes I made if that helps.
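The real/imaginary split is sound because resampling under a fixed spatial transform is a linear operation on the image values, so transforming the two channels separately and recombining is mathematically equivalent to transforming the complex image as a whole. A minimal sketch of the idea, using `scipy.ndimage.rotate` as an illustrative stand-in for the SimpleITK Euler3DTransform resampling (all names, sizes, and angles here are arbitrary examples, not the actual TorchIO code):

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

# Hypothetical complex "raw image space" (stand-in for the IFFT of raw k-space).
image = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

angle = 7.5  # degrees; an arbitrary example motion

# Apply the SAME spatial transform to the real and imaginary channels,
# then recombine into a complex image.
real_rot = rotate(image.real, angle, reshape=False, order=1)
imag_rot = rotate(image.imag, angle, reshape=False, order=1)
rotated = real_rot + 1j * imag_rot

# Sanity check of the linearity that justifies the split: transforming a
# linear combination equals the same linear combination of the transforms.
combo = rotate(image.real + 2.0 * image.imag, angle, reshape=False, order=1)
assert np.allclose(combo, real_rot + 2.0 * imag_rot)
```

The same argument applies to any interpolation scheme that is linear in the voxel values, which covers the resampling SimpleITK performs.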
I am still unsure whether my implementation is correct, but it seems to give promising results: I retrieve a complex image space, and when I plot its amplitude, the result is similar to what TorchIO gives when I feed it my original amplitude (which loses the phase information). The results are not exactly the same, though, so I do not know whether what I am doing is right.

The interest behind this is being able to simulate many different artifacts while retrieving the associated raw k-space. It also avoids losing the information contained in the phase, and this information could be fed to deep learning models and maybe improve their results (the phase contains a lot of information that we usually discard, but deep learning models could extract many features from it). All this to say that I see many applications for what I am trying to achieve, which is why I think it's extremely cool!

I would like to know what you think about this work. Does it make sense (even physically) to separate the real and imaginary parts of my raw image space, perform the same transformation on each, and then put them back together?

Also, I was looking for information about normalization when performing a Fourier or inverse Fourier transform and did not find any consensus. Should they be normalized by sqrt(N) or not when applied to MRI scans? I found that normalizing preserves the energy of the signal and should be done, but in TorchIO, for example, norm is always None (the NumPy default).

Below you will find an example of the work I have done (the same motion transformation is always applied) on a T2 1.5T brain MRI scan. Thank you so much in advance for your help and time! I deeply appreciate it. I hope you find this work as interesting and exciting as I do. Thank you also for this great package, the code behind it is beautiful!
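On the normalization question: the conventions differ only by constant factors, and either choice round-trips exactly as long as the forward and inverse transforms use the same `norm`, so the choice does not change the simulated artifact (only the overall scale of the k-space values). A small NumPy sketch of the two conventions mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# norm=None (NumPy's default, what TorchIO uses): no scaling on fft, 1/N on
# ifft.  norm="ortho": 1/sqrt(N) on both directions.
X_default = np.fft.fft(x)               # unscaled forward transform
X_ortho = np.fft.fft(x, norm="ortho")   # forward transform scaled by 1/sqrt(N)

# Only the "ortho" convention preserves signal energy (Parseval's theorem
# holds without an extra factor of N):
energy = np.sum(np.abs(x) ** 2)
assert np.allclose(np.sum(np.abs(X_ortho) ** 2), energy)
assert np.allclose(np.sum(np.abs(X_default) ** 2), energy * x.size)

# Both conventions round-trip exactly when fft and ifft agree on the norm:
assert np.allclose(np.fft.ifft(X_default), x)
assert np.allclose(np.fft.ifft(X_ortho, norm="ortho"), x)
```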
@romainVala I am taking the liberty to ping you after emailing @fepegar; he told me that you could help me (fun fact: my mom did her Licence at UPMC 30 years ago, so I'm always happy to chat!). Thanks a lot!
Replies: 3 comments 2 replies
-
Hello,

I am glad you are interested in motion.

About the solution you describe, I do not understand the first part: why do you want to apply the motion to the real and imaginary parts in the first place? If you want to work with "raw k-space" data including the phase information, the only change you need is to return complex data instead of the absolute value at the end of the code (i.e. after the different k-spaces are recombined and the inverse FFT is taken).

We work on a different implementation of motion: instead of applying the motion in the image domain and performing an FFT for each motion step, we apply the motion directly in k-space. In short, translations are just a phase shift in k-space, and rotations are rotations of the k-space. After rotating the k-space grid you end up with a non-uniform FFT (i.e. as if you were acquiring a non-Cartesian grid), so using a NUFFT (non-uniform FFT) for the inverse FFT leads to the motion-corrupted image. We did implement it in a torchio transform here:

Note that it should give exactly the same thing as the implementation in the image domain as done in TorchIO (and by Shaw). The only difference is that you can now use as many time steps as you want. With the current TorchIO implementation, if you have N time steps you need to perform N reslicings and N FFTs, which takes too much time if N > 10 or 20, whereas with the k-space implementation you only need N rotations of the k-space grid (which is quick: no reslicing, just a matrix multiplication) and then one NUFFT.

I pointed out this implementation because I think it is important to better understand how motion "works" via perturbation of the k-space data. But for your need, i.e. keeping the complex data, it is the same solution: you just need to change the last line where the abs is taken after the inverse FFT (done with NUFFT in my case).

I hope it helps. Cheers
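The translation-as-phase-shift point can be verified in a few lines of NumPy (a 1-D toy example with an integer shift, not the actual torchio transform):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal(128)  # 1-D toy signal; the idea is the same in 3-D

# Translation in the image domain by an integer number of samples...
shift = 5
shifted = np.roll(img, shift)

# ...equals multiplying k-space by a linear phase ramp (Fourier shift theorem).
k = np.fft.fft(img)
freqs = np.fft.fftfreq(img.size)  # spatial frequencies in cycles per sample
k_shifted = k * np.exp(-2j * np.pi * freqs * shift)
shifted_via_kspace = np.fft.ifft(k_shifted).real

assert np.allclose(shifted, shifted_via_kspace)
```

For non-integer shifts the same phase ramp performs (sinc) sub-sample translation, which is exactly what makes a continuous translation time course cheap to apply in k-space.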
-
Hi,

Sorry if I get a bit off topic, but I would like to point out other possible developments related to the importance of k-space:

Adding noise in k-space should be as easy as adding it in image space, but it leads to the more realistic Rician noise (when Gaussian noise is added in k-space).

The fact that zero-filling interpolation is the best possible interpolation (compared to sinc interpolation in the image domain) is well known, but playing with successive rotations in k-space I noticed a very interesting property (maybe related; I would be curious to know whether it is mathematically trivial?): here is the same figure where I repeat the previous interpolation 5 times (successively). If you did not notice the blurring after the first step, it is now obvious after 5 steps. I find it quite amazing, and I think this property is worth a closer look; it should be enough to motivate a specific transform. We need one more step for non-linear deformations, which should be possible (and fun to explore!).

The second cool thing is that such a transform would not use the SimpleITK library, which is CPU-only.
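The Gaussian-in-k-space / Rician-in-magnitude point can be illustrated with a small NumPy sketch (a toy constant-signal image so the statistics are easy to read off; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noise-free image with constant complex signal A; sigma is the noise
# standard deviation per real/imaginary channel.
N, A, sigma = 256, 3.0, 1.0
img = np.full((N, N), A, dtype=complex)

# Add i.i.d. complex Gaussian noise in k-space.  Because an orthonormal FFT is
# unitary, this is equivalent to adding i.i.d. complex Gaussian noise of the
# same sigma in image space.
k = np.fft.fft2(img, norm="ortho")
k += sigma * (rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape))
noisy = np.fft.ifft2(k, norm="ortho")

# The magnitude image is then Rician distributed.  One visible symptom is the
# Rician bias: the mean of the magnitude exceeds the true signal A
# (approximately sigma**2 / (2 * A) at this SNR).
mag = np.abs(noisy)
bias = mag.mean() - A  # small but systematically positive
```

The same construction with A = 0 gives the Rayleigh-distributed background noise familiar from magnitude MRI.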
-
About the random_motion_from_time_course code: we got started by looking at this solution, and we just changed the random phase shift in k-space to a phase shift computed from a given (random or not) time course of translations. But the "core" perturbation of k-space is still the same: a phase shift for a translation.
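A minimal sketch of the phase-shift-from-time-course idea (a toy 2-D example with hypothetical names and values, not the actual random_motion_from_time_course code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D object; each row of k-space stands in for one phase-encode line,
# acquired at a successive time point.
img = rng.standard_normal((64, 64))
k = np.fft.fft2(img)

# Hypothetical translation time course: the x-shift (in pixels) at the moment
# each k-space line is acquired -- here a sudden 3-pixel movement halfway
# through the acquisition.
shifts = np.zeros(64)
shifts[32:] = 3.0

# Fourier shift theorem applied line by line: each acquired line carries the
# phase ramp corresponding to the object position at its acquisition time.
freqs = np.fft.fftfreq(64)
phase = np.exp(-2j * np.pi * freqs[None, :] * shifts[:, None])
corrupted = np.fft.ifft2(k * phase)

# A zero time course leaves the image untouched; a non-zero one produces the
# classic translation ghosting along the phase-encode direction.
untouched = np.fft.ifft2(k * np.exp(-2j * np.pi * freqs[None, :] * np.zeros((64, 1))))
```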
Looking at the full complex data is often a good idea, especially with motion. For instance, suppose you want to invert the problem: given a motion-corrupted image and knowing the exact motion time course, how do you reconstruct a motion-free image?
If there is only translation, then it is simple to do (with 100% accuracy), but only if you keep the complex data. (This is easily understood if you consider how motion can be applied directly in k-space; I come back to this later.)
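A toy NumPy sketch of this exact inversion for translation-only motion (per-line shifts along one axis; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth complex image and a known per-line translation time course.
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
shifts = rng.uniform(-2, 2, size=64)  # x-shift in pixels per k-space line

freqs = np.fft.fftfreq(64)
phase = np.exp(-2j * np.pi * freqs[None, :] * shifts[:, None])

# Simulate motion: each acquired k-space line carries the phase ramp of the
# object position at its acquisition time.
k_corrupted = np.fft.fft2(img) * phase

# With COMPLEX data and a known time course, inversion is exact: divide the
# phase back out line by line before the inverse FFT.
recovered = np.fft.ifft2(k_corrupted / phase)
assert np.allclose(recovered, img)

# Starting from the magnitude image alone, this phase information is gone,
# so the exact inversion is no longer possible.
```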
Actually we do alwa…