Multi-modality embedding #7862
Unanswered
karllandheer
asked this question in Q&A
Replies: 1 comment 1 reply

- karllandheer: Hello, let's say I want to train an autoencoder for two different modalities, where one may even be 2D and the other 3D. Is it possible to have some kind of joint embedding in an autoencoder via MONAI? Does anyone have an idea of how to go about this? Thanks very much in advance for any help!

- Hi @karllandheer, you may need to design a custom autoencoder architecture that can handle both 2D and 3D data. One approach is to use a separate encoder for each modality, with each encoder mapping its input into a shared embedding space. MONAI offers flexibility in building custom network architectures, making it possible to design such an autoencoder.
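A minimal sketch of the separate-encoders-into-a-shared-embedding idea, in plain PyTorch (MONAI networks are PyTorch modules, so a custom module like this composes with the rest of MONAI). The class name `JointAutoEncoder`, the layer sizes, and `embed_dim` are illustrative assumptions, not a MONAI API; the 2D and 3D decoders needed for actual reconstruction training are omitted for brevity.

```python
import torch
import torch.nn as nn

class JointAutoEncoder(nn.Module):
    """Two modality-specific encoders projecting into one shared embedding space."""

    def __init__(self, embed_dim=128):
        super().__init__()
        # 2D branch: strided Conv2d downsampling, then a linear head to embed_dim
        self.enc2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # 3D branch: strided Conv3d downsampling to the *same* embed_dim
        self.enc3d = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x2d, x3d):
        # Both modalities land in the same embed_dim-dimensional space,
        # so their embeddings can be compared or fused downstream.
        return self.enc2d(x2d), self.enc3d(x3d)

model = JointAutoEncoder(embed_dim=128)
z2d, z3d = model(
    torch.randn(2, 1, 64, 64),        # batch of 2D images
    torch.randn(2, 1, 32, 32, 32),    # batch of 3D volumes
)
print(z2d.shape, z3d.shape)  # both torch.Size([2, 128])
```

For training, one would add modality-specific decoders and sum the two reconstruction losses; an optional alignment term (e.g. a contrastive loss between `z2d` and `z3d` for paired data) would encourage the shared space to be genuinely joint.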