Hi, I am reading your excellent tutorial on the 2D VQVAE + Transformer: https://github.com/Project-MONAI/GenerativeModels/blob/main/tutorials/generative/2d_vqvae_transformer/2d_vqvae_transformer_tutorial.ipynb
I see that you flatten the 2D output of the VQVAE before feeding it into the transformer. How would you do this if the VQVAE were 3D? What I mean is: given a sequence of images, you use the 2D VQVAE to obtain the latent representation of each image in the sequence, and then you somehow feed that sequence of latents into the Transformer to predict the next image in the sequence.
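For concreteness, this is roughly what I have in mind. It is only a sketch: the per-frame index grids are random placeholders for whatever the 2D VQVAE would actually produce, a plain PyTorch causal transformer stands in for the tutorial's DecoderOnlyTransformer, and all sizes are made up.

```python
import torch
import torch.nn as nn

num_embeddings = 256          # VQVAE codebook size (placeholder)
h, w = 8, 8                   # spatial size of each frame's latent grid (placeholder)
frames = 4                    # number of context frames
tokens_per_frame = h * w

# Per-frame index grids -> one long token stream: frame 0 tokens, frame 1 tokens, ...
frame_indices = torch.randint(0, num_embeddings, (frames, h, w))
sequence = frame_indices.reshape(frames * tokens_per_frame)            # (frames*h*w,)

# Prepend a BOS token and shift, as in the 2D tutorial, so position t predicts token t.
bos = num_embeddings                                                    # extra token id
inp = torch.cat([torch.tensor([bos]), sequence[:-1]]).unsqueeze(0)      # (1, L)
target = sequence.unsqueeze(0)                                          # (1, L)

# Minimal causal transformer over the flattened token stream.
d_model = 128
embed = nn.Embedding(num_embeddings + 1, d_model)
pos = nn.Embedding(frames * tokens_per_frame, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(d_model, num_embeddings)

L = inp.shape[1]
causal_mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
x = embed(inp) + pos(torch.arange(L)).unsqueeze(0)
hidden = encoder(x, mask=causal_mask)                                   # (1, L, d_model)
logits = head(hidden)                                                   # (1, L, num_embeddings)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), target)
print(loss.item())

# To predict the next frame, sample h*w more tokens autoregressively,
# reshape them to (h, w) and decode them with the 2D VQVAE decoder.
```

Is flattening frame by frame like this the intended way to extend the tutorial, or would you combine the per-frame latents differently before the transformer?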
Replies: 2 comments

-
@virginiafdez @Warvito may have some insights here.
-
We just released a 3D VAE. Hope it can help. https://github.com/Project-MONAI/tutorials/tree/main/generation/maisi
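In case the shape bookkeeping is the sticking point, here is a minimal sketch of how a 3D latent could be flattened for the transformer, assuming the 3D model yields a (D, H, W) grid of codebook indices per volume; the random indices and sizes below are placeholders, not the actual output of the MAISI VAE.

```python
import torch

# Placeholder for the (B, D, H, W) grid of codebook indices that a 3D VQVAE
# (or a quantised 3D latent) would give for one image sequence; the sizes
# here are made up for illustration.
B, D, H, W = 1, 8, 16, 16
num_embeddings = 256
latent_indices = torch.randint(0, num_embeddings, (B, D, H, W))

# Flatten in raster order (slice, then row, then column) so the transformer
# sees a single 1D token sequence, exactly as the 2D tutorial flattens (H, W).
tokens = latent_indices.reshape(B, -1)            # (B, D*H*W)

# After autoregressive generation, reshape back before decoding to images.
next_volume_indices = tokens.reshape(B, D, H, W)
print(tokens.shape, next_volume_indices.shape)
```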