Extending and generalising to other modalities #20
Replies: 1 comment
-
Hi @mdkozlowski! Thanks so much for your question, and it's a great one too. I originally started making fusilli just for my PhD project on combining MRI with tabular data, which is why it's so tabular+image focused in its design. I only decided to make it a Python library around halfway through, haha. I'll make this into a GitHub issue so that I can follow up easily when I get around to it, and once I do I'd love for you to contribute some models - that would be excellent. I'm not sure what kind of timeline we're looking at, though - I just submitted to the Journal of Open Source Software and I'm not sure if I can change fusilli much while it's under review. Thanks again, I'll keep you updated on how it goes 🌸
-
Hi,
For my use cases I'm interested in some additional modalities that aren't currently supported in fusilli, such as graph-structured data (as inputs) and text data. Fusion of multiple modalities on a graph input (such as tabular + textual features, per-node) is specifically interesting. On the other hand, my use case doesn't make use of images or image models.
If it makes sense and depending on interest, I'd be happy to contribute these kinds of models to the project.
At the moment the dataloaders and data classes are quite specific to combinations of tabular & images. Do you see any value in making the data classes more generic? For example, removing dependencies on `image_downsample_size` in `TrainTestDataModule`, and using naming in the project like `embedding` or `dense_representation` - agnostic to the embedding modality.