ASIF

Coupled Data Turns Unimodal Models to Multimodal Without Training

Open In Colab

This repository contains a demo of ASIF, written by me (the first author of the paper).

It is a self-contained notebook for running ASIF models built on different backbones and datasets, and it reproduces the main results reported in the paper within minutes. The free GPU runtime of Colab is enough to run all the code; the dataset embeddings are precomputed and downloaded from my Google Drive.

Notebook content

  • Setup of the ASIF model
  • DEMO1: Zero-shot classification experiment (Fig. 5 in the paper)
  • DEMO2: Calculate the similarity between uploaded images and texts
  • DEMO3: Interpretability: a deep dive into a single classification
  • DEMO4: Universal classifier using images from your webcam

Paper

Paper: ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training

By: Antonio Norelli, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele Rodolà, Francesco Locatello

TLDR: The meaning was already there: connecting text and images without training a neural network to do so.

Abstract: CLIP proved that aligning visual and language spaces is key to solving many vision tasks without explicit training, but required training image and text encoders from scratch on a huge dataset. LiT improved this by only training the text encoder and using a pre-trained vision network. In this paper, we show that a common space can be created without any training at all, using single-domain encoders (trained with or without supervision) and a much smaller amount of image-text pairs. Furthermore, our model has unique properties. Most notably, deploying a new version with updated training samples can be done in a matter of seconds. Additionally, the representations in the common space are easily interpretable, as every dimension corresponds to the similarity of the input to a unique entry in the multimodal dataset. Experiments on standard zero-shot visual benchmarks demonstrate the typical transfer ability of image-text models. Overall, our method represents a simple yet surprisingly strong baseline for foundation multimodal models, raising important questions on their data efficiency and on the role of retrieval in machine learning.
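
For a quick sense of how ASIF works before opening the notebook, below is a minimal NumPy sketch of the relative-representation idea described in the abstract. This is an editorial illustration rather than code from the notebook: the function names and the top-k / exponent hyperparameters (k, p) are illustrative assumptions, not necessarily the settings used in the paper.

import numpy as np

def relative_rep(query_emb, anchor_embs, k=800, p=8):
    # Encode a query as its cosine similarities to the anchor embeddings.
    # k (top-k sparsification) and p (value exponent) are illustrative
    # hyperparameters; the paper may use different settings.
    q = query_emb / np.linalg.norm(query_emb)
    a = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    sims = a @ q                           # one coordinate per anchor pair
    sims[np.argsort(sims)[:-k]] = 0.0      # keep only the k largest similarities
    sims = np.sign(sims) * np.abs(sims) ** p
    return sims / (np.linalg.norm(sims) + 1e-12)

def asif_zero_shot(image_emb, class_text_embs, anchor_img_embs, anchor_txt_embs):
    # Pick the class whose caption lands closest to the image in the shared space.
    r_img = relative_rep(image_emb, anchor_img_embs)
    scores = [relative_rep(t, anchor_txt_embs) @ r_img for t in class_text_embs]
    return int(np.argmax(scores))

An ASIF "model" is then nothing more than two frozen unimodal encoders plus the embeddings of the anchor image-text pairs, which is why a new version with updated training samples can be deployed in seconds: only the anchor embedding matrices need to be recomputed.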

Instructions

Click the big blue Open In Colab button and run all the cells of the notebook. That's it.

You can adjust backbones and datasets using the convenient drop-down menus.

Cite

If you liked our work and want to cite it in yours:

@article{norelli2024asif,
  title={ASIF: Coupled data turns unimodal models to multimodal without training},
  author={Norelli, Antonio and Fumero, Marco and Maiorca, Valentino and Moschella, Luca and Rodola, Emanuele and Locatello, Francesco},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
