
[Proposal] Add MVP Support For 1-2 Models Per-Modality #710

Open
1 task done
4gatepylon opened this issue Aug 31, 2024 · 3 comments
Labels
complexity-high Very complicated changes for people to address who are quite familiar with the code discussion No action needed yet

Comments

4gatepylon commented Aug 31, 2024

Is this out of scope? I hope not, would be nice to have a one-stop shop for interpretability tooling.

Proposal

It should be easy to get the most bare-bones interpretability research off the ground for models that are not just transformer language models. Obviously, TransformerLens should not have to support every model out there, but I think it would be valuable to support just 1-2 very popular models per modality.

  • Speech: It should include Whisper, an ASR model (I just started working on this today: https://github.com/4gatepylon/WhisperLens)
  • Vision: It should be possible to use a ResNet with hook points in it (I know this is not a transformer, but I think the ability is worthwhile) and 1-2 basic ViTs like those from https://github.com/soniajoseph/ViT-Prisma.
    • Support a basic VAE
    • Support a basic GAN (I don't know much about flow-based models, so I can't say much there)
  • Music Generation: probably 1 transformer model
  • Probably include 1-2 versions of Mamba

Not sure what I think about diffusion.

The scope of this is simple:

  1. Be able to hook into a prototypical model for any modality
  2. Have easy functionality to get step-wise computation when running an iterative model

With these two features it's easy to train SAEs on top, etc. (even if it isn't optimally efficient).
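The two scope items above can be sketched in plain Python. Note that `HookPoint`, `ToyRNN`, and `run_with_hooks` here are illustrative stand-ins that mimic TransformerLens's pattern on a toy iterative model; they are not actual library code.

```python
# Toy sketch of hook points plus step-wise capture for an iterative model.
# HookPoint and ToyRNN are illustrative stand-ins, NOT TransformerLens code.

class HookPoint:
    """Identity wrapper that lets callers observe/modify a value."""
    def __init__(self, name):
        self.name = name
        self.hooks = []

    def __call__(self, value):
        for hook in self.hooks:
            value = hook(value, self)
        return value


class ToyRNN:
    """An 'iterative model': repeatedly doubles its state, one hook call per step."""
    def __init__(self):
        self.hook_state = HookPoint("hook_state")

    def run(self, x, steps=3):
        for _ in range(steps):
            x = self.hook_state(x * 2)  # hook sees the state at every step
        return x


def run_with_hooks(model, x, fwd_hooks):
    """Attach hooks, run, then detach -- mirroring TransformerLens's pattern."""
    for name, fn in fwd_hooks:
        getattr(model, name).hooks.append(fn)
    try:
        return model.run(x)
    finally:
        for name, fn in fwd_hooks:
            getattr(model, name).hooks.remove(fn)


model = ToyRNN()
trace = []
out = run_with_hooks(model, 1, [("hook_state", lambda v, hp: trace.append(v) or v)])
print(out, trace)  # 8 [2, 4, 8]
```

The point is that step-wise capture falls out for free once the iterative model routes its per-step state through a hook point.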

Motivation

There are resources out there for this stuff, but they are a little scattered, and often there isn't a nice tutorial to just "get SAEs for my music gen model," for example.

Pitch

This is not that hard. All it entails is:

  1. Models properly extend HookedRootModule and have HookPoints
  2. Tests that make sure this works
  3. Some sort of tutorial ipynb to do basic steering or train an SAE on a layer
  4. (BONUS) A single decent trained SAE and steering example per modality (this may leak into SAE Lens or be out of scope)

It might be possible to use code generation or an AI assistant to produce basic versions of this.

It has benefits:

  1. Helps us work through the generalization steps that will eventually be necessary if we ever want a one-stop shop for interpretability tooling, while NOT requiring us to tackle genuinely unclear questions like "how do I do attention-table visualization for sound?"
  2. Speeds people up.

There are a lot of little questions that come up all the time, like:

  • Do we see induction heads or similar mechanisms in music or TTS?
  • Can I do linear steering in xyz modality?
  • Can SAEs disentangle VAE latents (i.e., is there a vector-direction decomposition where each direction gives me more or less of an interpretable single feature)?
  • Can I optimize xyz continuous modality's inputs to maximize a feature direction for an SAE and get more interpretable inputs than for neurons?
  • Can I find the same interpretable feature if I trained with xyz method instead of abc method?

These should be as easy as calling run_with_hooks on a model that works out of the box, plus calling some function on your activations. These things could then take a couple of hours and not be bug-prone, instead of twice that and somewhat finicky.
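As a concrete illustration of one of those questions, linear steering is just a forward hook that adds a fixed "feature direction" to an intermediate activation. The sketch below is a toy stand-in: the two-stage `forward` model and the vectors are made up for illustration.

```python
# Toy sketch of linear steering: a forward hook adds a fixed "feature
# direction" to an intermediate activation. All names are illustrative.

def forward(x, hooks=()):
    """Tiny two-stage 'model': hidden = 3*x elementwise, output = hidden + 1."""
    hidden = [3 * v for v in x]
    for hook in hooks:
        hidden = hook(hidden)  # hooks get to edit the activation in flight
    return [h + 1 for h in hidden]


def make_steering_hook(direction, scale):
    """Return a hook that nudges the activation along `direction`."""
    def hook(act):
        return [a + scale * d for a, d in zip(act, direction)]
    return hook


baseline = forward([1.0, 2.0])
steered = forward([1.0, 2.0], hooks=[make_steering_hook([1.0, -1.0], 0.5)])
print(baseline)  # [4.0, 7.0]
print(steered)   # [4.5, 6.5]
```

With hook points already in place, the experiment is a few lines; without them, it means patching model internals by hand per modality.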

Alternatives

The current paradigm is that you spend the first few hours of a project getting hook points and SAEs integrated. It's not the end of the world. You can also use PyTorch hooks, but we use TransformerLens for a reason.

Checklist

  • I have checked that there is no similar issue in the repo (required)
@bryce13950 bryce13950 added the complexity-high Very complicated changes for people to address who are quite familiar with the code label Sep 30, 2024
@bryce13950
Collaborator

This is a lot more complicated than it seems. TransformerLens currently supports 185 models. If you look through the source code of transformers, you will see that every model architecture has its own implementation. TransformerLens is already trying to condense that into unified implementations, which has caused a ton of problems. Models that used to work fine seem to break over time without much notice, because someone tweaks something for a new model without realizing that one of the other 185 models has now become inaccurate. I am currently in the process of auditing every single model and fixing all implementations without breaking the others. This turns into a lot of additional components to handle specific nuances for some models, and multiple layers of complexity that grow substantially with every supported model.

Now, taking what we have and adding whole new types of models would exponentially grow the existing issues. What I think is a better solution would be to turn TransformerLens into a platform by adding programmatic hook points, and allow someone to essentially build plugins that can, for instance, add support for vision models. This would open up a whole myriad of new possibilities for larger interpretability tooling built on top of TransformerLens, and allow people to build and extend the code in ways that we cannot yet imagine. It would also allow us to focus on making TransformerLens as good at transformer interpretability as it possibly can be, instead of trying to make it a one-size-fits-all solution.
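The plugin idea described above could be as simple as a registry that adapters attach themselves to. This is a hypothetical sketch: none of these names (`MODEL_ADAPTERS`, `register_adapter`, `VisionAdapter`) exist in TransformerLens; it only illustrates the shape such an extension point might take.

```python
# Hypothetical sketch of a plugin registry for model adapters. None of these
# names exist in TransformerLens -- this only illustrates the idea.

MODEL_ADAPTERS = {}


def register_adapter(modality):
    """Class decorator: a plugin registers an adapter for a modality."""
    def decorator(cls):
        MODEL_ADAPTERS[modality] = cls
        return cls
    return decorator


@register_adapter("vision")
class VisionAdapter:
    """A plugin would wrap a vision model and expose its hook point names."""
    def hook_names(self):
        return ["hook_patch_embed", "hook_resid_post"]


adapter = MODEL_ADAPTERS["vision"]()
print(adapter.hook_names())
```

The core library would then only need to know about the registry interface, while child projects supply the per-modality implementations.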

For extending TransformerLens, there are quite a few pieces of the code that need to be cleaned up to allow for this sort of capability, but there are places where it can feasibly start. If you are interested in doing something like this, then I can add some experimental points that you could hook into with a child project. It is a pretty low priority at the moment and something far out in the future, but I would be happy to start playing around with it now if there is a reason to do so.

@bryce13950 bryce13950 added the discussion No action needed yet label Sep 30, 2024
@ashwath98

Hey, I'm interested in creating support for vision models (I have to do it for one of the baselines for my project), such as one of the ViTs. Could you point me to where to look so I can help do this?

@bryce13950
Collaborator

Which models do you want to use? I have helped a couple people get them up and running, and I am sure they would be happy to share their code for the models they have used. If the models do not intersect, then I would still be happy to help you troubleshoot it.
