The new v3.2 of `adapter-transformers` adds support for adapters for several new model architectures:
- ALBERT
- BertGeneration
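
Adapters can be added to these newly supported architectures through the same interface as for existing ones. A minimal sketch for ALBERT (the checkpoint, adapter name, and config choice are illustrative, not prescribed by the release):

```python
from transformers import AlbertModel  # adapter-transformers patches this class

# Load one of the newly supported architectures.
model = AlbertModel.from_pretrained("albert-base-v2")

# Add a bottleneck adapter ("pfeiffer" is a built-in config) and switch the
# model into adapter training mode, which freezes the base model weights.
model.add_adapter("example_adapter", config="pfeiffer")
model.train_adapter("example_adapter")
```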

## Other notable changes

⚠️ **Breaking change**: The latest release removes the `MultiLingAdapterArguments` class, which was previously used to add adapter support to training scripts.
It is now recommended to use the [`AdapterArguments`](https://docs.adapterhub.ml/classes/adapter_training.html#transformers.adapters.training.setup_adapter_training) class and the [`setup_adapter_training`](https://docs.adapterhub.ml/classes/adapter_training.html#transformers.adapters.training.setup_adapter_training) function instead. [Learn more](https://docs.adapterhub.ml/training.html).
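
For scripts migrating away from `MultiLingAdapterArguments`, the new setup might look like the following minimal sketch (the model checkpoint, head, and adapter name are illustrative; see the linked docs for the full argument list):

```python
from transformers import AutoAdapterModel, HfArgumentParser
from transformers.adapters.training import AdapterArguments, setup_adapter_training

# Parse adapter-related CLI flags such as --train_adapter and --adapter_config.
parser = HfArgumentParser(AdapterArguments)
(adapter_args,) = parser.parse_args_into_dataclasses()

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_classification_head("task", num_labels=2)

# Adds and activates the task adapter (freezing the base model)
# when --train_adapter is passed on the command line.
setup_adapter_training(model, adapter_args, adapter_name="task")
```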

Finally, version 3.2 of `adapter-transformers` updates the underlying transformers version from v4.23.1 to v4.26.1.

## Fixed
- Fixes for GLUE & dependency parsing example scripts
- Fix access to shared parameters of compacter (e.g. during sequence generation)
- Fix reference to adapter configs in `T5EncoderModel`
- Make HuggingFace Hub Mixin work with newer utilities
- Only compute fusion reg loss if the fusion layer is trained


## References

- Pfeiffer, J., Ruder, S., Vulić, I., & Ponti, E. (2023). Modular Deep Learning. arXiv, abs/2302.11529.
