
Releases: adapter-hub/adapters

Adapters v1.0.1

02 Nov 18:47

This version is built for Hugging Face Transformers v4.45.x.

New

Changes

  • Upgrade supported Transformers version (@calpt via #751)

Fixed

  • Fix import error for huggingface-hub versions >=0.26.0 & update notebooks (@lenglaender via #750)
  • Fix output_embeddings get/set for empty active head (@calpt via #754)
  • Fix LoRA parallel composition (@calpt via #752)

Adapters v1.0.0

10 Aug 15:47

Blog post: https://adapterhub.ml/blog/2024/08/adapters-update-reft-qlora-merging-models

This version is built for Hugging Face Transformers v4.43.x.

New Adapter Methods & Model Support

Breaking Changes & Deprecations

  • Remove support for loading from archived Hub repository (@calpt via #724)
  • Remove deprecated add_fusion() & train_fusion() methods (@calpt via #714)
  • Remove deprecated arguments in push_adapter_to_hub() method (@calpt via #724)
  • Deprecate support for passing Python lists to adapter activation (@calpt via #714); see the sketch below for the recommended replacement
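
The deprecation above replaces list-based activation with explicit composition blocks. A minimal sketch of the replacement usage, assuming two bottleneck adapters named "a" and "b":

```python
from adapters import AutoAdapterModel
from adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("a")
model.add_adapter("b")

# Deprecated style: model.set_active_adapters(["a", "b"])
# Recommended: wrap the adapter names in an explicit composition block.
model.set_active_adapters(Stack("a", "b"))
```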

Minor Fixes & Changes

Adapters v0.2.2

27 Jun 20:23

This version is built for Hugging Face Transformers v4.40.x.

New

  • Add example notebook for embeddings training & update docs (@hSterz via #706)

Changed

  • Upgrade supported Transformers version (@calpt via #697)
  • Add download redirect for AH adapters to HF (@calpt via #704)

Fixed

  • Fix saving adapter model with custom heads (@hSterz via #700)
  • Fix moving adapter head to device with adapter_to() (@calpt via #708)
  • Fix importing encoder-decoder adapter classes (@calpt via #711)

Adapters v0.2.1

21 May 19:16

This version is built for Hugging Face Transformers v4.39.x.

New

  • Support saving & loading via Safetensors and use_safetensors parameter (@calpt via #692)
  • Add adapter_to() method for moving & converting adapter weights (@calpt via #699); see the usage sketch below
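
A hedged sketch of both additions; the exact parameter names and the adapter_to() signature are assumptions based on the descriptions above, see #692 and #699 for the authoritative API:

```python
import torch
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("my_adapter")

# Save the adapter in Safetensors format via the new parameter, then reload it.
model.save_adapter("./my_adapter", "my_adapter", use_safetensors=True)
model.load_adapter("./my_adapter")  # presumably picks up the safetensors files from the save directory

# Move and/or convert the weights of a single adapter
# (assumed signature: adapter_to(name, device=None, dtype=None)).
model.adapter_to("my_adapter", device="cuda:0", dtype=torch.bfloat16)
```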

Fixed

  • Fix reading model info in get_adapter_info() for HF (@calpt via #695)
  • Fix load_best_model_at_end=True for AdapterTrainer with quantized training (@calpt via #699)

Adapters v0.2.0

25 Apr 13:54

This version is built for Hugging Face Transformers v4.39.x.

New

Changed

Fixed

  • Fix DataParallel training with adapters (@calpt via #658)
  • Fix embedding training bug (@hSterz via #655)
  • Fix fp16/bf16 for Prefix Tuning (@calpt via #659)
  • Fix training error with AdapterDrop and Prefix Tuning (@TimoImhof via #673)
  • Fix default cache path for adapters loaded from AH repo (@calpt via #676)
  • Fix skipping of composition blocks in non-applicable layers (@calpt via #665)
  • Fix UniPELT LoRA default config (@calpt via #682)
  • Fix compatibility of adapters with HF Accelerate auto device-mapping (@calpt via #678)
  • Use default head dropout prob if not provided by model (@calpt via #685)

Adapters v0.1.2

28 Feb 21:47

This version is built for Hugging Face Transformers v4.36.x.

New

Changed

  • Upgrade supported Transformers version (@calpt via #617)
  • Simplify XAdapterModel implementations (@calpt via #641)

Fixed

  • Fix prediction head loading for T5 (@calpt via #640)

Adapters v0.1.1

09 Jan 21:16

This version is built for Hugging Face Transformers v4.35.x.

New

  • Add leave_out to LoRA and (IA)³ (@calpt via #608); see the sketch below
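
A minimal sketch of the new option, assuming leave_out takes a list of layer indices to skip, as it does for bottleneck adapter configs:

```python
from adapters import AutoAdapterModel, IA3Config, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# Leave LoRA weights out of the first two transformer layers.
model.add_adapter("lora_adapter", config=LoRAConfig(leave_out=[0, 1]))

# The same option for (IA)³.
model.add_adapter("ia3_adapter", config=IA3Config(leave_out=[0, 1]))
```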

Fixed

  • Fix error in push_adapter_to_hub() due to deprecated args (@calpt via #613)
  • Fix Prefix-Tuning for T5 models where d_kv != d_model / num_heads (@calpt via #621)
  • [Bart] Move CLS rep extraction from EOS tokens to head classes (@calpt via #624)
  • Fix adapter activation with skip_layers/AdapterDrop training (@calpt via #634)

Docs & Notebooks

  • Update notebooks & add new complex configuration demo notebook (@hSterz & @calpt via #614)

Adapters 0.1.0

24 Nov 10:10

Blog post: https://adapterhub.ml/blog/2023/11/introducing-adapters/

With the new Adapters library, we fundamentally refactored the adapter-transformers library and added support for new models and adapter methods.

This version is compatible with Hugging Face Transformers version 4.35.2.

For a guide on how to migrate from adapter-transformers to Adapters, have a look at https://docs.adapterhub.ml/transitioning.md.
Changes are given compared to the latest adapter-transformers release, v3.2.1.

New Models & Adapter Methods

Breaking Changes

Changes Due to the Refactoring

  • Refactored the implementation of all already supported models (@calpt, @lenglaender, @hSterz, @TimoImhof)
  • Separated the model config (PretrainedConfig) from the adapters config (ModelAdaptersConfig) (@calpt)
  • Updated the whole documentation, Jupyter notebooks and example scripts (@hSterz, @lenglaender, @TimoImhof, @calpt)
  • Introduced the load_model function to load models containing adapters. This replaces the Hugging Face from_pretrained function used in the adapter-transformers library (@lenglaender); see the sketch after this list
  • Shared more logic for adapter composition between different composition blocks (@calpt via #591)
  • Added backwards compatibility tests, which allow testing whether changes to the codebase, such as refactoring, impair the functionality of the library (@TimoImhof via #596)
  • Refactored the EncoderDecoderModel by introducing a new mixin (ModelUsingSubmodelsAdaptersMixin) for models that contain other models (@lenglaender)
  • Renamed the class AdapterConfigBase to AdapterConfig (@hSterz via #603)
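
A hedged sketch of the new load_model entry point mentioned above; the signature load_model(path, model_class) is assumed from the description and may differ from the actual API:

```python
from transformers import BertModel

from adapters import load_model

# Load a full model checkpoint that contains adapters, instead of calling
# BertModel.from_pretrained() as in the adapter-transformers library.
model = load_model("./saved_model_with_adapters", BertModel)
```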

Fixes and Minor Improvements

  • Fixed EncoderDecoderModel generate function (@lenglaender)
  • Fixed deletion of invertible adapters (@TimoImhof)
  • Automatically convert heads when loading with XAdapterModel (@calpt via #594)
  • Fix training T5 adapter models with Trainer (@calpt via #599)
  • Ensure output embeddings are frozen during adapter training (@calpt via #537)

adapter-transformers v3.2.1

06 Apr 21:17

This is the last release of adapter-transformers. See here for the legacy codebase: https://github.com/adapter-hub/adapter-transformers-legacy.

Based on transformers v4.26.1

Fixed

  • Fix compacter init weights (@hSterz via #516)
  • Restore compatibility of GPT-2 weight initialization with Transformers (@calpt via #525)
  • Restore Python 3.7 compatibility (@lenglaender via #510)
  • Fix LoRA & (IA)³ implementation for Bart & MBart (@calpt via #518)
  • Fix resume_from_checkpoint in AdapterTrainer class (@hSterz via #514)

v3.2.0

03 Mar 14:08

Based on transformers v4.26.1

New

New model integrations

Misc

  • Add support for adapter configuration strings (@calpt via #465, #486)
    This makes it easy to specify adapter configurations concisely. To create a Pfeiffer adapter with reduction factor 16, you can now use pfeiffer[reduction_factor=16]. Especially for experiments using different hyperparameters or the example scripts, this can come in handy; see the sketch after this list.
  • Add Stack, Parallel & BatchSplit composition to prefix tuning (@calpt via #476)
    In previous adapter-transformers versions, you could combine multiple bottleneck adapters, using them in parallel or stacking them. Now, this is also possible for prefix-tuning adapters: add multiple prefixes to the same model to combine the functionality of multiple adapters (Stack) or perform several tasks simultaneously (Parallel, BatchSplit).
  • Enable parallel sequence generation with adapters (@calpt via #436)
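
A minimal sketch combining the two additions above, written against the adapter-transformers v3.2 API; the "prefix_tuning" config string identifier and the adapter names are assumptions:

```python
from transformers import AutoAdapterModel
from transformers.adapters.composition import Parallel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Configuration string: a Pfeiffer adapter with reduction factor 16.
model.add_adapter("task_a", config="pfeiffer[reduction_factor=16]")

# Prefix-tuning adapters can now be combined via composition blocks.
model.add_adapter("prefix_a", config="prefix_tuning")
model.add_adapter("prefix_b", config="prefix_tuning")
model.set_active_adapters(Parallel("prefix_a", "prefix_b"))
```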

Changed

Fixed

  • Fixes for GLUE & dependency parsing example scripts (@calpt via #430, #454)
  • Fix access to shared parameters of compacter (e.g. during sequence generation) (@calpt via #440)
  • Fix reference to adapter configs in T5EncoderModel (@calpt via #437)
  • Fix DeBERTa prefix tuning with enabled relative attention (@calpt via #451)
  • Fix gating for prefix tuning layers (@calpt via #471)
  • Fix input to T5 adapter layers (@calpt via #479)
  • Fix AdapterTrainer hyperparameter tuning (@dtuit via #482)
  • Move loading best adapter to AdapterTrainer class (@MaBeHen via #487)
  • Make HuggingFace Hub Mixin work with newer utilities (@Helw150 via #473)
  • Only compute fusion reg loss if fusion layer is trained (@calpt via #505)