
0.44.0: New AdEMAMix optimizer, Embeddings quantization, and more!

Released by @matthewdouglas on 29 Sep 16:31

New optimizer: AdEMAMix

The AdEMAMix optimizer is a modification of AdamW that tracks two EMAs of the gradient, one fast-changing and one slow-changing, to better leverage past gradients. This allows for faster convergence with less training data and improved resistance to forgetting.
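For context, a sketch of the update following the AdEMAMix paper: the optimizer keeps a fast EMA m1 with decay beta1, a slow EMA m2 with decay beta3, and a second-moment estimate v with decay beta2, then steps with

    theta = theta - lr * ((m1_hat + alpha * m2) / (sqrt(v_hat) + eps) + weight_decay * theta)

where the hats denote the usual Adam bias correction (the slow EMA m2 is not bias-corrected) and alpha weights the slow EMA's contribution.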

We've implemented 8-bit and paged variants: AdEMAMix, AdEMAMix8bit, PagedAdEMAMix, and PagedAdEMAMix8bit. They follow the same API as the existing optimizers.

import bitsandbytes as bnb

# Note the third entry in betas: beta3 is the decay rate of the slow EMA,
# and alpha weights the slow EMA's contribution to the update.
optimizer = bnb.optim.PagedAdEMAMix8bit(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999, 0.9999),
    alpha=5.0,
    eps=1e-8,
    weight_decay=1e-2,
)

8-bit Optimizers Update

The block size for all 8-bit optimizers has been reduced from 2048 to 256 in this release. This is a change from the original implementation proposed in the paper, and it improves accuracy.
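The smaller block size is applied internally, so no code changes are needed. For reference, a minimal sketch of using an existing 8-bit optimizer (`model` is assumed to be defined):

import bitsandbytes as bnb

# No API change: the reduced block size is used internally by all
# 8-bit optimizers, e.g. Adam8bit.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)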

CUDA Graphs support

A fix to enable CUDA Graphs capture of kernel functions was made in #1330. This allows for performance improvements with inference frameworks like vLLM. Thanks @jeejeelee!
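As an illustration, here is a minimal sketch of the standard torch.cuda.graph capture-and-replay pattern that such frameworks rely on; `model`, the input shape, and the dtype are assumptions for the example:

import torch

# Assumed: `model` is a quantized module already on the GPU.
static_input = torch.randn(8, 64, device="cuda", dtype=torch.float16)

# Warm up on a side stream so capture starts from a clean state.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass into a graph, then replay it cheaply.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

static_input.copy_(torch.randn_like(static_input))
g.replay()  # static_output now holds the result for the new input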

Quantization for Embeddings

LLMs continue to trend toward larger vocabularies, so embeddings can take up a significant portion of a quantized model's footprint. We now have implementations of Embedding4bit and Embedding8bit thanks to @galqiwi!

Example usage:

import torch
import torch.nn as nn

from bitsandbytes.nn import Embedding4bit

# Build a reference embedding and a 4-bit counterpart with the same
# shape, then copy the weights over.
fp16_module = nn.Embedding(128, 64)
quantized_module = Embedding4bit(128, 64)

quantized_module.load_state_dict(fp16_module.state_dict())

# The weights are quantized when the module is moved to the GPU.
quantized_module = quantized_module.to(0)
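After the move, the module can be used like a regular embedding. A small usage sketch (the input values are assumptions for the example):

ids = torch.randint(0, 128, (4,), device="cuda")
embeddings = quantized_module(ids)  # weights are dequantized on the fly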

Continuous Builds

We are now building binary wheels for each change on main. These builds can be used to preview upcoming changes.

🚤 Continuous Build

What's Changed

New Contributors

Full Changelog: 0.43.3...v0.44.0