Foundation Model Stack is a collection of components for development, inference, training, and tuning of foundation models, leveraging native PyTorch components. For inference optimizations we aim to support PyTorch compile, accelerated transformers, and tensor parallelism. For training we aim to support FSDP, accelerated transformers, and PyTorch compile. To enable these optimizations, we will provide reimplementations of several popular model architectures, starting with LLaMA and GPT-BigCode.
| Model family | Inference | Tuning and Training |
|---|---|---|
| LLaMA | ✔️ | ✔️ |
| GPT-BigCode | ✔️ | ❌ |
| RoBERTa | ✔️ | ❌ |
We recommend running on Python 3.11 with CUDA 12.1 for best performance, as they significantly reduce the CPU overheads of the models.
To install from PyPI:

```bash
pip install ibm-fms
```

Requires PyTorch >= 2.1.

To install from a local clone of the repository:

```bash
pip install -e .
```

or

```bash
python setup.py install
```
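A quick sanity check after installation; the snippet below is illustrative, assuming only that `fms` is the top-level module installed by the `ibm-fms` package:

```python
import torch
import fms  # verifies the package imports cleanly

print(torch.__version__)          # should report >= 2.1
print(torch.cuda.is_available())  # True if a CUDA-enabled PyTorch build is installed
```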
Our approach for inference optimization is to use PyTorch compile, accelerated transformers, and tensor parallelism. PyTorch compile compiles the code into optimized kernels, accelerated transformers leverages `scaled_dot_product_attention` (SDPA) to accelerate attention computation while saving memory, and tensor parallelism is necessary for larger models.
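As a minimal, self-contained sketch of the first two pieces (the shapes and the causal-masking choice below are illustrative, not taken from the fms model code):

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# (batch, heads, seq_len, head_dim) -- arbitrary shapes for illustration
q = torch.randn(1, 8, 1024, 64, device=device)
k = torch.randn_like(q)
v = torch.randn_like(q)

@torch.compile  # PyTorch compile: traces the function and emits optimized kernels
def attend(q, k, v):
    # Accelerated transformers: SDPA dispatches to fused (flash / memory-efficient)
    # attention kernels where available, reducing latency and memory use.
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

out = attend(q, k, v)
```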
To enable the LLaMA models to compile, we had to reimplement RoPE encodings without complex numbers. With this change, LLaMA model inference is able to leverage model compilation for latency reduction.
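The actual implementation lives in the fms model code; the sketch below only illustrates the idea of real-valued RoPE, rotating even/odd feature pairs with precomputed cos/sin tables instead of complex multiplication (function names here are illustrative, not the fms API):

```python
import torch

def rope_tables(seq_len: int, head_dim: int, base: float = 10000.0):
    # Precompute per-position, per-frequency rotation angles.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)  # (seq_len, head_dim // 2)
    return angles.cos(), angles.sin()

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # x: (..., seq_len, head_dim). Rotate each (even, odd) feature pair by the
    # position-dependent angle using only real arithmetic, so torch.compile can
    # trace it without complex-number operations.
    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out

cos, sin = rope_tables(seq_len=16, head_dim=64)
q = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
q_rot = apply_rope(q, cos, sin)
```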
We measured inference latencies with a 1024-token prompt and 256 generated tokens on AWS P4de instances (8× 80GB A100 GPUs) and report the median latency in the table below.
| Model | # GPUs | Median latency (ms) |
|---|---|---|
| 7B | 1 | 14 |
| 13B | 1 | 22 |
| 70B | 8 | 30 |
If you would like to reproduce these latencies, run scripts/benchmark_inference.py; the details are described in the inference documentation. For more information on reproducing the benchmarks and running some examples, see here.
Support for Hugging Face models is provided by our HF model adapter, which can obtain latencies similar to those tabulated above:
```python
import torch
from transformers import AutoTokenizer, pipeline

from fms.models import get_model
from fms.models.hf import to_hf_api

# fms model
llama = get_model("llama", "13b")

# huggingface model backed by fms internals
llama_hf = to_hf_api(llama)

# compile the model -- in HF, the decoder only
llama_hf.decoder = torch.compile(llama_hf.decoder)

# an HF tokenizer is required by pipeline(); the path below is a placeholder
tokenizer = AutoTokenizer.from_pretrained("path/to/llama/tokenizer")

# generate some text -- the first time will be slow since the model needs to be
# compiled, but subsequent generations should be faster.
llama_generator = pipeline(task="text-generation", model=llama_hf, tokenizer=tokenizer)
llama_generator("""q: how are you? a: I am good. How about you? q: What is the weather like today? a:""")
```
A detailed example is provided here.
To fine-tune LLaMA, use the scripts/train_causal.py training script. Here is an example command:
```bash
torchrun --nproc_per_node=2 \
        scripts/train_causal.py \
        --architecture=llama \
        --variant=7b \
        --tokenizer=~/models/tokenizer.model \
        --model_path=~/models/7B/ \
        --report_steps=10 \
        --checkpoint_format=meta \
        --distributed=fsdp
```
See options in the script for other ways to train and tune.
This library is used by three dependent projects at IBM.
- fms-fsdp - This repo shares training code that has been used to pretrain an fms implementation of LLaMA on IBM internal data.
- fms-extras - This repo shares code for additional fms-based models trained by IBM. This repo will also be a home for other extensions, and may also include research or in-development work intended for eventual upstreaming to fms.
- TGIS - This inference server includes support for serving fms models.
- pytorch/pytorch#107824 prevents training/finetuning from working with `torch.compile`.
- In addition, there are several open issues we are tracking to improve the stability and memory footprint of inference.
- Huggingface TGI: https://github.com/huggingface/text-generation-inference
- IBM TGIS: https://github.com/IBM/text-generation-inference