Readme improvements #83

Merged 6 commits · Mar 26, 2024
______________________________________________________________________

<p align="center">
<a href="#get-started">Get started</a> •
<a href="#install-thunder">Install</a> •
<a href="#hello-world">Examples</a> •
<a href="#features">Features</a> •
<a href="#documentation">Documentation</a> •
<a href="#inside-thunder-a-brief-look-at-the-core-features">Inside Thunder</a> •
<a href="#get-involved">Get involved!</a> •
<a href="#documentation">Documentation</a>
</p>

[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/lightning-thunder/blob/main/LICENSE)
______________________________________________________________________

**Thunder makes PyTorch models Lightning fast.**

Thunder is a source-to-source compiler for PyTorch. It makes PyTorch programs faster by combining and using different hardware executors at once (for instance, [nvFuser](https://github.com/NVIDIA/Fuser), [torch.compile](https://pytorch.org/docs/stable/torch.compiler.html), [cuDNN](https://developer.nvidia.com/cudnn), and [TransformerEngine FP8](https://github.com/NVIDIA/TransformerEngine)).

It supports single accelerators (such as GPUs and TPUs) and also works in multi-GPU settings.

Thunder aims to be usable, understandable, and extensible.

&#160;

> [!NOTE]
> Lightning Thunder is in alpha. Feel free to get involved, but expect a few bumps along the way.

&#160;

## Single-accelerator performance

Thunder can achieve significant speedups over standard non-compiled PyTorch code ("PyTorch eager"), through the compounding effects of optimizations and the use of best-in-class executors. The figure below shows the pretraining throughput for Llama 2 7B as implemented in [LitGPT](https://github.com/Lightning-AI/litgpt).

<div align="center">
<img alt="Thunder" src="docs/source/_static/images/training_throughput_single.png" width="800px" style="max-width: 100%;">
</div>

As shown in the plot above, Thunder achieves a 40% speedup in training throughput compared to eager code on H100 using a combination of executors including nvFuser, torch.compile, cuDNN, and TransformerEngine FP8.
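For illustration, here is a rough sketch of how one might ask Thunder to prioritize particular executors when compiling a function. The `executors` argument and the executor names below are assumptions for illustration; consult the docs for the exact API:

```python
import torch
import thunder


def foo(a, b):
    return a + b


# Hypothetical executor priority list; the names are assumptions for illustration.
jfoo = thunder.jit(foo, executors=("nvfuser", "torch"))

a = torch.randn(2, 2, device="cuda")
b = torch.randn(2, 2, device="cuda")
print(jfoo(a, b))
```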

&#160;

## Multi-GPU performance

Thunder also supports distributed strategies such as DDP and FSDP for training models on multiple GPUs. The following plot displays the normalized throughput measured for Llama 2 7B (without FP8 mixed precision; FP8 support for FSDP is in progress).

<div align="center">
<img alt="Thunder" src="docs/source/_static/images/normalized_training_throughput_zero2.png" width="800px" style="max-width: 100%;">
</div>
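As a rough sketch of what the multi-GPU workflow can look like, assuming the `thunder.distributed.ddp` helper and a standard `torchrun` launch (the launch command, model, and shapes here are illustrative, not prescriptive):

```python
# Sketch: launch with `torchrun --nproc-per-node=2 train_ddp.py` (assumed setup).
import os

import torch
import thunder
import thunder.distributed

torch.distributed.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
device = torch.device("cuda", local_rank)

model = torch.nn.Linear(512, 512).to(device)
model = thunder.distributed.ddp(model)  # replicate the module across ranks (assumed helper)
jmodel = thunder.jit(model)

out = jmodel(torch.randn(8, 512, device=device))
print(out.shape)
```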

&#160;

## Get started

The easiest way to get started with Thunder, requiring no extra installation or setup, is by using our [Zero to Thunder Tutorial Studio](https://lightning.ai/lightning-ai/studios/zero-to-thunder-tutorial).

&#160;

## Install Thunder

To use Thunder on your local machine, install [nvFuser](https://github.com/NVIDIA/Fuser) nightly and Thunder together as follows:

```bash
# install nvFuser which installs the matching nightly PyTorch
pip install --pre 'nvfuser-cu121[torch]' --extra-index-url https://pypi.nvidia.com

pip install lightning-thunder
```
<details>
<summary>Advanced install options</summary>
<!-- following section will be skipped from PyPI description -->

&#160;

### Install from main

Alternatively, you can install the latest version of Thunder directly from this GitHub repository as follows:

```bash
pip install git+https://github.com/Lightning-AI/lightning-thunder.git
```

&#160;

### Install to tinker and contribute

If you are interested in tinkering with and contributing to Thunder, we recommend cloning the Thunder repository and installing it in pip's editable mode:

```bash
git clone https://github.com/Lightning-AI/lightning-thunder.git
cd lightning-thunder
pip install -e .
```

&#160;

### Develop and run tests

After cloning the lightning-thunder repository and installing it as an editable package as explained above, you can set up your environment for developing Thunder by installing the development requirements:

```bash
pip install -r requirements/devel.txt
```

Now you can run the tests:

```bash
pytest thunder/tests
```

Thunder is very thoroughly tested, so expect this to take a while.

</details>
<!-- end skipping PyPI description -->

&#160;

## Hello World

Below is a simple example of how Thunder allows you to compile and run PyTorch code:

```python
import torch
import thunder


def foo(a, b):
    return a + b


jfoo = thunder.jit(foo)

a = torch.full((2, 2), 1)
b = torch.full((2, 2), 3)

result = jfoo(a, b)

print(result)
```

The compiled function `jfoo` takes and returns PyTorch tensors, just like the original function, so modules and functions compiled by Thunder can be used as part of larger PyTorch programs.
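For instance, here is a minimal sketch of a Thunder-compiled module participating in ordinary PyTorch training code (the toy model and shapes are made up for illustration):

```python
import torch
import torch.nn as nn
import thunder

# A toy model; any PyTorch module can be compiled the same way.
model = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
jmodel = thunder.jit(model)

x = torch.randn(8, 64)
out = jmodel(x)        # the compiled forward returns regular PyTorch tensors
out.sum().backward()   # PyTorch autograd works through the compiled module
print(out.shape)
```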

&#160;

## Train models

Thunder is in its early stages and should not be used for production runs yet.

However, it can already deliver outstanding performance for pretraining and finetuning LLMs supported by [LitGPT](https://github.com/Lightning-AI/lit-gpt), such as Mistral, Llama 2, Gemma, Falcon, and others.

Check out [the LitGPT integration](https://github.com/Lightning-AI/litgpt/tree/main/extensions/thunder) to learn about running LitGPT and Thunder together.

&#160;

## Inside Thunder: A brief look at the core features

Given a Python callable or PyTorch module, Thunder can generate an optimized program that:

- Computes its forward and backward passes
- Coalesces operations into efficient fusion regions
- Dispatches computations to optimized kernels
- Distributes computations optimally across machines

To do so, Thunder ships with:

- A JIT for acquiring Python programs targeting PyTorch and custom operations
- A multi-level intermediate representation (IR) to represent operations as a trace of a reduced operation set
- An extensible set of transformations on the trace of a computational graph, such as `grad`, fusions, distributed (like `ddp`, `fsdp`), functional (like `vmap`, `vjp`, `jvp`)
- A way to dispatch operations to an extensible collection of executors

Thunder is written entirely in Python. Even its trace is represented as valid Python at all stages of transformation. This allows unprecedented levels of introspection and extensibility.
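As a small illustration, assuming the `thunder.last_traces` helper (which returns the sequence of transformed traces from the most recent call), you can print the final trace as ordinary Python:

```python
import torch
import thunder


def foo(a, b):
    return a + b


jfoo = thunder.jit(foo)
jfoo(torch.randn(2, 2), torch.randn(2, 2))

# Each stage of transformation is itself a valid Python program;
# the last trace is what was actually executed.
traces = thunder.last_traces(jfoo)
print(traces[-1])
```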
Thunder doesn't generate code for accelerators directly. It acquires and transforms user programs so that it's possible to optimally select or generate device code using the fastest executors available.

Modules and functions compiled with Thunder fully interoperate with vanilla PyTorch and support PyTorch's autograd. Also, Thunder works alongside torch.compile to leverage its state-of-the-art optimizations.

&#160;

## Documentation

Docs are currently not hosted publicly. However, you can build them locally quickly:

```bash
make docs
```

and point your browser to the generated docs at `docs/build/index.html`.

&#160;

## Get involved!

We appreciate your feedback and contributions. If you have feature requests or questions, or want to contribute code or config files, please don't hesitate to use the [GitHub Issue](https://github.com/Lightning-AI/lightning-thunder/issues) tracker.

We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

&#160;

## License
