Prep for a preview release to pypi (#83)
Co-authored-by: brucekimrokcmu <kwangkyk@alumni.cmu.edu>
stellaraccident and brucekimrokcmu authored Oct 9, 2023
1 parent ef330cb commit bf80c1e
Showing 5 changed files with 83 additions and 36 deletions.
82 changes: 52 additions & 30 deletions README.md
@@ -1,33 +1,55 @@
# SHARK Turbine
![image](https://netl.doe.gov/sites/default/files/2020-11/Turbine-8412270026_83cfc8ee8f_c.jpg)

This project provides a unified build of [IREE](https://github.com/openxla/iree),
[torch-mlir](https://github.com/llvm/torch-mlir), and auxiliary support for
providing a tight integration with PyTorch and other related frameworks. It
presently uses IREE's compiler plugin API to achieve this coupling, allowing
us to build a specialized compiler with tight coupling to PyTorch concepts.

WARNING: This project is still under construction and is at an early phase.

As things progress, we will be building out:

* Native Dynamo support.
* Integration to allow use of the compiler flow as part of the eager flow.
* Compiler support for hallmark PyTorch features such as strided tensors,
in-place semantics, dynamic shapes, etc (IREE mostly supports these
features under the covers but they need adaptation for good interop with
PyTorch).
* Custom op and type support for emerging low-precision numerics.
* Triton code generation and retargeting.
* Cleaned up APIs and options for AOT compiling and standalone deployment.

We would also like to engage with the community to continue to push the bounds
on what Dynamo can do, especially when it comes to tighter integration with
optimizers and collectives -- both of which we are eager to integrate with
PyTorch to a similar level as can be achieved with whole-graph frameworks like
Jax.

## Getting Up and Running
Turbine is the set of development tools that the [SHARK Team](https://github.com/nod-ai/SHARK)
is building for deploying all of our models to the cloud and to devices. We
are building it as we transition from our TorchScript-era one-off export and compilation
to a unified approach based on PyTorch 2 and Dynamo. While we use it heavily ourselves, it
is intended to be a general purpose model compilation and execution tool.

Turbine provides three primary tools:

* *AOT Export*: For compiling one or more `nn.Module`s into deployment-ready
artifacts. This operates via both a [simple one-shot export API](TODO)
for simple models and an underlying [advanced API](TODO) for complicated models
and for accessing the full features of the runtime.
* *Eager Execution*: A `torch.compile` backend is provided and a Turbine Tensor/Device
is available for more native, interactive use within a PyTorch session.
* *Turbine Kernels*: (coming soon) A union of the [Triton](https://github.com/openai/triton) approach and
[Pallas](https://jax.readthedocs.io/en/latest/pallas/index.html), but based on
native PyTorch constructs and tracing. It is intended to complement the above for simple
cases where direct emission to the underlying, cross-platform, vector programming model
is desirable.
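
The one-shot AOT export flow described above can be sketched as follows. This is an illustrative sketch only: the `shark_turbine.aot.export` entry point, its signature, and the `compile(save_to=...)` method are assumptions based on this README, not a verified API, so the Turbine-specific calls are guarded and the model still runs eagerly if shark-turbine is absent.

```python
# Hypothetical sketch of one-shot AOT export; the shark_turbine.aot API
# shape is an assumption, so its use is guarded below.
import torch


class SmallModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = SmallModel()
example_input = torch.randn(1, 8)

try:
    from shark_turbine import aot  # assumed module path
    exported = aot.export(model, example_input)   # hypothetical one-shot API
    exported.compile(save_to="small_model.vmfb")  # hypothetical method name
except Exception:
    pass  # shark-turbine not installed (or API differs); eager still works

out = model(example_input)
```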

Under the covers, Turbine is based heavily on [IREE](https://github.com/openxla/iree) and
[torch-mlir](https://github.com/llvm/torch-mlir), and we use Turbine to drive the evolution
of both, upstreaming infrastructure as it becomes timely to do so.

## Contact Us

Turbine is under active development. If you would like to participate as it comes online,
please reach out to us on the `#turbine` channel of the
[nod-ai Discord server](https://discord.gg/QMmR6f8rGb).

## Quick Start for Users

1. Install from source:

```
pip install .
```

(or follow the "Developers" instructions below for installing from head/nightly)

2. Try one of the samples:

* [AOT MNIST](TODO)
* [Eager with `torch.compile`](TODO)
* [AOT llama2](TODO)
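
The eager path can be sketched as below. The backend name `"turbine_cpu"` is an assumption (the registered name is not stated in this README), so the sketch falls back to plain eager execution if that backend is unavailable.

```python
# Minimal eager sketch via a torch.compile backend; the backend string
# "turbine_cpu" is an assumption and the call is guarded accordingly.
import torch


def toy(x):
    return torch.relu(x) * 2.0


x = torch.arange(-2.0, 2.0)  # tensor([-2., -1., 0., 1.])

try:
    compiled = torch.compile(toy, backend="turbine_cpu")  # assumed name
    y = compiled(x)
except Exception:
    y = toy(x)  # fall back to uncompiled eager execution
```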

## Developers

### Getting Up and Running

If you are only looking to develop against this project, then you need to install Python
deps for the following:
@@ -45,7 +67,7 @@ Installing into a venv is highly recommended.

```
pip install --upgrade -r requirements.txt
pip install --upgrade -e .[torch,testing]
pip install --upgrade -e .[torch-cpu-nightly,testing]
```

Run tests:
@@ -54,7 +76,7 @@ Run tests:
pytest
```

## Using a development compiler
### Using a development compiler

If doing native development of the compiler, it can be useful to switch to
source builds for iree-compiler and iree-runtime.
@@ -67,7 +89,7 @@ sure to specify [additional options](https://openxla.github.io/iree/building-fro
-DIREE_BUILD_PYTHON_BINDINGS=ON -DPython3_EXECUTABLE="$(which python)"
```

### Configuring Python
#### Configuring Python

Uninstall existing packages:

2 changes: 1 addition & 1 deletion pytorch-cpu-requirements.txt
@@ -1,3 +1,3 @@
-f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
--pre
torch==2.1.0.dev20230901
torch==2.1.0
4 changes: 2 additions & 2 deletions requirements.txt
@@ -7,5 +7,5 @@
-r pytorch-cpu-requirements.txt
-r torchvision-requirements.txt

iree-compiler==20230925.656
iree-runtime==20230925.656
iree-compiler==20231004.665
iree-runtime==20231004.665
29 changes: 27 additions & 2 deletions setup.py
@@ -17,6 +17,16 @@
VERSION_INFO_FILE = os.path.join(THIS_DIR, "version_info.json")


with open(
os.path.join(
THIS_DIR,
"README.md",
),
"rt",
) as f:
README = f.read()


def load_version_info():
with open(VERSION_INFO_FILE, "rt") as f:
return json.load(f)
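
The version-selection logic in this setup.py hunk can be sketched standalone: read `version_info.json` if present, else fall back to the default the diff introduces. The existence check is added here for illustration only; the real setup.py may handle a missing file differently.

```python
# Standalone sketch of the package-version fallback shown in this diff.
# The os.path.exists guard is an illustrative addition.
import json
import os


def load_version_info(path="version_info.json"):
    if not os.path.exists(path):
        return {}
    with open(path, "rt") as f:
        return json.load(f)


version_info = load_version_info()
PACKAGE_VERSION = version_info.get("package-version") or "0.9.1dev1"
```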
@@ -30,7 +40,7 @@ def load_version_info():

PACKAGE_VERSION = version_info.get("package-version")
if not PACKAGE_VERSION:
PACKAGE_VERSION = f"0.dev0"
PACKAGE_VERSION = f"0.9.1dev1"


packages = find_namespace_packages(
@@ -78,6 +88,19 @@ def initialize_options(self):
setup(
name=f"shark-turbine",
version=f"{PACKAGE_VERSION}",
author="SHARK Authors",
author_email="stella@nod.ai",
description="SHARK Turbine Machine Learning Deployment Tools",
long_description=README,
long_description_content_type="text/markdown",
url="https://github.com/nod-ai/SHARK-Turbine",
license="Apache-2.0",
classifiers=[
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
],

package_dir={
"": "python",
},
@@ -91,9 +114,11 @@ def initialize_options(self):
"numpy",
f"iree-compiler{get_version_spec('iree-compiler')}",
f"iree-runtime{get_version_spec('iree-runtime')}",
# Use the [torch-cpu-nightly] spec to get a more recent/specific version.
"torch>=2.1.0",
],
extras_require={
"torch": [f"torch{get_version_spec('torch')}"],
"torch-cpu-nightly": [f"torch{get_version_spec('torch')}"],
"testing": [
"pytest",
"pytest-xdist",
2 changes: 1 addition & 1 deletion torchvision-requirements.txt
@@ -1,3 +1,3 @@
-f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
--pre
torchvision==0.16.0.dev20230901
torchvision
