From c76c5fb2b25877144ac36e99134a0e28b9634552 Mon Sep 17 00:00:00 2001
From: penguin-wwy <940375606@qq.com>
Date: Tue, 12 Mar 2024 19:39:31 +0800
Subject: [PATCH] Fix broken links in README

---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 3de3e5103..30c0bb3a1 100644
--- a/README.md
+++ b/README.md
@@ -10,8 +10,8 @@ is intended to be a general purpose model compilation and execution tool.
 Turbine provides three primary tools:
 
 * *AOT Export*: For compiling one or more `nn.Module`s to compiled, deployment
-  ready artifacts. This operates via both a [simple one-shot export API](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/aot/exporter.py)
-  for simple models and an underlying [advanced API](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/aot/compiled_module.py) for complicated models
+  ready artifacts. This operates via both a simple one-shot export API (already upstreamed to [torch-mlir](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py))
+  for simple models and an underlying [advanced API](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/shark_turbine/aot/compiled_module.py) for complicated models
   and accessing the full features of the runtime.
 * *Eager Execution*: A `torch.compile` backend is provided and a Turbine Tensor/Device
   is available for more native, interactive use within a PyTorch session.
@@ -62,11 +62,11 @@ compiler, these should be compilable via IREE with `--iree-input-type=torch` for
 end to end execution. Dynamic shape support in torch-mlir is a work in progress,
 and not everything works at head with release binaries at present.
 
- * [AOT MLP With Static Shapes](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/aot_mlp/mlp_export_simple.py)
- * [AOT MLP with a dynamic batch size](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/aot_mlp/mlp_export_dynamic.py)
- * [AOT llama2](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/llama2_inference/llama2.ipynb):
+ * [AOT MLP With Static Shapes](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/aot_mlp/mlp_export_simple.py)
+ * [AOT MLP with a dynamic batch size](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/aot_mlp/mlp_export_dynamic.py)
+ * [AOT llama2](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/llama2_inference/llama2.ipynb):
   Dynamic sequence length custom compiled module with state
   management internal to the model.
- * [Eager MNIST with `torch.compile`](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/eager_mlp/mlp_eager_simple.py)
+ * [Eager MNIST with `torch.compile`](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/eager_mlp/mlp_eager_simple.py)
 
 ## Developers