Fix broken links in README (nod-ai#527)
Co-authored-by: Daniel Garvey <34486624+dan-garvey@users.noreply.github.com>
penguin-wwy and dan-garvey authored Mar 14, 2024
Commit 6e3adb3 (1 parent: e18728b)
Showing 1 changed file with 5 additions and 5 deletions.
README.md — 10 changes: 5 additions & 5 deletions
@@ -10,8 +10,8 @@ is intended to be a general purpose model compilation and execution tool.
 Turbine provides three primary tools:
 
 * *AOT Export*: For compiling one or more `nn.Module`s to compiled, deployment
-ready artifacts. This operates via both a [simple one-shot export API](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/aot/exporter.py)
-for simple models and an underlying [advanced API](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/aot/compiled_module.py) for complicated models
+ready artifacts. This operates via both a simple one-shot export API (Already upstreamed to [torch-mlir](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py))
+for simple models and an underlying [advanced API](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/shark_turbine/aot/compiled_module.py) for complicated models
 and accessing the full features of the runtime.
 * *Eager Execution*: A `torch.compile` backend is provided and a Turbine Tensor/Device
 is available for more native, interactive use within a PyTorch session.
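For orientation, here is a minimal sketch of the one-shot export path described in the bullet above, assuming the `shark_turbine.aot.export()` entry point and the `compile(save_to=...)` helper on the object it returns behave as in the linked examples; the small MLP module is invented for illustration and is not part of this commit.

```python
# Hedged sketch of the one-shot AOT export flow (not code from this commit).
# Assumes shark_turbine.aot.export() and the compile(save_to=...) helper on
# the object it returns; the model below is made up for illustration.
import torch
import shark_turbine.aot as aot


class SmallMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.layer(x))


# Export the module once with an example input, then compile it to a
# deployable IREE artifact (kept in memory here rather than saved to disk).
exported = aot.export(SmallMLP(), torch.randn(2, 8))
compiled_binary = exported.compile(save_to=None)
```

Models that need more than a single traced entry point would instead go through the advanced compiled-module API linked above.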
@@ -62,11 +62,11 @@ compiler, these should be compilable via IREE with `--iree-input-type=torch` for
 end to end execution. Dynamic shape support in torch-mlir is a work in progress,
 and not everything works at head with release binaries at present.
 
-* [AOT MLP With Static Shapes](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/aot_mlp/mlp_export_simple.py)
-* [AOT MLP with a dynamic batch size](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/aot_mlp/mlp_export_dynamic.py)
+* [AOT MLP With Static Shapes](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/aot_mlp/mlp_export_simple.py)
+* [AOT MLP with a dynamic batch size](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/aot_mlp/mlp_export_dynamic.py)
 * [AOT llama2](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/llama2_inference/llama2.ipynb):
 Dynamic sequence length custom compiled module with state management internal to the model.
-* [Eager MNIST with `torch.compile`](https://github.com/nod-ai/SHARK-Turbine/blob/main/examples/eager_mlp/mlp_eager_simple.py)
+* [Eager MNIST with `torch.compile`](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/eager_mlp/mlp_eager_simple.py)
 
 ## Developers
 
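Similarly, a hedged sketch of the eager path referenced by the MNIST example above, assuming the Turbine CPU backend is exposed to `torch.compile` under the name `turbine_cpu`; the backend name and the toy model are assumptions for illustration, not taken from this commit.

```python
# Minimal sketch of eager execution via the torch.compile backend mentioned
# in the README. The backend name "turbine_cpu" is an assumption based on the
# eager MLP example; the toy model is invented for illustration.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(28 * 28, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# Select the Turbine backend by name; later calls run through its compiled path.
compiled_model = torch.compile(model, backend="turbine_cpu")
logits = compiled_model(torch.randn(4, 28 * 28))
print(logits.shape)  # torch.Size([4, 10])
```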
