diff --git a/README.md b/README.md
index e4d441b..a110457 100644
--- a/README.md
+++ b/README.md
@@ -69,7 +69,7 @@ See [Docker image](docs/docker-image.md) for details.
 - [Model construction](docs/model-construction.md)
 - [Pretrained models](docs/pretrained-models.md)
 - [Training examples](docs/training-examples.md)
-- [Inference examples](examples/inference.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/krasserm/perceiver-io/blob/main/examples/inference.ipynb)
+- [Inference examples](examples/inference.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/krasserm/perceiver-io/blob/0.7.0/examples/inference.ipynb)
 - [Building blocks](docs/building-blocks.md)
 
 ## Getting started
@@ -227,7 +227,7 @@ torch.save(model.state_dict(), "/path/to/model.pt")
 
 For generating text from a prompt via top-k sampling, `CausalLanguageModel` provides a `generate()` method. The
 following example first loads a trained model from a checkpoint and then generates text from a short sample prompt. An interactive
-demo is also available in the [Colab notebook](https://colab.research.google.com/github/krasserm/perceiver-io/blob/main/examples/inference.ipynb).
+demo is also available in the [Colab notebook](https://colab.research.google.com/github/krasserm/perceiver-io/blob/0.7.0/examples/inference.ipynb).
 
 ```python
 from perceiver.data.text import TextPreprocessor