Commit f28ec5f

fix(docs): Fix various broken links in docs (#5047)
* Fix link for Rclone to k8s storage secrets

* Change link to use .md instead of .html for in-page link reference

* Allow linking to deeply nested sub-headings (H6 level)

* Fix broken links in inference artifacts docs

* Fix broken links in pipeline examples docs

* Update link to changed section name

* Fix more broken links due to typos, moved docs, and filename suffix

* Remove trailing whitespace in Getting Started page
agrski authored Jul 21, 2023
1 parent ea5f5d4 commit f28ec5f
Showing 8 changed files with 19 additions and 17 deletions.
2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -52,6 +52,8 @@
"tasklist",
]

+myst_heading_anchors = 6

source_suffix = ['.rst', '.md']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
2 changes: 1 addition & 1 deletion docs/source/contents/apis/inference/v2.md
@@ -869,7 +869,7 @@ matches the tensor's data type.

A platform is a string indicating a DL/ML framework or
backend. Platform is returned as part of the response to a
-[Model Metadata](#model_metadata) request but is information only. The
+[Model Metadata](#model-metadata) request but is information only. The
proposed inference APIs are generic relative to the DL/ML framework
used by a model and so a client does not need to know the platform of
a given model to use the API. Platform names use the format
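
This anchor fix, like the `#setting-options` to `#options` change further down, follows from how MyST generates heading anchors: with `myst_heading_anchors` set (the `conf.py` change above), each heading gets a slug derived from its text, so the heading "Model Metadata" yields `#model-metadata`, and an underscore variant never exists. Below is a minimal sketch of that slug rule, assuming GitHub-style slugification; the real myst-parser implementation may differ in edge cases:

```python
import re

def heading_slug(heading: str) -> str:
    """Approximate the anchor slug MyST derives from a heading's text.

    Assumes GitHub-style rules (lowercase, punctuation dropped, spaces
    become hyphens); myst-parser may handle edge cases differently.
    """
    slug = re.sub(r"[^\w\- ]", "", heading.strip().lower())  # drop punctuation
    return re.sub(r"\s+", "-", slug)                         # spaces -> hyphens

print(heading_slug("Model Metadata"))  # model-metadata, not model_metadata
```

Because the slug tracks the heading text, renaming a section (as in the `usage.md` change below) silently breaks any link that still targets the old anchor.
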
4 changes: 2 additions & 2 deletions docs/source/contents/getting-started/index.md
@@ -1,7 +1,7 @@
# Getting Started

```{note}
-Some dependencies may require that the (virtual) machines on which you deploy, support the SSE4.2 instruction set or x86-64-v2 microarchitecture. If `lscpu | grep sse4_2` does not return anything on your machine, your CPU is not compatible, and you may need to update the (virtual) host's CPU. 
+Some dependencies may require that the (virtual) machines on which you deploy, support the SSE4.2 instruction set or x86-64-v2 microarchitecture. If `lscpu | grep sse4_2` does not return anything on your machine, your CPU is not compatible, and you may need to update the (virtual) host's CPU.
```

Seldon Core can be installed either with Docker Compose or with Kubernetes:
@@ -12,7 +12,7 @@ Seldon Core can be installed either with Docker Compose or with Kubernetes:
Once installed:

* Try the existing [examples](../examples/index.md).
-* Train and deploy your own [model artifact](../models/inference-artifacts/index.html#saving-model-artifacts).
+* Train and deploy your own [model artifact](../models/inference-artifacts/index.md#saving-model-artifacts).


## Core Concepts
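
The `.html` to `.md` rewrites in this file (and in the inference-artifacts and pipelines docs below) are all the same class of bug: MyST resolves cross-document links against the markdown sources, not the built HTML output. A rough, hypothetical helper for catching this class of link before it lands; the `docs/source` root matches this repo's layout, and the script is not part of the commit:

```python
import pathlib
import re

# Relative markdown links pointing at built .html pages (optionally with a
# #fragment); external http(s) URLs are deliberately excluded.
BAD_LINK = re.compile(r"\]\((?!https?://)([^)\s]+\.html(?:#[^)\s]*)?)\)")

for md_file in pathlib.Path("docs/source").rglob("*.md"):
    for lineno, line in enumerate(md_file.read_text().splitlines(), start=1):
        for match in BAD_LINK.finditer(line):
            print(f"{md_file}:{lineno}: {match.group(1)}")
```
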
2 changes: 1 addition & 1 deletion docs/source/contents/getting-started/kubernetes-installation/helm.md
@@ -13,7 +13,7 @@ The Helm charts can be found within the `k8s/helm-charts` folder and they are pu
Assuming you have installed any ecosystem components: Jaeger, Prometheus, Kafka as discussed [here](./index.md) you can follow the
following steps.

-Note that for Kafka follow the steps discussed [here](kafka.md)
+Note that for Kafka follow the steps discussed [here](../../kubernetes/kafka/index)

## Add Seldon Core v2 Charts

2 changes: 1 addition & 1 deletion docs/source/contents/metrics/usage.md
@@ -75,7 +75,7 @@ Hodometer is installed as a separate deployment, by default in the same namespac
````{group-tab} Helm
-If you install Seldon Core v2 by [Helm chart](../getting-started/kubernetes-installation/helm.md), there are values corresponding to the key environment variables discussed [above](#setting-options).
+If you install Seldon Core v2 by [Helm chart](../getting-started/kubernetes-installation/helm.md), there are values corresponding to the key environment variables discussed [above](#options).
These Helm values and their equivalents are provided below:
| Helm value | Environment variable |
10 changes: 5 additions & 5 deletions docs/source/contents/models/inference-artifacts/index.md
@@ -29,15 +29,15 @@ To run your model inside Seldon you must supply an inference artifact that can b
* - LightGBM
- MLServer
- `lightgbm`
-- [example](../../examples/model-zoo.html#lightgbm-model)
+- [example](../../examples/model-zoo.md#lightgbm-model)
* - MLFlow
- MLServer
- `mlflow`
-- [example](../../examples/model-zoo.html#mlflow-wine-model)
+- [example](../../examples/model-zoo.md#mlflow-wine-model)
* - ONNX
- Triton
- `onnx`
-- [example](../../examples/model-zoo.html#onnx-mnist-model)
+- [example](../../examples/model-zoo.md#onnx-mnist-model)
* - OpenVino
- Triton
- `openvino`
@@ -53,7 +53,7 @@ To run your model inside Seldon you must supply an inference artifact that can b
* - PyTorch
- Triton
- `pytorch`
-- [example](../../examples/model-zoo.html#pytorch-mnist-model)
+- [example](../../examples/model-zoo.md#pytorch-mnist-model)
* - SKLearn
- MLServer
- `sklearn`
@@ -77,7 +77,7 @@ To run your model inside Seldon you must supply an inference artifact that can b
* - XGBoost
- MLServer
- `xgboost`
-- [example](../../examples/model-zoo.html#xgboost-model)
+- [example](../../examples/model-zoo.md#xgboost-model)
```

## Saving Model artifacts
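
Even with the suffix fixed, the `#fragment` can still dangle, as the typo fix in `v2.md` above shows. Combining the slug rule sketched earlier with a scan of the target file's headings gives a small, hypothetical check that a link such as `model-zoo.md#lightgbm-model` actually resolves; the target path below is the resolution of the relative links in this file, and the helper is not part of the commit:

```python
import pathlib
import re

def heading_slugs(md_path: pathlib.Path) -> set[str]:
    """Collect approximate MyST anchor slugs for every ATX heading in a file."""
    slugs = set()
    for heading in re.findall(r"^#{1,6}\s+(.+)$", md_path.read_text(), re.MULTILINE):
        slug = re.sub(r"[^\w\- ]", "", heading.strip().lower())
        slugs.add(re.sub(r"\s+", "-", slug))
    return slugs

# Does examples/model-zoo.md really define an anchor for "LightGBM Model"?
target = pathlib.Path("docs/source/contents/examples/model-zoo.md")
print("lightgbm-model" in heading_slugs(target))
```
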
4 changes: 2 additions & 2 deletions docs/source/contents/models/rclone/index.md
@@ -2,6 +2,6 @@

We utilize [Rclone](https://rclone.org/) to copy model artifacts from a storage location to the model servers. This allows users to take advantage of Rclone's support for over 40 cloud storage backends including Amazon S3, Google Storage and many others.

-For local storage while developing see [here](../../getting-started/docker-installation/index.html#local-models).
+For local storage while developing see [here](../../getting-started/docker-installation/index.md#local-models).

-For authorization needed for cloud storage when running on Kubernetes see [here](../../kubernetes/cloud-storage/index.html#kubernetes-secret).
+For authorization needed for cloud storage when running on Kubernetes see [here](../../kubernetes/storage-secrets/index).
10 changes: 5 additions & 5 deletions docs/source/contents/pipelines/index.md
@@ -82,7 +82,7 @@ The simplest Pipeline chains models together: the output of one model goes into

In the above we rename tensor `OUTPUT0` to `INPUT0` and `OUTPUT1` to `INPUT1`. This allows these models to be chained together. The shape and data-type of the tensors need to match as well.

-This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.html#model-chaining).
+This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.md#model-chaining).

## Join

@@ -127,7 +127,7 @@ Joins can have a join type which can be specified with `inputsJoinType` and can
* `outer` : wait for `joinWindowMs` to join any inputs. Ignoring any inputs that have not sent any data at that point. This will mean this step of the pipeline is guaranteed to have a latency of at least `joinWindowMs`.
* `any` : Wait for any of the specified data sources.

-This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.html#model-join).
+This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.md#model-join).

## Conditional Logic

@@ -181,7 +181,7 @@ In the above we have a step `conditional` that either outputs a tensor named `OU

Note that we also have a final Pipeline output step that does an `any` join on these two models, essentially outputting from the pipeline whichever data arrives from either model. This type of Pipeline can be used for multi-armed bandit solutions where you want to route traffic dynamically.

-This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.html#conditional).
+This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.md#conditional).

### Errors

@@ -193,7 +193,7 @@ Its also possible to abort pipelines when an error is produced to in effect crea

This Pipeline runs normally or throws an error based on whether the input tensors have certain values.

-This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.html#error).
+This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.md#error).

### Triggers

@@ -241,7 +241,7 @@ Sometimes you want to run a step if an output is received from a previous step b

In this example the last step `tfsimple3` runs only if there are outputs from `tfsimple1` and `tfsimple2` but also data from the `check` step. However, if the step `tfsimple3` is run it only receives the join of data from `tfsimple1` and `tfsimple2`.

-This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.html#model-join-with-trigger).
+This example can be found in the [pipeline-examples examples](../examples/pipeline-examples.md#model-join-with-trigger).

### Trigger Joins

