diff --git a/docs/blog/articles/2024-05-15-KServe-0.13-release.md b/docs/blog/articles/2024-05-15-KServe-0.13-release.md
index fdd05e6ff..1730eec83 100644
--- a/docs/blog/articles/2024-05-15-KServe-0.13-release.md
+++ b/docs/blog/articles/2024-05-15-KServe-0.13-release.md
@@ -88,7 +88,7 @@ These endpoints are useful for generative transformer models, which take in mess
 This update fosters a standardized approach to transformer model serving, ensuring compatibility with a broader spectrum of models and tools, and enhances the platform's versatility. The API can be directly used with OpenAI's client libraries or third-party tools, like LangChain or LlamaIndex.
 
 ### Future Plan
-* Support other tasks like text embeddings [#3572](https://github.com/kserve/kserve/issues/3572])
+* Support other tasks like text embeddings [#3572](https://github.com/kserve/kserve/issues/3572).
 * Support more LLM backend options in the future, such as TensorRT-LLM.
 * Enrich text generation metrics for Throughput(tokens/sec), TTFT(Time to first token) [#3461](https://github.com/kserve/kserve/issues/3461).
 * KEDA integration for token based LLM Autoscaling [#3561](https://github.com/kserve/kserve/issues/3561).
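
The context lines above note that the new OpenAI-compatible endpoints can be called directly with OpenAI's client libraries. A minimal sketch of what that could look like, assuming a Hugging Face runtime InferenceService reachable at the hypothetical host `huggingface-llama2.default.example.com` serving a model named `llama2` (both placeholders, not taken from the patch):

```python
# Sketch only: the host, model name, and "/openai/v1" route prefix are
# assumptions for illustration; substitute the values from your own
# InferenceService deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://huggingface-llama2.default.example.com/openai/v1",  # assumed endpoint
    api_key="empty",  # KServe does not check an OpenAI key, but the client requires a value
)

completion = client.chat.completions.create(
    model="llama2",  # assumed model name
    messages=[{"role": "user", "content": "What is KServe?"}],
    max_tokens=64,
)
print(completion.choices[0].message.content)
```

Because the request and response shapes follow the OpenAI schema, the same endpoint can also be pointed at from tools built on that schema, such as LangChain or LlamaIndex, by configuring their OpenAI-compatible base URL.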