From 4778d4249e61004b3f94c31a4b7c8f42f248ca81 Mon Sep 17 00:00:00 2001
From: Amit Raj <168538872+quic-amitraj@users.noreply.github.com>
Date: Thu, 24 Oct 2024 15:11:55 +0530
Subject: [PATCH] Jenkins and Docs minor bug fix (#162)

* Added QEFF_HOME for CI setup

Signed-off-by: amitraj

* fixed broken links and updated

Signed-off-by: amitraj

---------

Signed-off-by: amitraj
---
 README.md                   |  5 ++++-
 docs/source/introduction.md | 12 ++++++++++--
 scripts/Jenkinsfile         |  1 +
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 4f22ce3f..debb4eef 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,8 @@
 *Latest news* :fire:
 - [coming soon] Support for more popular [models](https://quic.github.io/efficient-transformers/source/validate.html#models-coming-soon) and inference optimization technique speculative decoding
+- [09/2024] [AWQ](https://arxiv.org/abs/2306.00978)/[GPTQ](https://arxiv.org/abs/2210.17323) 4-bit quantized models are supported
+- [09/2024] Now we support [PEFT](https://huggingface.co/docs/peft/index) models
 - [09/2024] Added support for [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B)
 - [09/2024] Added support for [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
 - [09/2024] Added support for [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
@@ -27,6 +29,7 @@
 - [05/2024] Added support for [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) & [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
 - [04/2024] Initial release of [efficient transformers](https://github.com/quic/efficient-transformers) for seamless inference on pre-trained LLMs.
+
 # Overview
 
 ## Train anywhere, Infer on Qualcomm Cloud AI with a Developer-centric Toolchain
@@ -77,7 +80,7 @@ For more details about using ``QEfficient`` via Cloud AI 100 Apps SDK, visit [Li
 * [Quick Start Guide](https://quic.github.io/efficient-transformers/source/quick_start.html#)
 * [Python API](https://quic.github.io/efficient-transformers/source/hl_api.html)
 * [Validated Models](https://quic.github.io/efficient-transformers/source/validate.html)
-* [Models coming soon](models-coming-soon)
+* [Models coming soon](https://quic.github.io/efficient-transformers/source/validate.html#models-coming-soon)
 
 > Note: More details are here: https://quic.github.io/cloud-ai-sdk-pages/latest/Getting-Started/Model-Architecture-Support/Large-Language-Models/llm/
 
diff --git a/docs/source/introduction.md b/docs/source/introduction.md
index 2e72b97a..a6f0140a 100644
--- a/docs/source/introduction.md
+++ b/docs/source/introduction.md
@@ -22,8 +22,16 @@ For other models, there is comprehensive documentation to inspire upon the chang
 ***Latest news*** :
-- [coming soon] Support for more popular [models](coming_soon_models) and inference optimization technique speculative decoding
-- [08/2024] Added Support for inference optimization technique ```continuous batching```
+- [coming soon] Support for more popular [models](https://quic.github.io/efficient-transformers/source/validate.html#models-coming-soon) and inference optimization technique speculative decoding
+- [09/2024] [AWQ](https://arxiv.org/abs/2306.00978)/[GPTQ](https://arxiv.org/abs/2210.17323) 4-bit quantized models are supported
+- [09/2024] Now we support [PEFT](https://huggingface.co/docs/peft/index) models
+- [09/2024] Added support for [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B)
+- [09/2024] Added support for [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
+- [09/2024] Added support for [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
+- [09/2024] Added support for [granite-20b-code-base](https://huggingface.co/ibm-granite/granite-20b-code-base-8k)
+- [09/2024] Added support for [granite-20b-code-instruct-8k](https://huggingface.co/ibm-granite/granite-20b-code-instruct-8k)
+- [09/2024] Added support for [Starcoder1-15B](https://huggingface.co/bigcode/starcoder)
+- [08/2024] Added support for inference optimization technique ```continuous batching```
 - [08/2024] Added support for [Jais-adapted-70b](https://huggingface.co/inceptionai/jais-adapted-70b)
 - [08/2024] Added support for [Jais-adapted-13b-chat](https://huggingface.co/inceptionai/jais-adapted-13b-chat)
 - [08/2024] Added support for [Jais-adapted-7b](https://huggingface.co/inceptionai/jais-adapted-7b)
diff --git a/scripts/Jenkinsfile b/scripts/Jenkinsfile
index 3facc515..b6e706fe 100644
--- a/scripts/Jenkinsfile
+++ b/scripts/Jenkinsfile
@@ -36,6 +36,7 @@ pipeline
 sh '''
 . preflight_qeff/bin/activate
 export TOKENIZERS_PARALLELISM=false
+export QEFF_HOME=$PWD
 pytest tests --ignore tests/cloud --junitxml=tests/tests_log1.xml
 pytest tests/cloud --junitxml=tests/tests_log2.xml
 junitparser merge tests/tests_log1.xml tests/tests_log2.xml tests/tests_log.xml
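Pulled out of the Jenkins `sh` step above, the patched test sequence can be sketched as a standalone script. This is a sketch under assumptions, not part of the patch: it presumes a pre-built virtualenv named `preflight_qeff` and that it is run from the repository root, as the CI workspace is.

```shell
#!/bin/sh
# Sketch of the patched CI test step (assumes the preflight_qeff
# virtualenv already exists and we are in the repository root).
set -e
. preflight_qeff/bin/activate

# Avoid tokenizers fork-related warnings/deadlocks under pytest.
export TOKENIZERS_PARALLELISM=false

# The fix this patch adds: anchor QEfficient's working directory to the
# current workspace instead of the default location.
export QEFF_HOME=$PWD

# Run non-cloud and cloud tests separately, then merge the JUnit reports.
pytest tests --ignore tests/cloud --junitxml=tests/tests_log1.xml
pytest tests/cloud --junitxml=tests/tests_log2.xml
junitparser merge tests/tests_log1.xml tests/tests_log2.xml tests/tests_log.xml
```

Exporting `QEFF_HOME=$PWD` keeps any model artifacts and caches inside the Jenkins workspace, so parallel or repeated CI runs do not collide in a shared home directory.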