diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm-4v/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm-4v/README.md
index 0bf8584a5c9..a9ff8006780 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm-4v/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm-4v/README.md
@@ -19,7 +19,7 @@ conda activate llm
 
 # install ipex-llm with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
-pip install torchvision tiktoken transformers==4.42.4 trl
+pip install torchvision tiktoken transformers==4.42.4 "trl<0.12.0"
 ```
 
 On Windows:
@@ -30,7 +30,7 @@ conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
 
-pip install torchvision tiktoken transformers==4.42.4 trl
+pip install torchvision tiktoken transformers==4.42.4 "trl<0.12.0"
 ```
 
 ### 2. Run
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm4/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm4/README.md
index dea525d94f1..cb0b20d7317 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm4/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/glm4/README.md
@@ -18,7 +18,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 
 # install packages required for GLM-4
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 On Windows:
@@ -29,7 +29,7 @@ conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
 
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 ## 2. Run
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1/README.md
index efdf5dc1dbf..eb4294b54d0 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3.1/README.md
@@ -20,7 +20,7 @@ pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pyt
 
 # transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
 pip install transformers==4.43.1
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 On Windows:
@@ -31,7 +31,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[all]
 
 pip install transformers==4.43.1
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 ### 2. Run
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm-v-2_6/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm-v-2_6/README.md
index 6e733f8b0f0..4de59d5484a 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm-v-2_6/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/minicpm-v-2_6/README.md
@@ -18,7 +18,7 @@ conda activate llm
 # install ipex-llm with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cpu
-pip install transformers==4.40.0 trl
+pip install transformers==4.40.0 "trl<0.12.0"
 ```
 
 On Windows:
@@ -28,7 +28,7 @@ conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
 pip install torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cpu
-pip install transformers==4.41.0 trl
+pip install transformers==4.41.0 "trl<0.12.0"
 ```
 
 ### 2. Run
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/glm4/README.md b/python/llm/example/CPU/PyTorch-Models/Model/glm4/README.md
index 6c05d588e38..a359745d477 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/glm4/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/glm4/README.md
@@ -21,7 +21,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 
 # install packages required for GLM-4
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 On Windows:
@@ -32,7 +32,7 @@ conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
 
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 ### 2. Run
diff --git a/python/llm/example/GPU/HuggingFace/LLM/gemma2/README.md b/python/llm/example/GPU/HuggingFace/LLM/gemma2/README.md
index c935235705f..b3167e8c997 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/gemma2/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/gemma2/README.md
@@ -4,7 +4,7 @@ In this directory, you will find examples on how you could apply IPEX-LLM INT4 o
 
 ## Requirements
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
-**Important: According to Gemma2's requirement, please make sure you have installed `transformers==4.43.1` and `trl` to run the example.**
+**Important: According to Gemma2's requirement, please make sure you have installed `transformers==4.43.1` and `trl<0.12.0` to run the example.**
 
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Gemma2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
@@ -19,7 +19,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 
 # According to Gemma2's requirement, please make sure you are using a stable version of Transformers, 4.43.1 or newer.
 pip install "transformers>=4.43.1"
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -33,7 +33,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 
 # According to Gemma2's requirement, please make sure you are using a stable version of Transformers, 4.43.1 or newer.
 pip install "transformers>=4.43.1"
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md b/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
index 2f6757b9c96..541ae806639 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
@@ -14,7 +14,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
 # install packages required for GLM-4
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 ### 1.2 Installation on Windows
@@ -27,7 +27,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
 # install packages required for GLM-4
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 ## 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/HuggingFace/LLM/llama3.1/README.md b/python/llm/example/GPU/HuggingFace/LLM/llama3.1/README.md
index bbbfcdbe6b7..1e006c0826b 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/llama3.1/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/llama3.1/README.md
@@ -17,7 +17,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 
 # transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
 pip install transformers==4.43.1
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -31,7 +31,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 
 # transformers>=4.43.1 is required for Llama3.1 with IPEX-LLM optimizations
 pip install transformers==4.43.1
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/HuggingFace/LLM/llama3.2/README.md b/python/llm/example/GPU/HuggingFace/LLM/llama3.2/README.md
index 261a4626512..156c662287f 100644
--- a/python/llm/example/GPU/HuggingFace/LLM/llama3.2/README.md
+++ b/python/llm/example/GPU/HuggingFace/LLM/llama3.2/README.md
@@ -17,7 +17,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 
 pip install transformers==4.45.0
 pip install accelerate==0.33.0
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -31,7 +31,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 
 pip install transformers==4.45.0
 pip install accelerate==0.33.0
-pip install trl
+pip install "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md
index ee653b58136..a11e1061d21 100644
--- a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md
@@ -15,7 +15,7 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install transformers==4.41.0 trl
+pip install transformers==4.41.0 "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -27,7 +27,7 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install transformers==4.41.0 trl
+pip install transformers==4.41.0 "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md
index 569225f6503..7e0ea2eafcc 100644
--- a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md
@@ -15,7 +15,7 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install transformers==4.40.0 trl
+pip install transformers==4.40.0 "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -27,7 +27,7 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install transformers==4.40.0 trl
+pip install transformers==4.40.0 "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md b/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md
index b4fc7341a27..c37a99f8183 100644
--- a/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v/README.md
@@ -15,7 +15,7 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install tiktoken transformers==4.42.4 trl
+pip install tiktoken transformers==4.42.4 "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -27,7 +27,7 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install tiktoken transformers==4.42.4 trl
+pip install tiktoken transformers==4.42.4 "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
diff --git a/python/llm/example/GPU/LLM-Finetuning/QLoRA/trl-example/README.md b/python/llm/example/GPU/LLM-Finetuning/QLoRA/trl-example/README.md
index 99488aceeed..498eb8a9828 100644
--- a/python/llm/example/GPU/LLM-Finetuning/QLoRA/trl-example/README.md
+++ b/python/llm/example/GPU/LLM-Finetuning/QLoRA/trl-example/README.md
@@ -19,7 +19,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 pip install transformers==4.36.0 datasets
 pip install peft==0.10.0
-pip install bitsandbytes scipy trl
+pip install bitsandbytes scipy "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables
diff --git a/python/llm/example/GPU/Lightweight-Serving/README.md b/python/llm/example/GPU/Lightweight-Serving/README.md
index 1a1f7f5ce24..3e67b1e579c 100644
--- a/python/llm/example/GPU/Lightweight-Serving/README.md
+++ b/python/llm/example/GPU/Lightweight-Serving/README.md
@@ -41,7 +41,7 @@ pip install gradio # for gradio web UI
 conda install -c conda-forge -y gperftools=2.10 # to enable tcmalloc
 
 # for glm-4v-9b
-pip install transformers==4.42.4 trl
+pip install transformers==4.42.4 "trl<0.12.0"
 
 # for internlm-xcomposer2-vl-7b
 pip install transformers==4.31.0
diff --git a/python/llm/example/GPU/PyTorch-Models/Model/glm4/README.md b/python/llm/example/GPU/PyTorch-Models/Model/glm4/README.md
index b9082c45a4c..961b71c004a 100644
--- a/python/llm/example/GPU/PyTorch-Models/Model/glm4/README.md
+++ b/python/llm/example/GPU/PyTorch-Models/Model/glm4/README.md
@@ -16,7 +16,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
 # install packages required for GLM-4
-pip install "tiktoken>=0.7.0" transformers==4.42.4 trl
+pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 #### 1.2 Installation on Windows
@@ -29,7 +29,7 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
 # install packages required for GLM-4
-pip install "tiktoken>=0.7.0" transformers==4.42.4 "trl<0.12.0"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
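
The change repeated throughout this patch replaces a bare `trl` requirement with the bounded specifier `"trl<0.12.0"`. As a minimal sketch of what that exclusive upper bound accepts and rejects (the helper name and the plain tuple comparison are illustrative assumptions, not a full PEP 440 version parser):

```python
# Sketch: check whether an installed version string satisfies the
# "trl<0.12.0" pin used throughout this patch. Comparing tuples of the
# numeric components is a simplification that ignores pre-release and
# local version segments.
def satisfies_trl_pin(installed: str, upper_bound: str = "0.12.0") -> bool:
    """Return True if `installed` is strictly below `upper_bound`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(upper_bound)

print(satisfies_trl_pin("0.11.4"))  # True: still allowed by the pin
print(satisfies_trl_pin("0.12.0"))  # False: excluded, bound is exclusive
```

The quotes around `"trl<0.12.0"` in the pip commands matter in most shells, since an unquoted `<` would be interpreted as input redirection.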