InternVL2

In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations on InternVL2 models on Intel GPUs. For illustration purposes, we use OpenGVLab/InternVL2-4B as a reference InternVL2 model.

0. Requirements

To run these examples with IPEX-LLM on Intel GPUs, there are some recommended requirements for your machine; please refer to here for more information.

Example: Predict Tokens using chat() API

In the example chat.py, we show a basic use case of an InternVL2-4B model predicting the next N tokens using the chat() API, with IPEX-LLM INT4 optimizations on Intel GPUs.
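
For reference, the core pattern that chat.py follows looks roughly like the sketch below. This is a simplified illustration, not the example's actual code: the ipex_llm.transformers AutoModel class is IPEX-LLM's drop-in replacement for the Hugging Face one and applies the INT4 optimization via load_in_4bit=True, while chat() and the exact image preprocessing come from InternVL2's remote code. The single-tile transform and the local tiger.jpeg path below are simplifying assumptions.

# Minimal sketch of the chat.py pattern; simplified, not the actual example code.
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModel  # IPEX-LLM drop-in replacement

model_path = "OpenGVLab/InternVL2-4B"

# load_in_4bit=True applies the IPEX-LLM INT4 optimization at load time
model = AutoModel.from_pretrained(model_path,
                                  load_in_4bit=True,
                                  trust_remote_code=True)
model = model.half().to("xpu")  # run on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Single 448x448 tile with ImageNet normalization (InternVL2's values);
# the real example additionally performs InternVL2's dynamic tiling.
transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
pixel_values = transform(Image.open("tiger.jpeg").convert("RGB"))
pixel_values = pixel_values.unsqueeze(0).half().to("xpu")

# chat() is provided by InternVL2's remote code; signature per its model card
response = model.chat(tokenizer, pixel_values, "<image>\nWhat is in the image?",
                      generation_config=dict(max_new_tokens=64))
print(response)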

1. Install

1.1 Installation on Linux

We suggest using conda to manage the environment:

conda create -n llm python=3.11
conda activate llm
# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install einops timm
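
To verify the installation, one quick sanity check is to confirm that PyTorch can see the XPU device (the xpu backend is registered when intel_extension_for_pytorch is imported):

# Optional sanity check; run inside the activated conda environment
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 -- registers 'xpu'
print(torch.xpu.is_available())        # expect: True
print(torch.xpu.get_device_name(0))    # e.g. an Intel Arc / Flex / Max device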

1.2 Installation on Windows

We suggest using conda to manage the environment:

conda create -n llm python=3.11 libuv
conda activate llm

# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install einops timm
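
After installation, the same quick XPU availability check shown in section 1.1 can be used to verify the environment.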

2. Configure oneAPI Environment Variables for Linux

Note

Skip this step if you are running on Windows.

This is a required step on Linux for APT- or offline-installed oneAPI. Skip this step for pip-installed oneAPI.

source /opt/intel/oneapi/setvars.sh
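
To confirm the script took effect, one simple check is to look for a oneAPI variable such as ONEAPI_ROOT in the environment (which setvars.sh is expected to export):

# ONEAPI_ROOT should be set after sourcing setvars.sh
import os
print(os.environ.get("ONEAPI_ROOT"))  # e.g. /opt/intel/oneapi; None means setvars.sh was not sourced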

3. Runtime Configurations

For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.

3.1 Configurations for Linux

For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1

For Intel Data Center GPU Max Series

export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1

Note: libtcmalloc.so can be installed with conda install -c conda-forge -y gperftools=2.10.

For Intel iGPU

export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
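
These variables must be visible before the GPU runtime initializes. If you prefer setting them per script rather than per shell, a minimal sketch (using the Arc/Flex values above) is shown below; note that LD_PRELOAD is the exception and must still be set in the shell before Python starts:

# Mirror the Arc / Flex recommendations above; set before importing torch,
# since the runtime reads these variables during initialization.
import os
os.environ["USE_XETLA"] = "OFF"
os.environ["SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS"] = "1"
os.environ["SYCL_CACHE_PERSISTENT"] = "1"

import torch  # noqa: E402 -- imported after the environment is configured
import intel_extension_for_pytorch as ipex  # noqa: E402, F401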

3.2 Configurations for Windows

For Intel iGPU
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
For Intel Arc™ A-Series Graphics
set SYCL_CACHE_PERSISTENT=1

Note

The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60 GPU, it may take several minutes to compile.
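
This compilation happens only on the first run; with SYCL_CACHE_PERSISTENT=1 set as suggested above, the compiled kernels are cached so subsequent runs can reuse them and start faster.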

4. Running examples

  • Chat with a specified prompt:
    python ./chat.py --prompt 'What is in the image?'

Arguments info (a full example invocation follows this list):

  • --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id of the InternVL2 model (e.g. OpenGVLab/InternVL2-4B) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to 'OpenGVLab/InternVL2-4B'.
  • --image-url-or-path IMAGE_URL_OR_PATH: argument defining the image to be inferred. It defaults to 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'.
  • --prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to 'What is in the image?'.
  • --n-predict N_PREDICT: argument defining the max number of tokens to predict. It defaults to 64.
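
For reference, an invocation that spells out all four arguments with their default values:

python ./chat.py --repo-id-or-model-path 'OpenGVLab/InternVL2-4B' \
                 --image-url-or-path 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg' \
                 --prompt 'What is in the image?' \
                 --n-predict 64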

Sample Output

-------------------- Input Image --------------------
https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg
-------------------- Input Prompt --------------------
What is in the image?
-------------------- Chat Output --------------------
The image shows a tiger lying on the grass.

The sample input image is the tiger image referenced above (https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg).