In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations on Phoenix models. For illustration purposes, we use FreedomIntelligence/phoenix-inst-chat-7b as a reference Phoenix model.
To run these examples with IPEX-LLM, your machine should meet some recommended requirements; please refer to here for more information.
In the example generate.py, we show a basic use case in which a Phoenix model predicts the next N tokens using the generate() API, with IPEX-LLM INT4 optimizations.
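As a reference, here is a minimal sketch of what the core of generate.py does. It assumes IPEX-LLM's transformers-style AutoModelForCausalLM loader with load_in_4bit=True and the Phoenix chat prompt format shown in the sample output below; see the actual script for the full argument handling.

# minimal sketch: load Phoenix with INT4 optimizations and predict the next tokens
import time
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement for the transformers class

model_path = "FreedomIntelligence/phoenix-inst-chat-7b"
PHOENIX_PROMPT_FORMAT = "<human>{prompt}<bot>"  # integrated prompt format for chat

# load_in_4bit=True converts the model's linear layers into INT4 format at load time
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode(PHOENIX_PROMPT_FORMAT.format(prompt="What is AI?"), return_tensors="pt")
    st = time.time()
    output = model.generate(input_ids, max_new_tokens=32)  # predict the next 32 tokens
    print(f"Inference time: {time.time() - st:.2f} s")
    print(tokenizer.decode(output[0], skip_special_tokens=True))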
We suggest using conda to manage the environment:
On Linux:
conda create -n llm python=3.11 # Python 3.11 is recommended
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
On Windows:
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
Arguments info:
--repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id for the Phoenix model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to "FreedomIntelligence/phoenix-inst-chat-7b".
--prompt PROMPT: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to 'What is AI?'.
--n-predict N_PREDICT: argument defining the max number of tokens to predict. It defaults to 32.
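For example, to run generate.py with every argument given explicitly (here, set to the defaults listed above):

python ./generate.py --repo-id-or-model-path FreedomIntelligence/phoenix-inst-chat-7b --prompt 'What is AI?' --n-predict 32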
Note: When loading the model in 4-bit, IPEX-LLM converts the linear layers in the model into INT4 format. In theory, a model with X billion parameters saved in 16-bit requires approximately 2X GB of memory for loading, and ~0.5X GB of memory for further inference. For example, the 7B Phoenix model used here needs roughly 14 GB to load its 16-bit checkpoint and ~3.5 GB for INT4 inference.
Please select an appropriately sized Phoenix model based on the capabilities of your machine.
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
python ./generate.py
For optimal performance on a server, it is recommended to set several environment variables (refer to here for more information) and run the example with all the physical cores of a single socket. For example, on Linux:
# set IPEX-LLM env variables
source ipex-llm-init
# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
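To check how many physical cores each socket on your server has (and hence what values to use above), you can run the standard lscpu utility:

# shows the physical core count per socket, e.g. "Core(s) per socket: 48"
lscpu | grep "Core(s) per socket"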
Sample output (with the default FreedomIntelligence/phoenix-inst-chat-7b model):
Inference time: xxxx s
-------------------- Prompt --------------------
<human>What is AI?<bot>
-------------------- Output --------------------
<human>What is AI?<bot> AI stands for Artificial Intelligence. It is a branch of computer science that focuses on creating intelligent machines that can perform tasks that typically require human intelligence, such as visual
Inference time: xxxx s
-------------------- Prompt --------------------
<human>AI是什么?<bot>
-------------------- Output --------------------
<human>AI是什么?<bot>AI(Artificial Intelligence)是指用计算机程序模拟人类智能的一种技术。它通过学习、推理和自我修正等手段,使计算机能够执行类似于