In this directory, you will find examples of how to apply BigDL-LLM INT4 optimizations on DeciLM-7B models. For illustration purposes, we utilize Deci/DeciLM-7B-instruct as a reference DeciLM-7B model.
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to here for more information.
In the example generate.py, we show a basic use case for a DeciLM-7B model to predict the next N tokens using the generate()
API, with BigDL-LLM INT4 optimizations.
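The flow of generate.py is roughly as follows. This is a minimal sketch, not the exact script contents: it assumes BigDL-LLM's drop-in AutoModelForCausalLM with load_in_4bit=True, and omits argument parsing and the chat prompt format.

import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "Deci/DeciLM-7B-instruct"

# load_in_4bit=True makes BigDL-LLM convert the model's linear layers to INT4
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
with torch.inference_mode():
    # predict the next 32 tokens
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))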
We suggest using conda to manage the environment:
conda create -n llm python=3.9
conda activate llm
pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
pip install transformers==4.35.2 # required by DeciLM-7B
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
Arguments info:
- --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id for the DeciLM-7B model to be downloaded, or the path to the Hugging Face checkpoint folder. The default is 'Deci/DeciLM-7B-instruct'.
- --prompt PROMPT: argument defining the prompt to be inferred (with the integrated chat prompt format; see the sketch after this list). The default is 'What is AI?'.
- --n-predict N_PREDICT: argument defining the max number of tokens to predict. The default is 32.
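For reference, the integrated chat prompt format wraps the raw prompt in DeciLM-7B-instruct's system/user/assistant template, as visible in the sample output at the end of this page. A minimal sketch follows; the name DECILM_PROMPT_FORMAT is illustrative, not necessarily what generate.py uses:

# Template copied from the sample output below; only {prompt} is substituted.
DECILM_PROMPT_FORMAT = """### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
{prompt}
### Assistant:
"""

prompt = DECILM_PROMPT_FORMAT.format(prompt="What is AI?")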
Note: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, an X-billion-parameter model saved in 16-bit requires approximately 2X GB of memory for loading, and ~0.5X GB of memory for further INT4 inference.
Please select the appropriate size of the DeciLM-7B model based on the capabilities of your machine.
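For instance, DeciLM-7B has roughly 7 billion parameters, so loading it in 16-bit takes roughly 2 × 7 = 14 GB of memory, while inference after INT4 conversion needs only about 0.5 × 7 = 3.5 GB.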
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
python ./generate.py
For optimal performance on a server, it is recommended to set several environment variables (refer to here for more information) and run the example with all the physical cores of a single socket.
For example, on Linux:
# set BigDL-LLM env variables
source bigdl-llm-init
# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
Sample output:
Inference time: XXXX s
-------------------- Prompt --------------------
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
What is AI?
### Assistant:
-------------------- Output --------------------
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
What is AI?
### Assistant:
AI stands for Artificial Intelligence, which refers to the development of computer systems and software that can perform tasks that typically require human intelligence, such as recognizing patterns