In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to Replit models. For illustration purposes, we use replit/replit-code-v1-3b as a reference Replit model.
To run these examples with IPEX-LLM, there are some recommended requirements for your machine; please refer to here for more information.
In the example generate.py, we show a basic use case for a Replit model to predict the next N tokens using the generate() API, with IPEX-LLM INT4 optimizations.
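The core flow of generate.py is roughly as sketched below: the model is loaded with load_in_4bit=True to apply IPEX-LLM INT4 optimizations, and text is then produced through the standard Hugging Face generate() API. This is a simplified sketch, not the full script, and the prompt and token count shown are just the example defaults.

# simplified sketch of the generate.py flow with IPEX-LLM INT4 optimizations
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = 'replit/replit-code-v1-3b'   # huggingface repo id or local checkpoint folder
prompt = 'def print_hello_world():'

# load_in_4bit=True converts the model weights to INT4 while loading
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))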
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to here.
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install "transformers<4.35"
On Windows:
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install "transformers<4.35"
After setting up the Python environment, you can run the example with the following steps.
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
python ./generate.py --prompt 'def print_hello_world():'
More information about arguments can be found in the Arguments Info section. The expected output can be found in the Sample Output section.
For optimal performance on a server, it is recommended to set several environment variables (refer to here for more information) and run the example with all the physical cores of a single socket.
E.g. on Linux:
# set IPEX-LLM env variables
source ipex-llm-init
# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
More information about arguments can be found in the Arguments Info section. The expected output can be found in the Sample Output section.
In the example, several arguments can be passed to satisfy your requirements:
--repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the huggingface repo id for the Replit model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to 'replit/replit-code-v1-3b'.
--prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to 'def print_hello_world():'.
--n-predict N_PREDICT: argument defining the max number of tokens to predict. It defaults to 32.
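For example, the defaults above are equivalent to passing all three arguments explicitly:
python ./generate.py --repo-id-or-model-path 'replit/replit-code-v1-3b' --prompt 'def print_hello_world():' --n-predict 32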
-------------------- Prompt --------------------
def print_hello_world():
-------------------- Output --------------------
def print_hello_world():
print("Hello")
print("World")
print_hello_world()
def print_hello_world():
print