Docker image for XTuner 0.1.14.
Uses PyTorch 2.0.1, CUDA 11.7.
-
Log into registry using public credentials:
docker login -u public -p public public.aml-repo.cms.waikato.ac.nz:443
-
Create the following directories:
mkdir cache triton
-
Launch docker container
docker run \
  -u $(id -u):$(id -g) -e USER=$USER \
  --gpus=all \
  --shm-size 8G \
  -v `pwd`:/workspace \
  -v `pwd`/cache:/.cache \
  -v `pwd`/triton:/.triton \
  -it public.aml-repo.cms.waikato.ac.nz:443/pytorch/pytorch-xtuner:0.1.14_cuda11.7
-
Create the following directories:
mkdir cache triton
-
Launch docker container (image from Docker Hub)
docker run \
  -u $(id -u):$(id -g) -e USER=$USER \
  --gpus=all \
  --shm-size 8G \
  -v `pwd`:/workspace \
  -v `pwd`/cache:/.cache \
  -v `pwd`/triton:/.triton \
  -it waikatodatamining/pytorch-xtuner:0.1.14_cuda11.7
-
Build the image from the Dockerfile (from within /path_to/pytorch-xtuner/0.1.14_cuda11.7)
docker build -t xtuner .
-
Run the container
docker run --gpus=all --shm-size 8G -v /local/dir:/container/dir -it xtuner
/local/dir:/container/dir maps a local disk directory into a directory inside the container
Build
docker build -t pytorch-xtuner:0.1.14_cuda11.7 .
-
Tag
docker tag \
  pytorch-xtuner:0.1.14_cuda11.7 \
  public-push.aml-repo.cms.waikato.ac.nz:443/pytorch/pytorch-xtuner:0.1.14_cuda11.7
-
Push
docker push public-push.aml-repo.cms.waikato.ac.nz:443/pytorch/pytorch-xtuner:0.1.14_cuda11.7
If error "no basic auth credentials" occurs, then run (enter username/password when prompted):
docker login public-push.aml-repo.cms.waikato.ac.nz:443
-
Tag
docker tag \
  pytorch-xtuner:0.1.14_cuda11.7 \
  waikatodatamining/pytorch-xtuner:0.1.14_cuda11.7
-
Push
docker push waikatodatamining/pytorch-xtuner:0.1.14_cuda11.7
If error "no basic auth credentials" occurs, then run (enter username/password when prompted):
docker login
-
Get the requirements.txt (list of installed packages) via:
docker run --rm --pull=always \
-it public.aml-repo.cms.waikato.ac.nz:443/pytorch/pytorch-xtuner:0.1.14_cuda11.7 \
pip freeze > requirements.txt
-
The image contains the following command-line tools:
xtuner - the command-line tool that comes with XTuner, e.g. for interactive chats
xtuner_redis - for making models available via Redis
-
When running the docker container as a regular user, you will want the files generated by the container to have the correct user and group (i.e., the user:group launching the container):
docker run -u $(id -u):$(id -g) -e USER=$USER ...
-
The following JSON structure is used as the prompt:
{
"text": "the text to use as input.",
"history": "previous prompts concatenated",
"turns": 0
}
The (optional) history text and the number of turns are used as additional inputs to the model. Using RESET as the text in the prompt will reset the history.
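A prompt payload like the one above can be assembled in a few lines of Python. This is only a sketch of what a client might construct; the helper function is hypothetical and not part of xtuner_redis:

```python
import json

def make_prompt(text, history="", turns=0):
    # assemble the JSON prompt described above; sending "RESET" as the
    # text resets the accumulated history
    return json.dumps({"text": text, "history": history, "turns": turns})

first = make_prompt("What does XTuner do?")  # fresh prompt, no prior context
reset = make_prompt("RESET")                 # resets the history
```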
The response has the following structure:
{
"text": "the generated text.",
"history": "previous input texts concatenated",
"turns": 0
}
history and turns can be used for the next prompt. If the --no_history flag is used, then these two fields will be omitted from the response.
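Carrying history and turns from one response into the next prompt can be sketched as follows; the helper is hypothetical and simply shows the field handling, including the case where --no_history omitted the fields:

```python
import json

def next_prompt(response_json, new_text):
    # copy history and turn count from the previous response into the
    # next prompt; fall back to defaults when --no_history omitted them
    response = json.loads(response_json)
    return json.dumps({
        "text": new_text,
        "history": response.get("history", ""),
        "turns": response.get("turns", 0),
    })

# example response shaped like the one documented above
response = json.dumps(
    {"text": "Generated answer.", "history": "previous input texts", "turns": 1})
follow_up = next_prompt(response, "Please elaborate.")
```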