From 1675636a3c33efb81fa84940aadd9d60f39cde4e Mon Sep 17 00:00:00 2001
From: Mike Cheung
Date: Thu, 26 Sep 2024 15:27:48 +0800
Subject: [PATCH] update README

---
 examples/opensora_hpcai/tools/caption/llava_next/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/opensora_hpcai/tools/caption/llava_next/README.md b/examples/opensora_hpcai/tools/caption/llava_next/README.md
index 76f8189117..b66e2b4191 100644
--- a/examples/opensora_hpcai/tools/caption/llava_next/README.md
+++ b/examples/opensora_hpcai/tools/caption/llava_next/README.md
@@ -12,7 +12,7 @@ This repo contains Mindspore model definitions, pre-trained weights and inferenc
 
 ### Downloading Pretrained Checkpoints
 
-Please download the model (llava-v1.6-mistral-7b-hf)[https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf] to the `./models` directory. And run
+Please download the model [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) to the `./models` directory. And run
 
 ```bash
 python tools/convert_llava.py models/llava-v1.6-mistral-7b-hf -o models/llava-v1.6-mistral-7b-hf/model.ckpt
@@ -28,7 +28,7 @@ To run the inference, you may use `predict.py` with the following command
 python predict.py --input_image path_to_your_input_image --prompt input_prompt
 ```
 
-For example, running `python predict.py` with the default image (`assets/llava_v1_5_radar.jpg`) and default prompt (`What is shown in this image?`) will give the following result:
+For example, running `python predict.py` with the default image `assets/llava_v1_5_radar.jpg` and default prompt `What is shown in this image?` will give the following result:
 
 ```text
 [INST]