
Commit: update README

zhtmike committed Sep 26, 2024
1 parent aea0839 commit 1675636
Showing 1 changed file with 2 additions and 2 deletions.
examples/opensora_hpcai/tools/caption/llava_next/README.md: 4 changes (2 additions, 2 deletions)
@@ -12,7 +12,7 @@ This repo contains Mindspore model definitions, pre-trained weights and inference

### Downloading Pretrained Checkpoints

- Please download the model (llava-v1.6-mistral-7b-hf)[https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf] to the `./models` directory. And run
+ Please download the model [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) to the `./models` directory, and run

```bash
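# Assumption, not part of the original README: one way to fetch the checkpoint
# is the Hugging Face CLI (pip install -U "huggingface_hub[cli]"); any download
# method that places the files under ./models works just as well.
huggingface-cli download llava-hf/llava-v1.6-mistral-7b-hf --local-dir models/llava-v1.6-mistral-7b-hf
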
python tools/convert_llava.py models/llava-v1.6-mistral-7b-hf -o models/llava-v1.6-mistral-7b-hf/model.ckpt
```

@@ -28,7 +28,7 @@ To run the inference, you may use `predict.py` with the following command
python predict.py --input_image path_to_your_input_image --prompt input_prompt
```

- For example, running `python predict.py` with the default image (`assets/llava_v1_5_radar.jpg`) and default prompt (`What is shown in this image?`) will give the following result:
+ For example, running `python predict.py` with the default image `assets/llava_v1_5_radar.jpg` and default prompt `What is shown in this image?` will give the following result:

```text
[INST]
```
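
As a concrete illustration, the two defaults quoted in the changed line above can also be passed explicitly. The invocation below is a sketch assembled from those defaults, not a command taken from the README itself:

```bash
python predict.py --input_image assets/llava_v1_5_radar.jpg --prompt "What is shown in this image?"
```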
