Minor grammatical correction. #94

Merged: 4 commits, Nov 14, 2023
14 changes: 6 additions & 8 deletions README.md
@@ -94,15 +94,15 @@ Note that the `latest` tag always points to the latest code in the `main`
branch. To test a stable version, please replace it with a specific
[tag](https://github.com/01-ai/Yi/tags).

If you prefer to try it out in your local development environment, first create
a virtual environment and clone this repo. Then install the dependencies with
`pip install -r requirements.txt`. For the best performance, we recommend you
also install the latest version (`>=2.3.3`) of
[flash-attention](https://github.com/Dao-AILab/flash-attention#installation-and-features).
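
The `>=2.3.3` requirement can be checked programmatically. A minimal sketch (the distribution name `flash_attn` and the simplified version parser are assumptions for illustration, not part of this repo):

```python
# Sketch only: check whether the installed flash-attention meets the
# recommended minimum (>=2.3.3). The distribution name "flash_attn" and
# the simplified parser below are assumptions, not part of the Yi repo.
from importlib.metadata import PackageNotFoundError, version

def parse(v):
    """'2.3.3' -> (2, 3, 3); keeps leading numeric parts, drops suffixes like 'rc1'."""
    parts = []
    for piece in v.split("."):
        num = ""
        for ch in piece:
            if not ch.isdigit():
                break
            num += ch
        if not num:
            break
        parts.append(int(num))
    return tuple(parts)

def meets_minimum(installed, minimum="2.3.3"):
    return parse(installed) >= parse(minimum)

try:
    ok = meets_minimum(version("flash_attn"))
except PackageNotFoundError:
    ok = False  # not installed; inference still works, just without the speedup
```

If the check fails, inference still runs; flash-attention only affects speed.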

### 2. Download the model (optional)

By default, the model weights and tokenizer will be downloaded from
[HuggingFace](https://huggingface.co/01-ai) automatically in the next step. You
can also download them manually from the following places:

@@ -170,7 +170,7 @@ The Arctic is a place of great beauty. The ice and snow are a

</details>

For more advanced usage, please refer to the
[doc](https://github.com/01-ai/Yi/tree/main/demo).

#### 3.2 Finetuning from the base model:
@@ -179,8 +179,7 @@

```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```

Once finished, you can compare the finetuned model and the base model with the following command:

```bash
bash finetune/scripts/run_eval.sh
```

@@ -199,15 +198,15 @@

```bash
python quantization/gptq/quant_autogptq.py \
--trust_remote_code
```
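
Conceptually, what a GPTQ-style tool stores is low-bit integer weights plus per-group scales. A toy round-trip sketch (illustrative only — real GPTQ additionally minimizes layer-wise reconstruction error on calibration data, and these helper names are hypothetical):

```python
# Toy sketch of the 4-bit storage format quantization tools produce:
# each group of weights becomes integers in 0..15 plus a scale and offset.
# Real GPTQ chooses the integers to minimize layer output error on
# calibration data; this example just rounds, to show the round-trip.
def quantize_4bit(weights, group_size=4):
    groups = []
    for i in range(0, len(weights), group_size):
        g = weights[i:i + group_size]
        lo, hi = min(g), max(g)
        scale = (hi - lo) / 15 or 1.0  # 16 levels for 4 bits; avoid zero scale
        q = [round((w - lo) / scale) for w in g]
        groups.append((q, scale, lo))
    return groups

def dequantize_4bit(groups):
    out = []
    for q, scale, lo in groups:
        out.extend(qi * scale + lo for qi in q)
    return out

w = [0.12, -0.40, 0.33, 0.05, 1.2, -0.9, 0.0, 0.7]
restored = dequantize_4bit(quantize_4bit(w))
max_err = max(abs(a - b) for a, b in zip(w, restored))  # bounded by scale/2 per group
```

Deployed formats pack two such integers per byte, which is where the memory savings come from.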

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```

For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq).

##### AWQ

```bash
python quantization/awq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```

For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq).

## Disclaimer

We use data compliance checking algorithms during the training process to