Added MiniCPM support to the README (#869)
Co-authored-by: liudan <liudan@MacBook-Pro.local>
LDLINGLINGLING and liudan authored Jul 29, 2024
1 parent 94a4fcb commit d2a173a
Showing 2 changed files with 4 additions and 2 deletions.
README.md (3 changes: 2 additions & 1 deletion)
@@ -38,7 +38,7 @@ English | [简体中文](README_zh-CN.md)
 </div>
 
 ## 🎉 News
-
+- **\[2024/07\]** Support [MiniCPM](xtuner/configs/minicpm/) models!
 - **\[2024/07\]** Support [DPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/dpo), [ORPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/orpo) and [Reward Model](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/reward_model) training with packed data and sequence parallel! See [documents](https://xtuner.readthedocs.io/en/latest/dpo/overview.html) for more details.
 - **\[2024/07\]** Support [InternLM 2.5](xtuner/configs/internlm/internlm2_5_chat_7b/) models!
 - **\[2024/06\]** Support [DeepSeek V2](xtuner/configs/deepseek/deepseek_v2_chat/) models! **2x faster!**
@@ -113,6 +113,7 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large
<li><a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral</a></li>
<li><a href="https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat">DeepSeek V2</a></li>
<li><a href="https://huggingface.co/google">Gemma</a></li>
<li><a href="https://huggingface.co/openbmb">MiniCPM</a></li>
<li>...</li>
</ul>
</td>
Expand Down
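For readers who want to try the newly added support, a minimal usage sketch follows the quickstart pattern from XTuner's own README. `xtuner list-cfg` and `xtuner train` are XTuner's standard CLI entry points; the exact MiniCPM config name below is a placeholder, so substitute a real one from the `list-cfg` output.

```shell
# List the MiniCPM configs shipped under xtuner/configs/minicpm/
xtuner list-cfg -p minicpm

# Fine-tune with a chosen config (the name below is hypothetical;
# use one printed by the command above). DeepSpeed ZeRO-2 is optional.
xtuner train minicpm_2b_full_alpaca_e3 --deepspeed deepspeed_zero2
```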
README_zh-CN.md (3 changes: 2 additions & 1 deletion)
@@ -38,7 +38,7 @@
 </div>
 
 ## 🎉 News
-
+- **\[2024/07\]** Support [MiniCPM](xtuner/configs/minicpm/) models!
 - **\[2024/07\]** Support [DPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/dpo), [ORPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/orpo) and [Reward Model](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/reward_model) training, with packed-data and sequence-parallel support! See the [documentation](https://xtuner.readthedocs.io/zh-cn/latest/dpo/overview.html) for more details.
 - **\[2024/07\]** Support [InternLM 2.5](xtuner/configs/internlm/internlm2_5_chat_7b/) models!
 - **\[2024/06\]** Support [DeepSeek V2](xtuner/configs/deepseek/deepseek_v2_chat/) models! **2x faster training!**
@@ -113,6 +113,7 @@ XTuner is an efficient, flexible and full-featured lightweight toolkit for fine-tuning large models.
<li><a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral</a></li>
<li><a href="https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat">DeepSeek V2</a></li>
<li><a href="https://huggingface.co/google">Gemma</a></li>
<li><a href="https://huggingface.co/openbmb">MiniCPM</a></li>
<li>...</li>
</ul>
</td>
Expand Down
