This library enables pre-training and fine-tuning of large language models (LLMs) at scale. Our repository is a modified version of the original Megatron-LM codebase by NVIDIA.
Key added features include:
- Supported architectures: Llama, Llama 2, Code Llama, Falcon, and Mistral
- Training of large models (Llama 2 70B, Llama 65B, Code Llama 34B, Falcon 40B, and Mistral) on commodity hardware across multiple nodes
- 3-way parallelism: tensor parallel, pipeline parallel, and data parallel training (inherited from Megatron-LM)
- Full pretraining, finetuning, and instruction tuning support
- Support for special tokens and custom tokenizers
- Grouped-query attention (GQA) and multi-query attention (MQA)
- Rotary position embeddings (RoPE), RMS layer norm, LIMA dropout
- RoPE scaling for longer context support
- FlashAttention 2
- BF16 / FP16 training
- WandB integration
- Metrics support: easily add custom metrics to evaluate on the validation set during training
- Conversion of checkpoints to and from the Hugging Face Hub (see the sketch after this list)
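As a rough illustration of the Hugging Face conversion workflow, the sketch below shows how a Llama 2 checkpoint might be converted into Megatron format before training. The script name, flags, and paths shown here are illustrative assumptions rather than the exact interface; consult the online documentation for the precise conversion commands.

```sh
# Rough sketch only: the script path, flags, and paths below are assumptions.
# Check the online documentation for the exact weight-conversion interface.
python weights_conversion/hf_to_megatron.py llama2 \
    --size 7 \
    --out /path/to/megatron/weights/ \
    --cache-dir /path/to/hf/llama-2-7b/
```

The reverse direction, converting a trained Megatron checkpoint back into a Hugging Face model for inference or upload to the Hub, is supported in the same spirit (see the documentation).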
Take a look at the online documentation.
Alternatively, build the docs from source:
```sh
cd docs/
pip install -r requirements.txt
make html
```
Models trained with this codebase include:
- 70B Llama 2: meditron-70b, llama2-70b-oasst-sft-v10
- 40B Falcon: falcon-40b-megacode2-oasst
- 13B Code Llama: codellama-13b-oasst-sft-v10
- 7B Llama 2: meditron-7b
- ... (Let us know about yours!)
If you use this software, please cite it:
```bibtex
@software{epfmgtrn,
  author = {Alejandro Hernández Cano and Matteo Pagliardini and Andreas Köpf and Kyle Matoba and Amirkeivan Mohtashami and Xingyao Wang and Olivia Simin Fan and Axel Marmet and Deniz Bayazit and Igor Krawczuk and Zeming Chen and Francesco Salvi and Antoine Bosselut and Martin Jaggi},
  title  = {epfLLM Megatron-LLM},
  year   = 2023,
  url    = {https://github.com/epfLLM/Megatron-LLM}
}
```