PretrainingLLMs

Dear learner,

We’re happy to announce a new short course: Pretraining LLMs.

In this course, created in collaboration with Upstage and taught by its CEO, Sung Kim, and CSO, Lucy Park, you’ll explore the creation of large language models (LLMs) like Llama, Grok, and Solar using a technique called pretraining, which is the first step of training an LLM.

You’ll learn how to pretrain a model from scratch, and also how to take a model that’s already been pretrained and continue pretraining it on your own data. You’ll cover the essential steps of pretraining an LLM, understand the associated costs, and discover how starting with smaller, existing open source models can be more cost-effective.

[GIF: slides from Lesson 1 of the Pretraining LLMs course]

In detail, here’s what’s in the course:

  • Explore scenarios where pretraining is the optimal choice for model performance.
  • Compare text generation across different versions of the same model to understand the performance differences between base, fine-tuned, and specialized pretrained models.
  • Create a high-quality training dataset using web text and existing datasets, which is crucial for effective model pretraining.
  • Prepare your cleaned dataset for training, and learn how to package your training data for use with the Hugging Face library (a rough code sketch follows this list).
  • Explore ways to configure and initialize a model for training and see how these choices impact the speed of pretraining.
  • Configure and execute a training run, enabling you to train your own model.
  • Learn how to assess your trained model and explore common evaluation strategies for LLMs, including important benchmark tasks used to compare the performance of different models.
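
To give a concrete feel for the dataset-packaging step above, here is a minimal sketch using the Hugging Face `datasets` and `transformers` libraries. The model id, corpus file, and block size are placeholders rather than the course's actual values.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

BLOCK_SIZE = 2048  # assumed sequence length; match your model's context window

# Placeholder model id; use the tokenizer of the model you plan to pretrain.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-base-model")

# Any cleaned text corpus works; a local file stands in for the course data here.
raw = load_dataset("text", data_files={"train": "cleaned_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"])

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

def group_texts(batch):
    # Concatenate all token ids, then cut them into fixed BLOCK_SIZE chunks,
    # dropping the ragged remainder so every training example has equal length.
    ids = sum(batch["input_ids"], [])
    total = (len(ids) // BLOCK_SIZE) * BLOCK_SIZE
    chunks = [ids[i : i + BLOCK_SIZE] for i in range(0, total, BLOCK_SIZE)]
    return {"input_ids": chunks, "labels": [list(c) for c in chunks]}

packed = tokenized.map(group_texts, batched=True, remove_columns=tokenized.column_names)
packed.save_to_disk("packed_pretraining_data")
```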

After taking this course, you’ll be equipped with the skills to pretrain a model—from data preparation and model configuration to training and performance evaluation.
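
As a rough picture of the training and evaluation steps, the sketch below runs a short training job with the Hugging Face Trainer on the packed dataset from the previous sketch and reports held-out loss and perplexity. The hyperparameters and model id are illustrative assumptions, not the course's settings.

```python
import math

import torch
from datasets import load_from_disk
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Placeholder model id; load an existing checkpoint for continued pretraining,
# or a freshly initialized model for pretraining from scratch.
model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-base-model", torch_dtype=torch.bfloat16
)

# Dataset produced by the packing sketch above, with a small held-out split.
dataset = load_from_disk("packed_pretraining_data").train_test_split(test_size=0.01)

args = TrainingArguments(
    output_dir="pretraining-run",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=3e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1_000,      # a short demo run; real pretraining uses far more steps
    logging_steps=10,
    save_steps=500,
    bf16=True,            # assumes a GPU with bfloat16 support
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()

# Held-out loss/perplexity is a quick sanity check; benchmark suites for
# comparing models are a separate evaluation step.
metrics = trainer.evaluate()
print("eval loss:", metrics["eval_loss"], "perplexity:", math.exp(metrics["eval_loss"]))
```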

Details

  • Gain in-depth knowledge of the steps to pretrain an LLM, from data preparation to model configuration and performance assessment.

  • Explore various options for configuring your model’s architecture, including modifying Meta’s Llama models to create larger or smaller versions and initializing weights either randomly or from other models (both options are sketched in code after this list).

  • Learn innovative pretraining techniques like Depth Upscaling, which can reduce training costs by up to 70%.
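
As a rough illustration of the architecture and Depth Upscaling items above, the sketch below builds a smaller Llama-style model with randomly initialized weights, then constructs a deeper model by stacking overlapping copies of a pretrained checkpoint's layers. The sizes, layer counts, and model id are illustrative, and the upscaling recipe shown is an assumption, not Upstage's published configuration.

```python
import copy

import torch
from transformers import AutoModelForCausalLM, LlamaConfig, LlamaForCausalLM

# Option 1: shrink the architecture and initialize weights randomly.
# All sizes are illustrative; Llama-2-7B uses hidden_size=4096 and 32 layers.
small_config = LlamaConfig(
    vocab_size=32000,
    hidden_size=1024,
    intermediate_size=4096,
    num_hidden_layers=12,
    num_attention_heads=8,
    num_key_value_heads=8,
    max_position_embeddings=2048,
)
small_model = LlamaForCausalLM(small_config)  # random weights, ready to pretrain

# Option 2: depth upscaling -- build a deeper model by stacking overlapping
# copies of a pretrained model's layers, then continue pretraining.
# Assumes a Llama-style model whose decoder layers live at model.layers;
# the placeholder id and the number of trimmed layers are assumptions.
base = AutoModelForCausalLM.from_pretrained(
    "your-org/your-base-model", torch_dtype=torch.bfloat16
)
layers = base.model.layers
drop = 4  # how many layers to trim from each copy before stacking
upscaled = torch.nn.ModuleList(
    [copy.deepcopy(l) for l in layers[: len(layers) - drop]]
    + [copy.deepcopy(l) for l in layers[drop:]]
)
base.model.layers = upscaled
base.config.num_hidden_layers = len(upscaled)
for idx, layer in enumerate(base.model.layers):
    if hasattr(layer.self_attn, "layer_idx"):
        layer.self_attn.layer_idx = idx  # keep KV-cache layer indices consistent
```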

Note: Upstage models are hosted on Hugging Face (organization link).

| Lesson | Video | Code |
| --- | --- | --- |
| Introduction | video | |
| Why Pre-training | video | code |
| Data Preparation | video | code |
| Packaging Data for Pretraining | video | code |
| Model Initialization | video | code |
| Training in Action | video | code |
| Evaluation | video | code |
| Conclusion | video | |