From c539d9afc06e77b7cf87b2f66334991641e8e317 Mon Sep 17 00:00:00 2001
From: alex snow
Date: Mon, 23 Dec 2024 14:04:21 +0100
Subject: [PATCH] updated for more talks

---
 talks/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/talks/index.md b/talks/index.md
index f9e799a..ac0be04 100644
--- a/talks/index.md
+++ b/talks/index.md
@@ -13,7 +13,7 @@ title: Talks
 
 * Slides - Download
 
-* Streamlining AI: Knowledge Distillation for Smaller, Efficient Models
+* Knowledge Distillation: Making LLMs Smaller - Streamlining AI
 
 This talk explores the transformative potential of knowledge distillation in creating smaller, efficient AI models while preserving their performance. Delve into its role in flexible architectures, data augmentation, and resource-constrained applications like TinyML. The discussion covers key concepts, including the teacher-student framework, various distillation schemes, and objective loss functions. It also highlights practical tools like PyTorch's `torchdistill` library and examines real-world applications in NLP and object detection. Join us to uncover how knowledge distillation is shaping the future of efficient deep learning.
 
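For readers skimming the abstract above, the teacher-student objective it mentions typically combines a soft-label term (matching the teacher's temperature-softened outputs) with a hard-label cross-entropy term. Below is a minimal sketch of that standard loss in plain PyTorch; the function name `distillation_loss` and the `temperature`/`alpha` values are illustrative assumptions, not taken from the talk or from `torchdistill`.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Classic teacher-student objective: a weighted sum of
    (a) KL divergence between temperature-softened teacher and student
        distributions, and
    (b) ordinary cross-entropy against the ground-truth labels.
    The temperature and alpha values are illustrative, not prescriptive."""
    # Soften both distributions with the same temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)

    # Scale the KL term by T^2 so gradient magnitudes stay comparable
    # across different temperature settings.
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2

    # Hard-label term: standard cross-entropy on the student's raw logits.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term


# Example usage: 8 samples, 10 classes; teacher logits come from a frozen model.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The distillation schemes the abstract refers to mostly vary what the first term compares (output responses, intermediate features, or relations between samples) while keeping this overall weighted-sum structure.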