Welcome to the repository where I keep the content and slides of the workshops I have conducted so far. Explore the material and dive deeper into each topic presented. Please contact me about any errors you spot, improvements you would like to see, or topics you want to discuss.
Held at ENSAA (Ecole Nationale Algérienne des Affaires) as part of the Techup Event organized by VIC ENP Club. This workshop covered:
- Core concepts of Machine Learning
- Types of Machine Learning
- Principles of Linear Regression and KMeans (see the sketch after this list)
- Key concepts: Bias, Variance, Overfitting, and Underfitting
- Challenges of unclean data and how to address them
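To make the two algorithms concrete, here is a minimal scikit-learn sketch on synthetic data; the seeds, sample sizes, and cluster centers are illustrative choices, not material from the workshop:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Supervised learning: recover y ≈ 3x + 1 from noisy observations.
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + 1 + rng.normal(scale=0.5, size=100)
reg = LinearRegression().fit(X, y)
print(f"learned slope={reg.coef_[0]:.2f}, intercept={reg.intercept_:.2f}")

# Unsupervised learning: group unlabeled 2-D points into 3 clusters.
blobs = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                   for c in [(0, 0), (5, 5), (0, 5)]])
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(blobs)
print("cluster centers:\n", kmeans.cluster_centers_.round(2))
```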
Go to Hands-On Machine Learning Folder
Presented at the AI Fest by the GDSC ENSIA Club. This workshop emphasized:
- In-depth exploration of Unsupervised Learning techniques (a small example follows this list)
- Real-world applications and case studies
- Engaging Q&A sessions with participants
Go to Beyond Labels: Unsupervised Learning Folder
Organized by the IT Community Club, this workshop delved into:
- The three levels of MLOps
- Model deployment techniques
- Containerization practices
- Monitoring using Prometheus and Grafana (see the sketch after this list)
- CI/CD workflows with GitHub Actions
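To give a flavor of the monitoring part, here is a minimal sketch that exposes model metrics with the official `prometheus_client` library; the metric names and the fake `predict` function are illustrative, and Grafana would sit on top to chart what Prometheus scrapes:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metrics a model server might expose.
PREDICTIONS = Counter("model_predictions_total", "Total predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")

@LATENCY.time()  # records how long each call takes
def predict(x: float) -> float:
    PREDICTIONS.inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real inference
    return 2 * x

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for Prometheus to scrape
    while True:
        predict(random.random())
```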
Go to The Next Step after ML: MLOps
Organized by my club, School of AI Algiers, in collaboration with Mr. Abdelghani Kabot, this workshop delved into:
- What are LLMs? How are they used today? How will companies embrace them? What are their current problems?
- The "not yet" perspective on LLMs and the need for scale to make new capabilities emerge in neural networks.
- What exactly is GPT, and why is it so powerful?
- A small dive into the Transformer architecture and its components.
- Building a small GPT that generates text character by character, inspired by this video (see the sketch below).
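The usual starting point for such a build is a character-level bigram model, later extended with attention. Below is a minimal PyTorch sketch of that baseline; the toy corpus, hyperparameters, and training loop are illustrative, not the workshop's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy corpus and character-level vocabulary.
text = "hello world, hello gpt"
chars = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(chars)}
itos = {i: ch for ch, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class BigramLM(nn.Module):
    """Predicts the next character from the current one via a lookup table."""
    def __init__(self, vocab_size: int):
        super().__init__()
        self.logits_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx):
        return self.logits_table(idx)  # (T, vocab_size) logits

    @torch.no_grad()
    def generate(self, idx, max_new_tokens: int):
        for _ in range(max_new_tokens):
            logits = self(idx[-1:])           # condition on the last char only
            probs = F.softmax(logits, dim=-1)
            nxt = torch.multinomial(probs, num_samples=1).view(1)
            idx = torch.cat([idx, nxt])
        return idx

model = BigramLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
for _ in range(300):  # fit the bigram statistics of the toy corpus
    loss = F.cross_entropy(model(data[:-1]), data[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

sample = model.generate(data[:1], max_new_tokens=30)
print("".join(itos[i.item()] for i in sample))
```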
Go to Building GPT from scratch
In this online session organized by CSE Algiers (Club Scientifique de l'ESI), I delved into:
- Fundamentals of large language models
- Three LLM use cases built by major companies (Google, NVIDIA, and Amazon)
- Chains, agents, and tools (see the sketch below)
- Design patterns for LLM-based apps
- LLMOps

Go to LLMs from an engineering POV
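To show the chains/agents/tools idea without tying it to any particular framework, here is a hypothetical, library-free agent loop; `llm` is a scripted stand-in for a real model API call and `calculator` is an invented example tool, both names my own:

```python
import json

def calculator(expression: str) -> str:
    """An example 'tool' the model can request; eval is restricted here."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def llm(prompt: str) -> str:
    # Placeholder for a real chat-completion API call, scripted so the
    # control flow below is visible: ask for a tool, then answer.
    if "Observation:" in prompt:
        return json.dumps({"answer": prompt.split("Observation: ")[-1]})
    return json.dumps({"tool": "calculator", "input": "6 * 7"})

def run_agent(question: str, max_steps: int = 3) -> str:
    context = question
    for _ in range(max_steps):
        decision = json.loads(llm(context))
        if "tool" in decision:  # the model asked to use a tool
            observation = TOOLS[decision["tool"]](decision["input"])
            context += f"\nObservation: {observation}"
        else:
            return decision["answer"]
    return context  # step budget exhausted

print(run_agent("What is 6 * 7?"))
```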
Organized by my club, School of AI, in this video I explored:
- The limitations of LSTMs in capturing long sequences and the need for attention mechanisms.
- An introduction to word embeddings and their role in representing word meanings as vectors.
- How the attention mechanism works and why it provides better context management (see the sketch below).
- The step-by-step solution involving embeddings, the encoder-decoder architecture, and transformers.
- A dive into the Transformer architecture, highlighting the benefits of parallel processing for sequence tasks.

Go to Attention mechanism and Transformers architecture (in darija)
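For reference, here is a minimal NumPy implementation of scaled dot-product attention, the core operation discussed in the video; the random token embeddings and their dimensions are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Self-attention: 4 token embeddings of dimension 8 serve as
# queries, keys, and values at once.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print("attention weights (each row sums to 1):\n", weights.round(2))
```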