As part of our team's journey into Large Language Models (LLMs) and building cool applications with them, here's our very first base model. Wondering what we did and how? Let's walk you through it! 🚶‍♂️🚶‍♀️
This chatbot answers the age-old question: “What to cook?” 🍳
Examples:
- “Hey, I have some leftover rice. What can I do with it?”
- "I don't have eggs. What can I use as a substitute in baking?"
- "What's the best way to grill vegetables?"
The chatbot suggests recipes from its knowledge base, complete with detailed cooking instructions. 📚👨‍🍳
We dove into the buzzword that's everywhere these days: LLMs! 🌍 To build a domain-specific knowledge application, we used Retrieval-Augmented Generation (RAG), a popular technique that businesses use to create specialized applications grounded in their own data. Learn more about RAG here: RAG Article 📖
Key Concepts:
- Embeddings: Vector representations of text that place related pieces of information close together in a shared vector space. Learn more about embeddings and vector spaces here: Medium Article on Embeddings.
- ChromaDB: We chose ChromaDB as our vector database. Here's a step-by-step guide to setting it up: ChromaDB Tutorial.
- Dataset: Sourced from Hugging Face, The Food Processor dataset is a compilation of recipes that includes allergy information, dietary preferences, and alternative ingredients. Check it out here: The Food Processor Dataset.
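The embeddings idea above can be sketched with a toy example. Our app uses a pretrained embedding model, but the core intuition — turn text into vectors, then rank documents by similarity to a query — works even with a hypothetical bag-of-words "embedding" and cosine similarity (all names here are illustrative, not our actual code):

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words count vector.
    (Real embedding models map text to dense float vectors.)"""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = [
    "fried rice with leftover rice and vegetables",
    "chocolate cake without eggs",
    "grilled vegetables with olive oil",
]
query = "what can I cook with leftover rice"

# Rank documents by similarity to the query -- the retrieval step of RAG.
ranked = sorted(docs, key=lambda d: cosine_similarity(embed(query), embed(d)), reverse=True)
print(ranked[0])  # the rice recipe scores highest
```

With a real embedding model, "leftover rice" would also match documents that never use those exact words — that semantic matching is what the vector space buys you.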
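For ChromaDB, the basic pattern looks roughly like the sketch below, assuming the `chromadb` package. The collection name and the tiny 3-dimensional vectors are made up for illustration; we pass precomputed embeddings to keep the example self-contained, though Chroma can also compute embeddings for you:

```python
import chromadb

# In-memory client; use chromadb.PersistentClient(path=...) to persist to disk.
client = chromadb.Client()
collection = client.create_collection(name="recipes")

# Toy 3-dimensional embeddings; in the real app these come from an embedding model.
collection.add(
    ids=["fried-rice", "eggless-cake"],
    embeddings=[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    documents=[
        "Fried rice: reheat leftover rice, stir-fry with vegetables and soy sauce.",
        "Eggless cake: replace each egg with a quarter cup of applesauce.",
    ],
)

# Query with a vector close to 'fried-rice'; Chroma returns the nearest neighbours.
results = collection.query(query_embeddings=[[0.9, 0.1, 0.0]], n_results=1)
print(results["documents"][0][0])  # the fried rice document
```

At query time, the user's question is embedded the same way and the nearest stored recipes are handed to the LLM as context.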
We experimented with powerful instruction-tuned models like ChatGPT and Mixtral, ultimately selecting Mixtral for generation. 💪
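How the model actually gets called depends on your Mixtral deployment, but the "augmented" part of RAG is model-agnostic: stitch the retrieved recipes into the prompt the LLM receives. A minimal sketch (the function name and prompt wording are our illustration, not the exact production prompt):

```python
def build_rag_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Combine retrieved recipe snippets with the user's question so the
    LLM (Mixtral, in our case) answers from the provided context."""
    context = "\n\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "You are a helpful cooking assistant. Answer using only the recipes below.\n\n"
        f"Recipes:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

docs = ["Fried rice: stir-fry leftover rice with vegetables and soy sauce."]
prompt = build_rag_prompt("What can I do with leftover rice?", docs)

# This prompt string is what gets sent to the model, e.g. via the
# Hugging Face Inference API or a locally hosted Mixtral.
print(prompt)
```

Grounding the answer in retrieved context is what keeps the chatbot on-topic and reduces hallucinated recipes.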
Implementing and integrating all these resources was a fun and educational experience, but the real goal was to build a shareable, interactive chatbot accessible to anyone with a link. 🌐
How did we achieve that? Thanks to Hugging Face Spaces, which provides an easy way to host and share ML model demos. Learn more about hosting model demos with Hugging Face Spaces: Hosting Model Demos.
Try the chatbot here: What's Cooking Chatbot 🍲🤩
Enjoy exploring the chatbot!