🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
This repository features an example of how to use the xllm library. It includes a solution to a common type of assessment given to LLM engineers, who typically earn between $120,000 and $140,000 annually.
A solution that prioritizes patients based on urgency, reducing wait times and ensuring that those who need immediate care receive it promptly.
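Below is a minimal finetuning sketch in the spirit of the xllm quickstart. The class names, arguments, and toy dataset are assumptions based on the library's documented usage and may differ between versions; treat it as an outline, not the repository's actual solution.

```python
# Minimal xllm finetuning sketch (assumed quickstart-style API; verify against
# the installed xllm version before use).
from xllm import Config
from xllm.datasets import GeneralDataset
from xllm.experiments import Experiment

# Config drives model loading, LoRA/quantization options, and training settings.
config = Config(model_name_or_path="facebook/opt-350m")

# Toy training data; replace with real instruction or dialog samples.
train_data = ["Hello! How can I help you today?"] * 100
train_dataset = GeneralDataset.from_list(data=train_data)

# Build the experiment (tokenizer, model, collator, trainer) and run training.
experiment = Experiment(config=config, train_dataset=train_dataset)
experiment.build()
experiment.run()
```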
Conversational AI model for open-domain dialogs
Jarvis-Chat is a Flask-based web application that uses various AI models to provide a conversational interface for answering questions and completing tasks. It features basic memory for follow-up questions, customizable system prompts, and a choice of AI models, and is designed for ease of use.
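As a rough illustration of how such a Flask chat app can be wired together, here is a hedged sketch with per-session memory, a configurable system prompt, and a user-selectable model. It assumes an OpenAI-compatible client and is not Jarvis-Chat's actual code; the route, field names, and in-memory store are illustrative.

```python
# Illustrative Flask chat endpoint: per-session memory, configurable system
# prompt, selectable model. Not the actual Jarvis-Chat implementation.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Simple in-memory conversation store keyed by session id (illustrative only).
chat_histories: dict[str, list[dict]] = {}

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json()
    session_id = payload.get("session_id", "default")
    system_prompt = payload.get("system_prompt", "You are a helpful assistant.")
    model = payload.get("model", "gpt-4o-mini")  # user-selectable model

    history = chat_histories.setdefault(session_id, [])
    history.append({"role": "user", "content": payload["message"]})

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system_prompt}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(debug=True)
```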
Orion is a web-based chat interface that simplifies interactions with multiple AI model providers.
Using Cerebras LLM for Recursive Vector Disambiguation
This PWA offers a fully offline inference runtime with a chat model fine-tuned from Cerebras-GPT 111M.
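For context, the underlying base checkpoint is available on the Hugging Face Hub; a minimal sketch of loading it locally with transformers follows. This only shows the base model, not the PWA's fine-tuned, in-browser runtime.

```python
# Sketch: local text generation with the Cerebras-GPT 111M base checkpoint.
# The PWA above uses its own fine-tuned model and offline runtime; this only
# demonstrates the publicly available base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/Cerebras-GPT-111M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello, how are you today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```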
Replicating a simple clone of NotebookLM using CrewAI + Cerebras (Llama3.1-70B) + ElevenLabs!
This evaluation explores the In-context learning (ICL) capabilities of pre-trained language models on arithmetic tasks and sentiment analysis using synthetic datasets. The goal is to use different prompting strategies—zero-shot, few-shot, and chain-of-thought—to assess the performance of these models on the given tasks.
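To make the three prompting strategies concrete, here is a small sketch of how zero-shot, few-shot, and chain-of-thought prompts might be constructed for an arithmetic item; the wording and exemplars are assumptions for illustration, not the evaluation's actual prompts or data.

```python
# Illustrative prompt templates for the three strategies described above.
# The phrasing and exemplars are assumptions, not taken from the evaluation.
question = "What is 17 + 26?"

zero_shot = f"Answer the question.\nQ: {question}\nA:"

few_shot = (
    "Q: What is 3 + 5?\nA: 8\n"
    "Q: What is 12 + 9?\nA: 21\n"
    f"Q: {question}\nA:"
)

chain_of_thought = (
    "Q: What is 12 + 9?\n"
    "A: 12 + 9 = 12 + 8 + 1 = 20 + 1 = 21. The answer is 21.\n"
    f"Q: {question}\nA: Let's think step by step."
)

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```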