- Large language models (LLMs) have reshaped natural language processing, demonstrating strong performance across a wide array of tasks, from simple text generation to complex problem-solving.
- As the potential of LLMs continues to unfold, there is increasing demand to tailor these models to specific domains and industries, so that their broad knowledge base is attuned to specialized requirements.
- This repo aims to create a database of domain-specific LLMs optimized for different sectors, ranging from healthcare and legal to finance and entertainment.
- It seeks to bridge the gap between generic LLMs and niche applications, showcasing tools that truly understand and cater to the unique linguistic nuances and knowledge demands of different industries.
Name | Type | Description | Demo | Paper | Repo | Site |
---|---|---|---|---|---|---|
ProtGPT2 | Pre-trained | LLM (738 million parameters) for protein engineering and design, trained on protein sequence space to generate de novo protein sequences that follow the principles of natural ones. | 🔗 | - | - | 🔗 |
Name | Type | Description | Demo | Paper | Repo | Site |
---|---|---|---|---|---|---|
BloombergGPT | Pre-trained | 50-billion-parameter LLM trained on a wide range of financial data (a 363-billion-token dataset) | - | 🔗 | - | - |
FinChat | ? | Generative AI tool for investment research, reducing the time required for data aggregation, visualization, and summarization. | 🔗 | - | - | 🔗 |
FinGPT | Fine-tuned | Series of LLMs fine-tuned on base models (e.g., Llama-2) with open finance data | - | 🔗 | 🔗 | 🔗 |
FinMA | Fine-tuned | Financial LLM fine-tuned from LLaMA on 136K finance-based instruction samples | 🔗 | 🔗 | 🔗 | - |
Ask FT | ? | LLM tool that allows users to ask any question and receive a response using Financial Times (FT) content published over the last two decades. | 🔗 | 🔗 | - | - |
Name | Type | Description | Repo | Paper | Demo | Site |
---|---|---|---|---|---|---|
Med-PaLM | Fine-tuned | Google's LLM (fine-tuned using PaLM as the base model) designed to provide high-quality answers to medical questions. | - | 🔗 | - | 🔗 |
Med-PaLM 2 | Fine-tuned | Enhanced version of Med-PaLM released by Google in March 2023 with improved performance | 🔗 | 🔗 | 🔗 | 🔗 |
PharmacyGPT | In-context Learning | GPT-4 coupled with in-context learning (a dynamic prompting approach) using domain-specific data | - | 🔗 | - | - |
RUSSELL-GPT | Fine-tuned | LLM developed by National University Health System in Singapore to enhance clinicians' productivity (e.g., medical Q&A, case note summarization) | - | - | - | 🔗 |
PH-LLM | Fine-tuned | Personal Health Large Language Model: a fine-tuned version of Gemini designed to generate insights and recommendations for improving personal health behaviors related to sleep and fitness. | - | 🔗 | - | 🔗 |
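The PharmacyGPT entry above describes in-context learning via dynamic prompting: selecting domain-specific examples at query time and prepending them to the prompt. A minimal sketch of that general pattern (the example store, the token-overlap similarity metric, and the Q/A template are illustrative assumptions, not PharmacyGPT's actual implementation, which the paper should be consulted for):

```python
# Dynamic few-shot prompting sketch.
# Assumption: similarity is token-overlap (Jaccard); production systems
# typically rank examples by embedding similarity instead.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two strings (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_prompt(query: str, examples: list[tuple[str, str]], k: int = 2) -> str:
    """Pick the k examples most similar to the query and prepend them."""
    ranked = sorted(examples, key=lambda ex: jaccard(query, ex[0]), reverse=True)
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in ranked[:k])
    return f"{shots}\n\nQ: {query}\nA:"

# Hypothetical domain examples, for illustration only.
examples = [
    ("What is the adult dosing for amoxicillin?", "Typically 500 mg every 8 hours."),
    ("Does warfarin interact with aspirin?", "Yes; combined use raises bleeding risk."),
    ("How should insulin be stored?", "Refrigerate unopened vials at 2-8 °C."),
]

prompt = build_prompt("Is there an interaction between warfarin and ibuprofen?", examples, k=1)
print(prompt)
```

The assembled prompt would then be sent to the underlying model (GPT-4 in PharmacyGPT's case), so the model conditions on the most relevant examples without any fine-tuning.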
Name | Type | Description | Repo | Paper | Demo | Site |
---|---|---|---|---|---|---|
OWL | Fine-tuned | Large language model for IT operations, fine-tuned on the custom Owl-Instruct dataset covering a wide range of IT-related information | - | 🔗 | - | - |
Name | Type | Description | Repo | Paper | Demo | Site |
---|---|---|---|---|---|---|
TelecomGPT: A Framework to Build Telecom-Specific Large Language Models | Fine-tuned | Telecom-specific LLM which can be used for multiple downstream tasks in the telecom domain | - | 🔗 | - | - |
- Include examples from the range of other domains/industries listed in Contributing
- Include non-LLM GenAI examples (expand scope of repo)