Swarm Models provides a unified, secure, and highly scalable interface for interacting with multiple LLM and multi-modal APIs across different providers. It is built to streamline your API integrations, ensuring production-grade reliability and robust performance.
- Multi-Provider Support: Integrate seamlessly with APIs from OpenAI, Anthropic, Azure, and more.
- Enterprise-Grade Security: Built-in security protocols protect your API keys and sensitive data, ensuring compliance with industry standards.
- Lightning-Fast Performance: Optimized for low latency and high throughput, Swarm Models delivers fast API responses suitable for real-time applications.
- Ease of Use: Simplified API interaction with intuitive `.run(task)` and `__call__` methods makes integration effortless.
- Scalability for All Use Cases: Whether it's a small script or a massive enterprise-scale application, Swarm Models scales effortlessly.
- Production-Grade Reliability: Tested and proven in enterprise environments, ensuring consistent uptime and failover capabilities.
Swarm Models simplifies the way you interact with different APIs by providing a unified interface for all models.
Install the latest version from PyPI:

```bash
$ pip3 install -U swarm-models
```
Set the API keys for the providers you plan to use as environment variables:

```bash
OPENAI_API_KEY="your_openai_api_key"
GROQ_API_KEY="your_groq_api_key"
ANTHROPIC_API_KEY="your_anthropic_api_key"
AZURE_OPENAI_API_KEY="your_azure_openai_api_key"
```
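A minimal sketch of reading one of those keys in Python, assuming it has been exported in your shell (this is plain standard-library usage, not part of the Swarm Models API):

```python
import os

# os.getenv returns None if the variable is unset, so fail early and loudly.
api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set")
```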
Import the desired model from the package and initialize it with your API key or necessary configuration.
```python
from swarm_models import YourDesiredModel

# Pass your API key plus any model-specific configuration
model = YourDesiredModel(api_key="your_api_key")
```
Use the `.run(task)` method or simply call the model like `model(task)` with your task.
task = "Define your task here"
result = model.run(task)
# Or equivalently
#result = model(task)
print(result)
```python
import os

from swarm_models import OpenAIChat

# Get the OpenAI API key from the environment variable
api_key = os.getenv("OPENAI_API_KEY")

# Create an instance of the OpenAIChat class
model = OpenAIChat(openai_api_key=api_key, model_name="gpt-4o-mini")

# Query the model with a question
out = model(
    "What is the best state to register a business in the US for the least amount of taxes?"
)

# Print the model's response
print(out)
```
The `TogetherLLM` class is designed to simplify the interaction with Together's LLM models. It provides a straightforward way to run tasks on these models, including support for concurrent and batch processing.

To use `TogetherLLM`, you need to initialize it with your API key, the name of the model you want to use, and optionally a system prompt. The system prompt is used to provide context to the model for the tasks you will run.

Here's an example of how to initialize `TogetherLLM`:
```python
import os

from swarm_models import TogetherLLM

model_runner = TogetherLLM(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    model_name="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    system_prompt="You're Larry Fink",
)
```
Once initialized, you can run tasks on the model using the `run` method. This method takes a task string as an argument and returns the response from the model.
Here's an example of running a single task:
task = "How do we allocate capital efficiently in your opinion Larry?"
response = model_runner.run(task)
print(response)
`TogetherLLM` also supports running multiple tasks concurrently using the `run_concurrently` method. This method takes a list of task strings and returns a list of responses from the model.
Here's an example of running multiple tasks concurrently:
```python
tasks = [
    "What are the top-performing mutual funds in the last quarter?",
    "How do I evaluate the risk of a mutual fund?",
    "What are the fees associated with investing in a mutual fund?",
    "Can you recommend a mutual fund for a beginner investor?",
    "How do I diversify my portfolio with mutual funds?",
]

responses = model_runner.run_concurrently(tasks)

for response in responses:
    print(response)
```
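If you need finer control over concurrency (per-task timeouts or error handling), a similar effect can be achieved with the standard library. The following is a generic sketch built around `.run` using `concurrent.futures`; it is not part of the Swarm Models API:

```python
from concurrent.futures import ThreadPoolExecutor


def run_all(model, tasks, max_workers=4):
    # Fan tasks out across a thread pool; .run is I/O-bound (network calls),
    # so threads are an appropriate concurrency primitive here.
    # pool.map preserves the input order of tasks in its results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(model.run, tasks))


# responses = run_all(model_runner, tasks)
```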
- Security: API keys and user data are handled with the utmost care, using encryption and security best practices to protect your sensitive information.
- Production Reliability: Swarm Models has undergone rigorous testing to ensure it can handle high traffic and remain resilient in enterprise-grade environments.
- Fail-Safe Mechanisms: Built-in failover handling ensures uninterrupted service even under heavy load or network issues.
- Unified API: No more dealing with multiple SDKs or libraries. Swarm Models standardizes your interactions across providers like OpenAI, Anthropic, Azure, and more, so you can focus on what matters (see the sketch below).
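As an illustration of the unified interface, the sketch below swaps providers while keeping the calling code unchanged. The keyword arguments for `Anthropic` are assumptions modeled on the `OpenAIChat` example above; verify them against the class signatures in the repository:

```python
import os

from swarm_models import Anthropic, OpenAIChat


def ask(model, task: str) -> str:
    # Every Swarm Models class exposes the same .run(task) entry point,
    # so the calling code does not care which provider sits behind it.
    return model.run(task)


openai_model = OpenAIChat(
    openai_api_key=os.getenv("OPENAI_API_KEY"), model_name="gpt-4o-mini"
)
# Hypothetical keyword argument -- check the Anthropic class before use.
anthropic_model = Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY"))

for model in (openai_model, anthropic_model):
    print(ask(model, "Summarize the MIT License in one sentence."))
```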
| Model Name | Description |
|---|---|
| `OpenAIChat` | Chat model for OpenAI's GPT-3 and GPT-4 APIs. |
| `Anthropic` | Model for interacting with Anthropic's APIs. |
| `AzureOpenAI` | Azure's implementation of OpenAI's models. |
| `Dalle3` | Model for generating images from text prompts. |
| `NvidiaLlama31B` | Llama model for causal language generation. |
| `Fuyu` | Multi-modal model for image and text processing. |
| `Gemini` | Multi-modal model for vision and language tasks. |
| `Vilt` | Vision-and-Language Transformer for question answering. |
| `TogetherLLM` | Model for collaborative language tasks. |
| `FireWorksAI` | Model for generating creative content. |
| `ReplicateChat` | Chat model for replicating conversations. |
| `HuggingfaceLLM` | Interface for Hugging Face models. |
| `CogVLMMultiModal` | Multi-modal model for vision and language tasks. |
| `LayoutLMDocumentQA` | Model for document question answering. |
| `GPT4VisionAPI` | Model for analyzing images with GPT-4 capabilities. |
| `LlamaForCausalLM` | Causal language model from the Llama family. |
| `GroundedSAMTwo` | Analyzes and tracks objects in images. GPU only. |
- Documentation: Comprehensive guides, API references, and best practices are available in our official Documentation.
- GitHub: Explore the code, report issues, and contribute to the project via our GitHub repository.
Swarm Models is released under the MIT License.
- Add Cohere models (Command R)
- Add Gemini and Google AI Studio
- Integrate Ollama extensively