The simplicity and elegance of python-requests, but for LLMs. This library supports models from OpenAI and Anthropic. I'll add more providers as time allows, and I warmly accept pull requests if that's of interest.
```python
import llm
llm.set_api_key(openai="sk-...", anthropic="sk-...")

# Chat
llm.chat("what is 2+2")  # 4. Uses GPT-3 by default if a key is provided.
llm.chat("what is 2+2", engine="anthropic:claude-instant-v1")  # 4.

# Completion
llm.complete("hello, I am")  # A GPT model.
llm.complete("hello, I am", engine="openai:gpt-4")  # A big GPT model.
llm.complete("hello, I am ", engine="anthropic:claude-instant-v1")  # Claude.

# Back-and-forth chat [human, assistant, human]
llm.chat(["hi", "hi there, how are you?", "good, tell me a joke"])  # Why did the chicken cross the road?

# Streaming chat
llm.stream_chat(["what is 2+2"])  # 4.

# Streaming chat from multiple engines at once
llm.multi_stream_chat(
    ["what is 2+2"],
    engines=["anthropic:claude-instant-v1", "openai:gpt-3.5-turbo"],
)
# Results will stream back to you from both models at the same time, like this:
# ["anthropic:claude-instant-v1", "hi"], ["openai:gpt-3.5-turbo", "howdy"],
# ["anthropic:claude-instant-v1", " there"], ["openai:gpt-3.5-turbo", " my friend"]
```
Engines are specified in `provider:model` format, as in `openai:gpt-4` or `anthropic:claude-instant-v1`.
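To make the engine format concrete, here is a small hypothetical helper (not part of this library's API) that splits such a string into its provider and model parts:

```python
def parse_engine(engine: str) -> tuple[str, str]:
    # Split a "provider:model" engine string into its two parts.
    # maxsplit=1 keeps any ":" inside the model name intact.
    provider, model = engine.split(":", 1)
    return provider, model

print(parse_engine("openai:gpt-4"))  # ('openai', 'gpt-4')
print(parse_engine("anthropic:claude-instant-v1"))  # ('anthropic', 'claude-instant-v1')
```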
Since this feature is hard to convey in static text, I've included a video of it in action.
multistream.mov
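As a sketch of how this kind of interleaved streaming can work under the hood, here is a standalone illustration that merges two fake token generators with one thread per stream and a shared queue. This is an assumption about the general technique, not this library's actual implementation:

```python
import queue
import threading
from typing import Iterator

def fake_stream(engine: str, chunks: list[str]) -> Iterator[list[str]]:
    # Stand-in for a per-engine token stream.
    for chunk in chunks:
        yield [engine, chunk]

def interleave(streams: list[Iterator[list[str]]]) -> Iterator[list[str]]:
    # Drain each stream on its own thread and merge through one queue,
    # so chunks arrive as soon as any engine produces them.
    q: "queue.Queue[list[str] | None]" = queue.Queue()

    def pump(stream: Iterator[list[str]]) -> None:
        for item in stream:
            q.put(item)
        q.put(None)  # sentinel: this stream is done

    for s in streams:
        threading.Thread(target=pump, args=(s,), daemon=True).start()

    finished = 0
    while finished < len(streams):
        item = q.get()
        if item is None:
            finished += 1
        else:
            yield item

chunks = list(interleave([
    fake_stream("anthropic:claude-instant-v1", ["hi", " there"]),
    fake_stream("openai:gpt-3.5-turbo", ["howdy", " my friend"]),
]))
print(chunks)  # interleaving order varies; each engine's chunks stay in order
```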
To install `python-llm`, use pip: `pip install python-llm`.
You can set API keys in a few ways:
- Through environment variables (you can also set a `.env` file):

  ```shell
  export OPENAI_API_KEY=sk_...
  export ANTHROPIC_API_KEY=sk_...
  ```
- By calling the method manually:

  ```python
  import llm
  llm.set_api_key(openai="sk-...", anthropic="sk-...")
  ```
- By passing a JSON file like this:

  ```python
  llm.set_api_key("path/to/api_keys.json")
  ```

  The JSON should look like:

  ```json
  {
    "openai": "sk-...",
    "anthropic": "sk-..."
  }
  ```
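For reference, a key file in that shape can be written and read back with the standard `json` module. This is a standalone sketch with dummy values; it doesn't call `llm` itself:

```python
import json
import os
import tempfile

# Write a key file in the shape shown above (dummy values), then read it back.
keys = {"openai": "sk-...", "anthropic": "sk-..."}
path = os.path.join(tempfile.mkdtemp(), "api_keys.json")
with open(path, "w") as f:
    json.dump(keys, f)

with open(path) as f:
    loaded = json.load(f)
print(loaded["openai"])  # sk-...
```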
On the roadmap:
- Caching!
- More LLM vendors!
- More tests!