
Commit

Merge pull request #17 from atomiechen/dev
Bump version to 0.6.0
atomiechen authored Feb 23, 2024
2 parents 43197b5 + 5b07de1 commit 7219e8f
Showing 31 changed files with 1,967 additions and 793 deletions.
5 changes: 5 additions & 0 deletions .gitignore
@@ -3,9 +3,14 @@
dist
*.egg-info

# credentials
*.env

# virtual environment
venv
.venv

# temporary files
tmp*
temp*

121 changes: 101 additions & 20 deletions README.md
@@ -6,7 +6,19 @@ A handy toolkit for using LLM.



## 🌟 Why HandyLLM?

☯️ Both sync and async APIs supported with a unified design

🍡 OpenAI and Azure APIs all in one

☕️ Easy life with API endpoint management

📃 Write chat prompts in a human-friendly mark-up format



## Installation

```shell
pip3 install handyllm
```

@@ -20,31 +32,96 @@

```shell
pip3 install git+https://github.com/atomiechen/handyllm.git
```



## Usage

More example scripts are placed in the [tests](./tests) folder.

### Using OpenAIClient

Each API method of `OpenAIClient` returns a `Requestor`; execute its `call()` or `acall()` to make a synchronous or asynchronous API call.

Synchronous API usage:

```python
from handyllm import OpenAIClient
with OpenAIClient(api_key='<your-key>') as client:
    response = client.chat(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "please tell me a joke"}]
    ).call()  ## note .call() here
    print(response['choices'][0]['message']['content'])
```

Asynchronous API usage:

```python
async with OpenAIClient('async', api_key='<your-key>') as client_async:
    response = await client_async.chat(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "please tell me a joke"}]
    ).acall()  ## note .acall() here
    print(response['choices'][0]['message']['content'])
```

You can instantiate a client that supports both modes:

```python
client = OpenAIClient('sync') ## only supports sync APIs
client = OpenAIClient('async') ## only supports async APIs
client = OpenAIClient('both') ## supports both versions
```
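
For instance, a single `'both'`-mode client can serve sync and async code paths side by side. Below is a minimal sketch, assuming (per the sections above) that each API method returns a `Requestor` exposing both `call()` and `acall()`; cleanup and error handling are omitted:

```python
import asyncio
from handyllm import OpenAIClient

# a 'both'-mode client: the same instance serves sync and async callers
client = OpenAIClient('both', api_key='<your-key>')

# synchronous path
joke = client.chat(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "please tell me a joke"}]
).call()
print(joke['choices'][0]['message']['content'])

# asynchronous path, e.g. from within an asyncio event loop
async def tell_joke():
    response = await client.chat(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "please tell me a joke"}]
    ).acall()
    return response['choices'][0]['message']['content']

print(asyncio.run(tell_joke()))
```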



### Legacy: Using OpenAIAPI proxy

> [!IMPORTANT]
> This is not recommended anymore. Use `OpenAIClient` instead.

Under the hood it connects to a module-level client and only provides **synchronous** APIs, **without** `call()`.

```python
from handyllm import OpenAIAPI

OpenAIAPI.api_key = '<your-key>'
response = OpenAIAPI.chat(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "please tell me a joke"}]
)  ## no .call() here
print(response['choices'][0]['message']['content'])
```



## OpenAI API Request

### Endpoints

Each API request will connect to an endpoint along with some API configurations, which include:

| Field            | Description                                                  | Type                    |
| ---------------- | ------------------------------------------------------------ | ----------------------- |
| api_type         | API type. Defaults to `openai`.                              | str: `openai` / `azure` |
| api_base         | API base URL. Defaults to the OpenAI base URL.               | str                     |
| api_key          | API key.                                                     | str                     |
| organization     | Organization.                                                | str                     |
| api_version      | API version. **Must be provided for Azure endpoints.**       | str                     |
| model_engine_map | Map from model name to engine name. Useful for Azure endpoints if you have custom model names. | dict |

An `Endpoint` object contains this information. An `EndpointManager` acts like a list and can be used to rotate to the next endpoint. See [test_endpoint.py](./tests/test_endpoint.py).
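
A hedged sketch of rotating endpoints, assuming `Endpoint` accepts the fields above as keyword arguments and that the list-like `EndpointManager` can be `append`ed to — see [test_endpoint.py](./tests/test_endpoint.py) for the authoritative usage:

```python
from handyllm import OpenAIClient, Endpoint, EndpointManager

# two endpoints with different credentials; field names follow the table above
endpoint1 = Endpoint(api_key='<key-1>')
endpoint2 = Endpoint(api_type='azure', api_base='<azure-base-url>',
                     api_key='<key-2>', api_version='2023-05-15')

endpoint_manager = EndpointManager()
endpoint_manager.append(endpoint1)  # assumes list-like append
endpoint_manager.append(endpoint2)

client = OpenAIClient()
response = client.chat(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "please tell me a joke"}],
    endpoint_manager=endpoint_manager,  # rotates to the next endpoint per request
).call()
```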

Methods for configuring endpoint info, checked in **top-down** order (a field set to `None` at one level is filled by the first non-`None` value from the levels below, falling back to OpenAI's default endpoint information):

| Configuration method | Description |
| -------------------------------------------------- | ------------------------------------------------------------ |
| API keyword parameters | e.g.: `chat(api_key='xxx', ...)` |
| API `endpoint` keyword parameter | Providing an `Endpoint`, e.g.: `chat(endpoint=MyEndpoint)` |
| API `endpoint_manager` keyword parameter | Providing an `EndpointManager`, e.g.: `chat(endpoint_manager=MyEndpointManager)` |
| `OpenAIClient` instance (or `OpenAIAPI`) variables | e.g.: `client.api_key = 'xxx'` / `OpenAIAPI.api_key = 'xxx'` |
| Environment variables | `OPENAI_API_KEY`, `OPENAI_ORGANIZATION`/`OPENAI_ORG_ID`, `OPENAI_API_BASE`, `OPENAI_API_TYPE`, `OPENAI_API_VERSION`, `MODEL_ENGINE_MAP`. |
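
For example, a sketch of how the per-call, instance-level, and environment-level settings interact; the per-call keyword wins because it sits highest in the table:

```python
import os
from handyllm import OpenAIClient

# lowest priority: environment variable
os.environ['OPENAI_API_KEY'] = '<env-key>'

client = OpenAIClient()
client.api_key = '<client-key>'  # instance variable overrides the environment

response = client.chat(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    api_key='<call-key>',  # highest priority: per-call keyword parameter
).call()
```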

> [!TIP]
>
> **Azure OpenAI APIs are supported:** Specify `api_type='azure'`, and set `api_base` and `api_key` accordingly. Set `model_engine_map` if you want to use the `model` parameter instead of `engine`/`deployment_id`. See [test_azure.py](./tests/test_azure.py). Please refer to [Azure OpenAI Service Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/) for details.
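
A hedged Azure sketch following this tip, configured via client instance variables (only `client.api_key` appears in the table above; the other attribute names are assumed by analogy) with placeholder resource names — see [test_azure.py](./tests/test_azure.py) for the tested setup:

```python
from handyllm import OpenAIClient

client = OpenAIClient()
client.api_type = 'azure'
client.api_base = 'https://<your-resource>.openai.azure.com'
client.api_key = '<your-azure-key>'
client.api_version = '2023-05-15'  # required for Azure endpoints
# map model names to your Azure deployment (engine) names
client.model_engine_map = {'gpt-3.5-turbo': '<your-deployment-name>'}

response = client.chat(
    model="gpt-3.5-turbo",  # resolved to the engine via model_engine_map
    messages=[{"role": "user", "content": "please tell me a joke"}],
).call()
```
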
### Logger

@@ -55,16 +132,17 @@ You can pass custom `logger` and `log_marks` (a string or a collection of string
This toolkit supports client-side `timeout` control:

```python
from handyllm import OpenAIClient

client = OpenAIClient()
prompt = [{
    "role": "user",
    "content": "please tell me a joke"
}]
response = client.chat(
    model="gpt-3.5-turbo",
    messages=prompt,
    timeout=10
).call()
print(response['choices'][0]['message']['content'])
```

Expand All @@ -73,15 +151,18 @@ print(response['choices'][0]['message']['content'])
Streaming responses of `chat`/`completions`/`finetunes_list_events` can be enabled using the `stream` parameter:

```python
from handyllm import OpenAIClient, stream_chat

client = OpenAIClient()
response = client.chat(
    model="gpt-3.5-turbo",
    messages=prompt,
    timeout=10,
    stream=True
).call()

# you can use this to stream the response text
for text in stream_chat(response):
    print(text, end='')

# or you can use this to get the whole response
```

@@ -120,7 +201,7 @@ Please refer to [OpenAI official API reference](https://platform.openai.com/docs



## Chat Prompt

### Prompt Conversion

6 changes: 5 additions & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

[project]
name = "HandyLLM"
version = "0.5.5"
version = "0.6.0"
authors = [
{ name="Atomie CHEN", email="atomic_cwh@163.com" },
]
@@ -19,8 +19,12 @@ classifiers = [
keywords = ["LLM", "Large Language Model", "Prompt", "OpenAI", "API"]
dependencies = [
"requests",
"httpx"
]

[project.urls]
"Homepage" = "https://github.com/atomiechen/HandyLLM"
"Bug Tracker" = "https://github.com/atomiechen/HandyLLM/issues"

[project.optional-dependencies]
test = ["python-dotenv"]
12 changes: 3 additions & 9 deletions src/handyllm/__init__.py
@@ -1,12 +1,6 @@
from .openai_client import OpenAIClient, ClientMode
from .requestor import Requestor
from .openai_api import OpenAIAPI
from .endpoint_manager import Endpoint, EndpointManager
from .prompt_converter import PromptConverter
from .utils import *
7 changes: 7 additions & 0 deletions src/handyllm/_constants.py
@@ -0,0 +1,7 @@
_API_BASE_OPENAI = 'https://api.openai.com/v1'
_API_TYPE_OPENAI = 'openai'
_API_TYPES_AZURE = (
    'azure',
    'azure_ad',
    'azuread'
)
