Commit

Prepare for first release
olokobayusuf committed Aug 21, 2024
1 parent 21efbef commit 97a5368
Showing 9 changed files with 1,543 additions and 161 deletions.
4 changes: 3 additions & 1 deletion .vscode/launch.json
@@ -13,7 +13,9 @@
"--timeout",
"999999",
"--colors",
"--recursive"
"--recursive",
"--extensions",
"ts,tsx"
],
"internalConsoleOptions": "openOnSessionStart",
"env": {
22 changes: 12 additions & 10 deletions README.md
@@ -4,7 +4,10 @@

[![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fy5vwgXkz2f%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Function%20community)](https://fxn.ai/community)

Use local LLMs in your browser and Node.js apps. This package is designed to patch `OpenAI` and `Anthropic` clients and run inference locally in the current process, using predictors hosted on [Function](https://fxn.ai).
Use local LLMs in your browser and Node.js apps. This package is designed to patch `OpenAI` and `Anthropic` clients for running inference locally, using predictors hosted on [Function](https://fxn.ai/explore).

> [!IMPORTANT]
> This package is still a work-in-progress, and will remain in alpha until browser support is added (see #1).
> [!CAUTION]
> **Never embed access keys client-side (i.e. in the browser)**. Instead, create a proxy URL in your backend.
@@ -16,7 +19,7 @@
npm install @fxn/llm
```

## Creating a Function LLM Instance
Function LLM works by setting up an in-process server that handles requests for an LLM API provider (e.g. OpenAI, Anthropic). The client provides a `baseUrl` property that can be passed to LLM clients in your code:
Function LLM works by setting up an in-process server that handles requests for an LLM API provider (e.g. OpenAI, Anthropic).
```js
import { FunctionLLM } from "@fxn/llm"

@@ -27,14 +30,13 @@
const fxnllm = new FunctionLLM({
});
```

> [!TIP]
> Create an access key by signing onto [Function](https://fxn.ai/settings/developer).
The client provides a `baseUrl` property that can be passed to LLM clients in your code.

> [!TIP]
> If you would like to see a new LLM provider supported, please submit a PR!
> [!IMPORTANT]
> Create an access key by signing onto [Function](https://fxn.ai/settings/developer).
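
For context, here is a minimal sketch of how the truncated constructor call above might look in full. The `accessKey` option name and the environment variable are assumptions for illustration; the diff hides the actual constructor arguments, so defer to the package documentation:

```js
import { FunctionLLM } from "@fxn/llm"

// Sketch only: `accessKey` is an assumed option name; the diff above
// truncates the actual constructor arguments.
const fxnllm = new FunctionLLM({
  accessKey: process.env.FXN_ACCESS_KEY
});

// The client exposes a `baseUrl` that downstream LLM clients point at.
console.log(fxnllm.baseUrl);
```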
## Running the OpenAI Client Locally
To run text-generation and embedding models locally using the OpenAI client, specify the `baseUrl` on the client:
## Using the OpenAI Client Locally
To run text-generation and embedding models locally using the OpenAI client, pass the Function LLM `baseUrl` on the OpenAI client:
```js
import OpenAI from "openai"

@@ -48,8 +50,8 @@
const openai = new OpenAI({
> [!WARNING]
> Currently, only `openai.embeddings.create` is supported. Text generation is coming soon!
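
As a usage sketch for the supported `openai.embeddings.create` call mentioned in the warning above, with the model tag as a placeholder rather than a real predictor name:

```js
// Sketch only: the model tag below is a placeholder; substitute the
// embedding predictor you actually use on Function.
const embedding = await openai.embeddings.create({
  model: "@your-org/your-embedding-model",
  input: "What is the capital of France?"
});

console.log(embedding.data[0].embedding);
```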
## Running the Anthropic Client Locally
*INCOMPLETE*
## Using the Anthropic Client Locally
*Coming soon*

___

Empty file added examples/.gitkeep
Empty file.