feat(adapters): ✨ add Azure OpenAI
strayer committed Nov 4, 2024
1 parent ebbf6c7 commit ae2a474
Showing 4 changed files with 252 additions and 4 deletions.
41 changes: 39 additions & 2 deletions README.md
@@ -11,7 +11,7 @@
</p>

<p align="center">
Currently supports: Anthropic, Copilot, Gemini, Ollama, OpenAI and xAI adapters<br><br>
Currently supports: Anthropic, Copilot, Gemini, Ollama, OpenAI, Azure OpenAI and xAI adapters<br><br>
New features are always announced <a href="https://github.com/olimorris/codecompanion.nvim/discussions/categories/announcements">here</a>
</p>

@@ -28,7 +28,7 @@ Thank you to the following people:
## :sparkles: Features

- :speech_balloon: [Copilot Chat](https://github.com/features/copilot) meets [Zed AI](https://zed.dev/blog/zed-ai), in Neovim
- :electric_plug: Support for Anthropic, Copilot, Gemini, Ollama, OpenAI and xAI LLMs (or bring your own!)
- :electric_plug: Support for Anthropic, Copilot, Gemini, Ollama, OpenAI, Azure OpenAI and xAI LLMs (or bring your own!)
- :rocket: Inline transformations, code creation and refactoring
- :robot: Variables, Slash Commands, Agents/Tools and Workflows to improve LLM output
- :sparkles: Built in prompt library for common tasks like advice on LSP errors and code explanations
@@ -254,6 +254,7 @@ The plugin uses adapters to connect to LLMs. Out of the box, the plugin supports
- Gemini (`gemini`) - Requires an API key
- Ollama (`ollama`) - Both local and remotely hosted
- OpenAI (`openai`) - Requires an API key
- Azure OpenAI (`azure_openai`) - Requires an Azure OpenAI service with a model deployment
- xAI (`xai`) - Requires an API key

The plugin utilises objects called Strategies. These are the different ways that a user can interact with the plugin. The _chat_ strategy harnesses a buffer to allow direct conversation with the LLM. The _inline_ strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer. The _agent_ and _workflow_ strategies are wrappers for the _chat_ strategy, allowing for [tool use](#robot-agents--tools) and [agentic workflows](#world_map-agentic-workflows).
@@ -404,6 +405,42 @@ require("codecompanion").setup({
})
```

**Using Azure OpenAI**

To use Azure OpenAI, you need an Azure OpenAI service with a model deployment and the service's API key. Follow these steps to configure the adapter:

1. Create an Azure OpenAI service in your Azure portal.
2. Deploy a model in the Azure OpenAI service.
3. Obtain the API key from the Azure portal.

Then, configure the adapter in your setup as follows:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      adapter = "azure_openai",
    },
    inline = {
      adapter = "azure_openai",
    },
  },
  adapters = {
    azure_openai = function()
      return require("codecompanion.adapters").extend("azure_openai", {
        env = {
          api_key = "YOUR_AZURE_OPENAI_API_KEY",
          endpoint = "YOUR_AZURE_OPENAI_ENDPOINT",
        },
        schema = {
          model = "YOUR_DEPLOYMENT_NAME",
        },
      })
    end,
  },
})
```
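
The `env` values above can also come from your shell. A minimal sketch, assuming `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` are exported as environment variables (the adapter's defaults reference these names, which suggests they are resolved from the environment), leaving only the deployment name to configure:

```lua
require("codecompanion").setup({
  adapters = {
    azure_openai = function()
      return require("codecompanion.adapters").extend("azure_openai", {
        -- api_key and endpoint fall back to the adapter defaults, which
        -- reference the AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT
        -- environment variables
        schema = {
          model = "YOUR_DEPLOYMENT_NAME",
        },
      })
    end,
  },
})
```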

**Connecting via a Proxy**

You can also connect via a proxy:
42 changes: 40 additions & 2 deletions doc/codecompanion.txt
@@ -1,4 +1,4 @@
*codecompanion.txt* For NVIM v0.10.0 Last change: 2024 October 29
*codecompanion.txt* For NVIM v0.10.0 Last change: 2024 November 04

==============================================================================
Table of Contents *codecompanion-table-of-contents*
@@ -15,7 +15,7 @@ Table of Contents *codecompanion-table-of-contents*
FEATURES *codecompanion-features*

- Copilot Chat <https://github.com/features/copilot> meets Zed AI <https://zed.dev/blog/zed-ai>, in Neovim
- Support for Anthropic, Copilot, Gemini, Ollama, OpenAI and xAI LLMs (or bring your own!)
- Support for Anthropic, Copilot, Gemini, Ollama, OpenAI, Azure OpenAI and xAI LLMs (or bring your own!)
- Inline transformations, code creation and refactoring
- Variables, Slash Commands, Agents/Tools and Workflows to improve LLM output
- Built in prompt library for common tasks like advice on LSP errors and code explanations
@@ -233,6 +233,7 @@ supports:
- Gemini (`gemini`) - Requires an API key
- Ollama (`ollama`) - Both local and remotely hosted
- OpenAI (`openai`) - Requires an API key
- Azure OpenAI (`azure_openai`) - Requires an Azure OpenAI service with a model deployment
- xAI (`xai`) - Requires an API key

The plugin utilises objects called Strategies. These are the different ways
@@ -411,6 +412,43 @@ set an API key:
})
<

**Using Azure OpenAI**

To use Azure OpenAI, you need an Azure OpenAI service with a model deployment
and the service's API key. Follow these steps to configure the adapter:

1. Create an Azure OpenAI service in your Azure portal.
2. Deploy a model in the Azure OpenAI service.
3. Obtain the API key from the Azure portal.

Then, configure the adapter in your setup as follows:

>lua
require("codecompanion").setup({
  strategies = {
    chat = {
      adapter = "azure_openai",
    },
    inline = {
      adapter = "azure_openai",
    },
  },
  adapters = {
    azure_openai = function()
      return require("codecompanion.adapters").extend("azure_openai", {
        env = {
          api_key = "YOUR_AZURE_OPENAI_API_KEY",
          endpoint = "YOUR_AZURE_OPENAI_ENDPOINT",
        },
        schema = {
          model = "YOUR_DEPLOYMENT_NAME",
        },
      })
    end,
  },
})
<
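
The adapter defaults to API version `2024-06-01`. If your deployment expects a
different version, the same `extend()` call can override it. A minimal sketch,
assuming the version string below (a hypothetical example) is one your service
supports:

>lua
require("codecompanion.adapters").extend("azure_openai", {
  env = {
    -- hypothetical value; use the API version your deployment supports
    api_version = "2024-08-01-preview",
  },
})
<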

**Connecting via a Proxy**

You can also connect via a proxy:
172 changes: 172 additions & 0 deletions lua/codecompanion/adapters/azure_openai.lua
@@ -0,0 +1,172 @@
local openai = require("codecompanion.adapters.openai")

---@class AzureOpenAI.Adapter: CodeCompanion.Adapter
return {
  name = "azure_openai",
  roles = {
    llm = "assistant",
    user = "user",
  },
  opts = {
    stream = true,
  },
  features = {
    text = true,
    tokens = true,
    vision = true,
  },
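  -- ${endpoint}, ${deployment} and ${api_version} are substituted from the
  -- matching keys in the env table below when the request URL is built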
  url = "${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${api_version}",
  env = {
    api_key = "AZURE_OPENAI_API_KEY",
    endpoint = "AZURE_OPENAI_ENDPOINT",
    api_version = "2024-06-01",
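    -- "schema.model" resolves to the model selected in the schema below, so
    -- the deployment segment of the URL follows the chosen model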
    deployment = "schema.model",
  },
  raw = {
    "--no-buffer",
    "--silent",
  },
  headers = {
    ["Content-Type"] = "application/json",
["api-key"] = "${api_key}",
},
handlers = {
--- Use the OpenAI adapter for the bulk of the work
setup = function(self)
return openai.handlers.setup(self)
end,
tokens = function(self, data)
return openai.handlers.tokens(self, data)
end,
form_parameters = function(self, params, messages)
return openai.handlers.form_parameters(self, params, messages)
end,
form_messages = function(self, messages)
return openai.handlers.form_messages(self, messages)
end,
chat_output = function(self, data)
return openai.handlers.chat_output(self, data)
end,
inline_output = function(self, data, context)
return openai.handlers.inline_output(self, data, context)
end,
on_exit = function(self, data)
return openai.handlers.on_exit(self, data)
end,
},
schema = {
-- See https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions
-- model does not exist in the Azure OpenAI API, but this is used as the deployment in the URL.
model = {
order = 1,
mapping = "parameters",
type = "enum",
desc = "ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.",
default = "gpt-4o",
choices = {
"gpt-4o",
"gpt-4o-mini",
"gpt-4-turbo-preview",
"gpt-4",
"gpt-3.5-turbo",
},
},
temperature = {
order = 2,
mapping = "parameters",
type = "number",
optional = true,
default = 1,
desc = "What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.",
validate = function(n)
return n >= 0 and n <= 2, "Must be between 0 and 2"
end,
},
top_p = {
order = 3,
mapping = "parameters",
type = "number",
optional = true,
default = 1,
desc = "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.",
validate = function(n)
return n >= 0 and n <= 1, "Must be between 0 and 1"
end,
},
stop = {
order = 4,
mapping = "parameters",
type = "list",
optional = true,
default = nil,
subtype = {
type = "string",
},
desc = "Up to 4 sequences where the API will stop generating further tokens.",
validate = function(l)
return #l >= 1 and #l <= 4, "Must have between 1 and 4 elements"
end,
},
max_tokens = {
order = 5,
mapping = "parameters",
type = "integer",
optional = true,
default = nil,
desc = "The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.",
validate = function(n)
return n > 0, "Must be greater than 0"
end,
},
presence_penalty = {
order = 6,
mapping = "parameters",
type = "number",
optional = true,
default = 0,
desc = "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
validate = function(n)
return n >= -2 and n <= 2, "Must be between -2 and 2"
end,
},
frequency_penalty = {
order = 7,
mapping = "parameters",
type = "number",
optional = true,
default = 0,
desc = "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
validate = function(n)
return n >= -2 and n <= 2, "Must be between -2 and 2"
end,
},
logit_bias = {
order = 8,
mapping = "parameters",
type = "map",
optional = true,
default = nil,
desc = "Modify the likelihood of specified tokens appearing in the completion. Maps tokens (specified by their token ID) to an associated bias value from -100 to 100. Use https://platform.openai.com/tokenizer to find token IDs.",
subtype_key = {
type = "integer",
},
subtype = {
type = "integer",
validate = function(n)
return n >= -100 and n <= 100, "Must be between -100 and 100"
end,
},
},
user = {
order = 9,
mapping = "parameters",
type = "string",
optional = true,
default = nil,
desc = "A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.",
      validate = function(u)
        return u:len() < 100, "Cannot be longer than 100 characters"
      end,
    },
  },
}
1 change: 1 addition & 0 deletions lua/codecompanion/config.lua
@@ -10,6 +10,7 @@ local defaults = {
  adapters = {
    -- LLMs -------------------------------------------------------------------
    anthropic = "anthropic",
    azure_openai = "azure_openai",
    copilot = "copilot",
    gemini = "gemini",
    ollama = "ollama",
