Currently supports: Anthropic, Ollama and OpenAI adapters
Important
This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-cases. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be intermittent. To be notified of breaking changes in the plugin, please subscribe to this issue.
- 💬 A Copilot Chat experience in Neovim
- 🔌 Adapter support for many LLMs
- 🤖 Agentic Workflows and Tools to improve LLM output
- 🚀 Inline code creation and modification
- ✨ Built in actions for specific language prompts, LSP error fixes and code advice
- 🏗️ Create your own custom actions for Neovim
- 💾 Save and restore your chats
- 💪 Async execution for improved performance
[Video demo: the chat buffer]
[Video demo: inline code]
- The `curl` library installed
- Neovim 0.9.2 or greater
- (Optional) An API key for your chosen LLM
- (Optional) The `base64` library installed
Install the plugin with your package manager of choice:
-- Lazy.nvim
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "nvim-telescope/telescope.nvim", -- Optional
    {
      "stevearc/dressing.nvim", -- Optional: Improves the default Neovim UI
      opts = {},
    },
  },
  config = true
}
-- Packer.nvim
use({
"olimorris/codecompanion.nvim",
config = function()
require("codecompanion").setup()
end,
requires = {
"nvim-lua/plenary.nvim",
"nvim-treesitter/nvim-treesitter",
"nvim-telescope/telescope.nvim", -- Optional
"stevearc/dressing.nvim" -- Optional: Improves the default Neovim UI
}
})
You only need to call the `setup` function if you wish to change any of the defaults:
require("codecompanion").setup({
adapters = {
anthropic = "anthropic",
ollama = "ollama",
openai = "openai",
},
strategies = {
chat = "openai",
inline = "openai",
tools = "openai",
},
tools = {
["code_runner"] = {
name = "Code Runner",
description = "Run code generated by the LLM",
enabled = true,
},
opts = {
auto_submit_errors = false, -- Automatically send and submit the errors to the LLM?
mute_errors = false, -- Hide any tool errors from being displayed in Neovim?
},
},
saved_chats = {
save_dir = vim.fn.stdpath("data") .. "/codecompanion/saved_chats", -- Path to save chats to
},
display = {
action_palette = {
width = 95,
height = 10,
},
chat = {
window = {
layout = "float",
border = "single",
height = 0.8,
width = 0.8,
relative = "editor",
opts = {
cursorcolumn = false,
cursorline = false,
foldcolumn = "0",
linebreak = true,
list = false,
signcolumn = "no",
spell = false,
wrap = true,
},
},
show_settings = true,
show_token_count = true,
},
},
keymaps = {
["<C-s>"] = "keymaps.save", -- Save the chat buffer and trigger the LLM
["<C-c>"] = "keymaps.close", -- Close the chat buffer
["q"] = "keymaps.cancel_request", -- Cancel the currently streaming request
["gc"] = "keymaps.clear", -- Clear the contents of the chat
["ga"] = "keymaps.codeblock", -- Insert a codeblock into the chat
["gs"] = "keymaps.save_chat", -- Save the current chat
["gt"] = "keymaps.add_tool", -- Add a tool to the current chat buffer
["]"] = "keymaps.next", -- Move to the next header in the chat
["["] = "keymaps.previous", -- Move to the previous header in the chat
},
log_level = "ERROR", -- TRACE|DEBUG|ERROR
send_code = true, -- Send code context to the LLM? Disable to prevent leaking code outside of Neovim
silence_notifications = false, -- Silence notifications for actions like saving saving chats?
use_default_actions = true, -- Use the default actions in the action palette?
})
Warning
Depending on your chosen adapter, you may need to set an API key.
The plugin uses adapters to bridge between LLMs and the plugin itself. Currently the plugin supports:
- Anthropic (`anthropic`) - Requires an API key
- Ollama (`ollama`)
- OpenAI (`openai`) - Requires an API key
Strategies are the different ways that a user can interact with the plugin. The chat and tool strategies harness a buffer to allow direct conversation with the LLM. The inline strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer.
To specify a different adapter to the defaults, simply change the `strategies.*` table:
require("codecompanion").setup({
  strategies = {
    chat = "ollama",
    inline = "ollama",
    tool = "anthropic"
  },
})
Tip
To create your own adapter please refer to the ADAPTERS guide.
You can customise an adapter's configuration as follows:
require("codecompanion").setup({
  adapters = {
    anthropic = require("codecompanion.adapters").use("anthropic", {
      env = {
        api_key = "ANTHROPIC_API_KEY_1"
      },
    }),
  },
  strategies = {
    chat = "anthropic",
    inline = "anthropic",
    tool = "anthropic"
  },
})
In the example above, we've changed the name of the default API key which the Anthropic adapter uses. Having API keys in plain text in your shell is not always safe. Thanks to this PR, you can run commands from within the configuration:
require("codecompanion").setup({
  adapters = {
    openai = require("codecompanion.adapters").use("openai", {
      env = {
        api_key = "cmd:op read op://personal/OpenAI/credential --no-newline",
      },
    }),
  },
  strategies = {
    chat = "openai",
    inline = "anthropic",
    tool = "openai"
  },
})
In this example, we're using the 1Password CLI to read an OpenAI credential.
LLMs have many settings such as `model`, `temperature` and `max_tokens`. In an adapter, these sit within a `schema` table and can be configured during setup:
require("codecompanion").setup({
  adapters = {
    anthropic = require("codecompanion.adapters").use("anthropic", {
      schema = {
        model = {
          default = "claude-3-sonnet-20240229",
        },
      },
    }),
  },
})
Tip
Refer to your chosen adapter to see the settings available.
The author recommends pairing with edgy.nvim for an experience similar to that of GitHub's Copilot Chat:
{
  "folke/edgy.nvim",
  event = "VeryLazy",
  init = function()
    vim.opt.laststatus = 3
    vim.opt.splitkeep = "screen"
  end,
  opts = {
    right = {
      { ft = "codecompanion", title = "Code Companion Chat", size = { width = 0.45 } },
    }
  }
}
The plugin sets the following highlight groups during setup:
- `CodeCompanionTokens` - Virtual text in the chat buffer showing the token count
- `CodeCompanionVirtualText` - All other virtual text in the chat buffer
- `CodeCompanionVirtualTextTools` - Virtual text in the chat buffer for when a tool is running
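These can be overridden in your own configuration. As a minimal sketch, linking the token count virtual text to the built-in Comment highlight group:

vim.api.nvim_set_hl(0, "CodeCompanionTokens", { link = "Comment" })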
The plugin has a number of commands:
- `:CodeCompanion` - Inline code writing and refactoring
- `:CodeCompanionChat` - Open a new chat buffer
- `:CodeCompanionChat <adapter>` - Open a new chat buffer with a specific adapter
- `:CodeCompanionAdd` - Add visually selected code to the current chat buffer
- `:CodeCompanionToggle` - Toggle a chat buffer
- `:CodeCompanionActions` - Open the action palette window
For an optimum workflow, the plugin author recommends the following:
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "ga", "<cmd>CodeCompanionAdd<cr>", { noremap = true, silent = true })
-- Expand `cc` into CodeCompanion in the command line
vim.cmd([[cab cc CodeCompanion]])
Note
For some actions, visual mode allows your selection to be sent directly to the chat buffer or the LLM (in the case of inline code actions).
Note
Please see the RECIPES guide in order to add your own actions to the palette.
The Action Palette, opened via `:CodeCompanionActions`, contains all of the actions and their associated strategies for the plugin. It's the fastest way to start leveraging CodeCompanion. Whether you're in normal or visual mode will affect the options available to you in the palette.
The chat buffer is where you can converse with the LLM, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to the LLM in the form of prompts. These prompts are segmented by H1 headers: `user`, `system` and `assistant`. When a response is received, it is then streamed back into the buffer. The result is that you experience the feel of conversing with your LLM from within Neovim.
When in the chat buffer, there are a number of keymaps available to you:
- `<C-s>` - Save the buffer and trigger a response from the LLM
- `<C-c>` - Close the buffer
- `q` - Cancel the stream from the LLM
- `gc` - Clear the buffer's contents
- `ga` - Add a codeblock
- `gs` - Save the chat to disk
- `gt` - Add a tool to an existing chat
- `]` - Move to the next header
- `[` - Move to the previous header
Chat buffers are not saved to disk by default, but can be by pressing `gs` in the buffer. Saved chats can then be restored via the Action Palette and the Load saved chats action.
If `display.chat.show_settings` is set to `true`, the adapter's model parameters will appear at the very top of the chat buffer and can be changed to tweak the response. You can find more detail about them by moving the cursor over them.
From the Action Palette, the Open Chats action enables users to easily navigate between their open chat buffers. A chat buffer can be deleted (and removed from memory) by pressing `<C-c>`.
[Video demo: inline code]
You can use the plugin to create inline code directly into a Neovim buffer. This can be invoked by using the Action Palette (as above) or from the command line via `:CodeCompanion`. For example:
:CodeCompanion create a table of 5 fruits
:'<,'>CodeCompanion refactor the code to make it more concise
Note
The command can detect if you've made a visual selection and send any code as context to the LLM alongside the filetype of the buffer.
One of the challenges with inline editing is determining how the generative AI's response should be handled in the buffer. If you've prompted the LLM to "create a table of 5 fruits" then you may wish for the response to be placed after the cursor's current position in the buffer. However, if you asked the LLM to "refactor this function" then you'd expect the response to overwrite a visual selection. If this placement isn't specified then the plugin will use generative AI itself to determine if the response should follow any of the placements below:
- after - after the visual selection
- before - before the visual selection
- cursor - one column after the cursor position
- new - in a new buffer
- replace - replacing the visual selection
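Since the placement is inferred from your prompt, you can nudge it by stating it explicitly. An illustrative sketch (plain prompt wording, not special syntax):

:CodeCompanion create a table of 5 fruits and place it in a new buffer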
The strategy comes with a number of helpers which the user can type in the prompt, similar to GitHub Copilot Chat:
- `/doc` to add a documentation comment
- `/optimize` to analyze and improve the running time of the selected code
- `/tests` to create unit tests for the selected code
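As a sketch of how a helper might be invoked from the command line (assuming a visual selection has been made):

:'<,'>CodeCompanion /tests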
[Video demo: tools]
Important
Tools are currently at an alpha stage. I've yet to properly battle test them, so feedback is much appreciated.
As outlined by Andrew Ng in Agentic Design Patterns Part 3, Tool Use, LLMs can act as agents by leveraging external tools. Andrew notes some common examples such as web searching or code execution that have obvious benefits when using LLMs.
In this plugin, tools are simply context that's given to an LLM via a `system` prompt. This gives it knowledge and a defined schema which it can include in its response for the plugin to parse, execute and provide feedback on. Tools can be leveraged by opening up the action palette and choosing the tools option. Or, tools can be added to an existing chat buffer via the `gt` keymap.
More information on how tools work and how you can create your own can be found in the TOOLS guide.
Warning
Workflows may result in the significant consumption of tokens if you're using an external LLM.
As outlined by Andrew Ng, agentic workflows have the ability to dramatically improve the output of an LLM. In fact, it's possible for older models like GPT-3.5 to outperform newer models (using traditional zero-shot inference). Andrew discussed how an agentic workflow can be utilised via multiple prompts that invoke the LLM to self-reflect. Implementing Andrew's advice, the plugin supports this notion via the use of workflows. At various stages of a pre-defined workflow, the plugin will automatically prompt the LLM without any input or triggering required from the user.
Currently, the plugin comes with the following workflows:
- Adding a new feature
- Refactoring code
Of course you can add new workflows by following the RECIPES guide.
Note
These actions are only available in visual mode
As the name suggests, this action provides advice on a visual selection of code and utilises the `chat` strategy. The response from the LLM is streamed into a chat buffer which follows the `display.chat` settings in your configuration.
Taken from the fantastic Wtf.nvim plugin, this action provides advice on how to correct any LSP diagnostics which are present on the visually selected lines. Again, the `send_code = false` value can be set in your config to prevent the code itself being sent to the LLM.
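For example:

require("codecompanion").setup({
  send_code = false, -- Don't send code context to the LLM
})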
The plugin fires the following events during its lifecycle:
- `CodeCompanionRequest` - Fired during the API request. Outputs `data.status` with a value of `started` or `finished`
- `CodeCompanionChatSaved` - Fired after a chat has been saved to disk
- `CodeCompanionChat` - Fired at various points during the chat buffer. Comes with the following attributes:
  - `data.action = hide_buffer` - For when a chat buffer is hidden
  - `data.action = show_buffer` - For when a chat buffer is visible after being hidden
- `CodeCompanionInline` - Fired during the inline API request alongside `CodeCompanionRequest`. Outputs `data.status` with a value of `started` or `finished`
- `CodeCompanionTool` - Fired when a tool is running. Outputs `data.status` with a value of `started` or `success`/`failure`
Events can be hooked into as follows:
local group = vim.api.nvim_create_augroup("CodeCompanionHooks", {})

vim.api.nvim_create_autocmd({ "User" }, {
  pattern = "CodeCompanionInline",
  group = group,
  callback = function(request)
    print(request.data.status) -- outputs "started" or "finished"
  end,
})
You can incorporate a visual indication into your Neovim configuration to show when the plugin is communicating with an LLM. Below are examples for two popular statusline plugins, lualine.nvim and heirline.nvim.
-- lualine.nvim
local M = require("lualine.component"):extend()

M.processing = false
M.spinner_index = 1

local spinner_symbols = {
  "⠋",
  "⠙",
  "⠹",
  "⠸",
  "⠼",
  "⠴",
  "⠦",
  "⠧",
  "⠇",
  "⠏",
}
local spinner_symbols_len = 10

-- Initializer
function M:init(options)
  M.super.init(self, options)

  local group = vim.api.nvim_create_augroup("CodeCompanionHooks", {})

  vim.api.nvim_create_autocmd({ "User" }, {
    pattern = "CodeCompanionRequest",
    group = group,
    callback = function(request)
      self.processing = (request.data.status == "started")
    end,
  })
end

-- Function that runs every time statusline is updated
function M:update_status()
  if self.processing then
    self.spinner_index = (self.spinner_index % spinner_symbols_len) + 1
    return spinner_symbols[self.spinner_index]
  else
    return nil
  end
end

return M
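One way to use the component above (a sketch, assuming you save it to lua/lualine/components/codecompanion.lua on your runtimepath so that lualine can load it by name):

require("lualine").setup({
  sections = {
    lualine_x = { "codecompanion" }, -- the custom spinner component defined above
  },
})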
-- heirline.nvim
local CodeCompanion = {
  static = {
    processing = false,
  },
  update = {
    "User",
    pattern = "CodeCompanionRequest",
    callback = function(self, args)
      self.processing = (args.data.status == "started")
      vim.cmd("redrawstatus")
    end,
  },
  {
    condition = function(self)
      return self.processing
    end,
    provider = " ",
    hl = { fg = "yellow" },
  },
}
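The CodeCompanion table can then be dropped into your statusline like any other heirline component, e.g. (a minimal sketch):

require("heirline").setup({
  statusline = { CodeCompanion },
})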
I am open to contributions but they will be implemented at my discretion. Feel free to open up a discussion before embarking on a big PR and please make sure you've read the CONTRIBUTING.md guide.
- Steven Arcangeli for his genius creation of the chat buffer and his feedback
- Wtf.nvim for the LSP assistant action
- ChatGPT.nvim for the calculation of tokens