feat: replace the advisor strategy with chat
olimorris committed Feb 15, 2024
1 parent 0ba2961 commit 7e4ce1b
Showing 8 changed files with 85 additions and 250 deletions.
43 changes: 12 additions & 31 deletions README.md
@@ -28,7 +28,7 @@ Use the <a href="https://platform.openai.com/docs/guides/text-generation/chat-co

## :sparkles: Features

- :speech_balloon: Chat with the OpenAI APIs via a Neovim buffer
- :speech_balloon: Chat with the OpenAI APIs in a Neovim buffer
- :sparkles: Built-in actions for specific language prompts, LSP error fixes and inline code generation
- :building_construction: Create your own custom actions for Neovim which hook into OpenAI
- :floppy_disk: Save and restore your chats
@@ -39,9 +39,8 @@ Use the <a href="https://platform.openai.com/docs/guides/text-generation/chat-co
## :camera_flash: Screenshots

<div align="center">
<p><strong>Chat buffer</strong><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/a19c8397-a1e2-44df-98be-8a1b4d307ea7" alt="chat buffer" /></p>
<p><strong>Chat</strong><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/a19c8397-a1e2-44df-98be-8a1b4d307ea7" alt="chat" /></p>
<p><strong>Inline code</strong><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/7e1f2e16-7b6f-453e-b3b0-650f3ac0fc0a" alt="Inline code" /></p>
<p><strong>Code advisor</strong><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/889df5ee-048f-4a13-b2b5-4d999a2de600" alt="code advisor" /><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/6bdeac30-c2a0-4213-be0e-a27a7695a3f4" alt="code advisor" /></p>
</div>

<!-- panvimdoc-ignore-end -->
@@ -101,7 +100,7 @@ require("codecompanion").setup({
ai_settings = {
-- Default settings for the Completions API
-- See https://platform.openai.com/docs/api-reference/chat/create
advisor = {
chat = {
model = "gpt-4-0125-preview",
temperature = 1,
top_p = 1,
@@ -123,17 +122,6 @@ require("codecompanion").setup({
logit_bias = nil,
user = nil,
},
chat = {
model = "gpt-4-0125-preview",
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = nil,
user = nil,
},
},
saved_chats = {
save_dir = vim.fn.stdpath("data") .. "/codecompanion/saved_chats", -- Path to save chats to
@@ -143,9 +131,6 @@ require("codecompanion").setup({
width = 95,
height = 10,
},
advisor = {
stream = true, -- Stream the output like a chat buffer?
},
chat = { -- Options for the chat strategy
type = "float", -- float|buffer
show_settings = true, -- Show the model settings in the chat buffer?
@@ -183,7 +168,7 @@ require("codecompanion").setup({
["["] = "keymaps.previous", -- Move to the previous header in the chat
},
log_level = "ERROR", -- TRACE|DEBUG|ERROR
send_code = true, -- Send code context to the API? Disable to prevent leaking code to OpenAI
send_code = true, -- Send code context to OpenAI? Disable to prevent leaking code outside of Neovim
silence_notifications = false, -- Silence notifications for actions like saving chats?
use_default_actions = true, -- Use the default actions in the action palette?
})
@@ -213,7 +198,7 @@ The author recommends pairing with [edgy.nvim](https://github.com/folke/edgy.nvi

### Highlight Groups

The plugin sets a number of highlights during setup:
The plugin sets the following highlight groups during setup:

- `CodeCompanionTokens` - Virtual text showing the token count when in a chat buffer
- `CodeCompanionVirtualText` - All other virtual text in the chat buffer
@@ -235,7 +220,7 @@ vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", {
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
```

> **Note**: For some actions, visual mode allows your selection to be sent directly to the chat buffer or the API itself (in the case of `inline code` actions).
> **Note**: For some actions, visual mode allows your selection to be sent directly to the chat buffer or the API itself (in the case of _inline code_ actions).
### The Action Palette

@@ -257,15 +242,15 @@ require("codecompanion").setup({
})
```

> **Note**: We describe how to do this in detail within the `RECIPES.md` file
> **Note**: I will describe how to do this in detail within a `RECIPES.md` file in the near future.

Or, if you wish to turn off the default actions, set `use_default_actions = false` in your config.

### The Chat Buffer

<p><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/84d5e03a-0b48-4ffb-9ca5-e299d41171bd" alt="chat buffer" /></p>

The chat buffer is where you can converse with OpenAI API, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to the API, in the form of prompts. These prompts are segmented by H1 headers: `user` and `assistant` (see OpenAI's [Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) for more on this). When a response is received, it is then streamed back into the buffer. The result is that you experience the feel of conversing with ChatGPT, from within Neovim.
The chat buffer is where you can converse with the OpenAI API, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to OpenAI, in the form of prompts. These prompts are segmented by H1 headers: `user` and `assistant` (see OpenAI's [Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) for more on this). When a response is received, it is streamed back into the buffer, giving you the feel of conversing with ChatGPT from within Neovim.
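
The H1 segmentation described above can be sketched as follows. This is an illustrative layout only, not the buffer's exact rendering:

```markdown
# user

Explain what the `vim.schedule` function does.

# assistant

`vim.schedule` queues a callback to run safely on Neovim's main event loop...

# user

Could you show a short example of using it?
```

Each H1 section becomes one message in the Chat Completions request, with the header supplying the `role` and the section body supplying the `content`.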

#### Keymaps

@@ -282,19 +267,15 @@ When in the chat buffer, there are a number of keymaps available to you (which can be changed in the config):

#### Saved Chats

Chat Buffers are not automatically saved, but can be by pressing `gs` in the buffer. Saved chats can then be restored via the Action Palette and the _Saved chats_ action.
Chat Buffers are not saved to disk by default, but can be by pressing `gs` in the buffer. Saved chats can then be restored via the Action Palette and the _Saved chats_ action.

#### Settings

If `display.chat.show_settings` is set to `true`, at the very top of the chat buffer will be the OpenAI parameters which can be changed to affect the API's response back to you. This enables fine-tuning and parameter tweaking throughout the chat. You can find more detail about them by moving the cursor over them or referring to the [Chat Completions reference guide](https://platform.openai.com/docs/api-reference/chat) if you're using OpenAI.
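
As a sketch of how that tweaking might be wired into your config instead, the snippet below overrides a couple of the `ai_settings.chat` defaults shown earlier. The specific values are illustrative, and whether nested overrides are deep-merged with the defaults is an assumption worth verifying:

```lua
require("codecompanion").setup({
  ai_settings = {
    chat = {
      model = "gpt-4-0125-preview",
      temperature = 0.2, -- lower temperature for more deterministic replies
      max_tokens = 1024, -- cap the length of each response
    },
  },
  display = {
    chat = {
      show_settings = true, -- expose these parameters at the top of the chat buffer
    },
  },
})
```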

### In-Built Actions

The plugin comes with a number of [in-built actions](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua) which aim to improve your Neovim workflow. Actions make use of strategies which are abstractions built around Neovim and OpenAI functionality. Before we dive in to the actions, it's worth explaining what each of the strategies do:

- `chat` - A strategy for opening up a chat buffer allowing the user to converse directly with OpenAI
- `inline` - A strategy for allowing OpenAI responses to be written inline to a Neovim buffer
- `advisor` - A strategy for providing specific advice on a selection of code via a chat buffer
The plugin comes with a number of [in-built actions](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua) which aim to improve your Neovim workflow. Actions make use of either a _chat_ or an _inline_ strategy, which are abstractions built around Neovim and OpenAI. The chat strategy opens up a chat buffer, whilst the inline strategy writes its output directly into the Neovim buffer.
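
As an illustration of the chat strategy in use, a custom action might be declared as below. The shape mirrors the actions in `lua/codecompanion/actions.lua` from this commit; the action itself, its prompt wording, and the `context` field used are hypothetical:

```lua
{
  name = "Explain selection", -- hypothetical custom action
  strategy = "chat", -- stream the response into a chat buffer
  description = "Explain the visually selected code",
  opts = {
    modes = { "v" }, -- only offer this action from visual mode
    auto_submit = true, -- submit the prompt immediately
    send_visual_selection = true, -- include the selection as context
  },
  prompts = {
    {
      role = "system",
      content = "You are an expert Neovim developer. Keep explanations concise.",
    },
    {
      role = "user",
      contains_code = true,
      content = function(context)
        -- build the user prompt from the current buffer's context
        return "Please explain the selected " .. context.filetype .. " code."
      end,
    },
  },
}
```

Swapping `strategy = "chat"` for `"inline"` would instead write the model's output straight into the current buffer.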

#### Chat and Chat as

@@ -320,13 +301,13 @@ The strategy comes with a number of helpers which the user can type in the promp
#### Code advisor

As the name suggests, this action provides advice on a visual selection of code and utilises the `advisor` strategy. The response from the API is streamed into a chat buffer which follows the `display.chat` settings in your configuration. If you wish to turn the streaming off, set `display.advisor.stream = false` in your config.
As the name suggests, this action provides advice on a visual selection of code and utilises the `chat` strategy. The response from the API is streamed into a chat buffer which follows the `display.chat` settings in your configuration.

> **Note**: For some users, the sending of any code to an LLM may not be an option. In those instances, you can set `send_code = false` in your config.
#### LSP assistant

Taken from the fantastic [Wtf.nvim](https://github.com/piersolenski/wtf.nvim) plugin, this action provides advice (utilising the `advisor` strategy) on any LSP diagnostics which occur across visually selected lines and how they can be fixed. Again, the `send_code = false` value can be set in your config to only send diagnostic messages to OpenAI.
Taken from the fantastic [Wtf.nvim](https://github.com/piersolenski/wtf.nvim) plugin, this action provides advice on any LSP diagnostics which occur across visually selected lines and how they can be fixed. Again, the `send_code = false` value can be set in your config to only send diagnostic messages to OpenAI.

## :rainbow: Helpers

75 changes: 28 additions & 47 deletions doc/codecompanion.txt
Expand Up @@ -12,7 +12,7 @@ Table of Contents *codecompanion-table-of-contents*

FEATURES *codecompanion-features*

- Chat with the OpenAI APIs via a Neovim buffer
- Chat with the OpenAI APIs in a Neovim buffer
- Built-in actions for specific language prompts, LSP error fixes and inline code generation
- Create your own custom actions for Neovim which hook into OpenAI
- Save and restore your chats
@@ -43,7 +43,6 @@ INSTALLATION                              *codecompanion-installation*
opts = {},
},
},
cmd = { "CodeCompanionToggle", "CodeCompanionActions", "CodeCompanionChat" },
config = true
}

@@ -77,7 +76,7 @@ Click to see the default configuration ~
ai_settings = {
-- Default settings for the Completions API
-- See https://platform.openai.com/docs/api-reference/chat/create
advisor = {
chat = {
model = "gpt-4-0125-preview",
temperature = 1,
top_p = 1,
@@ -99,17 +98,6 @@ Click to see the default configuration ~
logit_bias = nil,
user = nil,
},
chat = {
model = "gpt-4-0125-preview",
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = nil,
user = nil,
},
},
saved_chats = {
save_dir = vim.fn.stdpath("data") .. "/codecompanion/saved_chats", -- Path to save chats to
@@ -119,9 +107,6 @@ Click to see the default configuration ~
width = 95,
height = 10,
},
advisor = {
stream = true, -- Stream the output like a chat buffer?
},
chat = { -- Options for the chat strategy
type = "float", -- float|buffer
show_settings = true, -- Show the model settings in the chat buffer?
@@ -159,7 +144,7 @@ Click to see the default configuration ~
["["] = "keymaps.previous", -- Move to the previous header in the chat
},
log_level = "ERROR", -- TRACE|DEBUG|ERROR
send_code = true, -- Send code context to the API? Disable to prevent leaking code to OpenAI
send_code = true, -- Send code context to OpenAI? Disable to prevent leaking code outside of Neovim
silence_notifications = false, -- Silence notifications for actions like saving chats?
use_default_actions = true, -- Use the default actions in the action palette?
})
@@ -190,7 +175,7 @@ The author recommends pairing with edgy.nvim

HIGHLIGHT GROUPS ~

The plugin sets a number of highlights during setup:
The plugin sets the following highlight groups during setup:

- `CodeCompanionTokens` - Virtual text showing the token count when in a chat buffer
- `CodeCompanionVirtualText` - All other virtual text in the chat buffer
Expand All @@ -215,7 +200,7 @@ For an optimum workflow, the plugin author recommendeds the following keymaps:


**Note** For some actions, visual mode allows your selection to be sent directly
to the chat buffer or the API itself (in the case of `inline code` actions).
to the chat buffer or the API itself (in the case of _inline code_ actions).

THE ACTION PALETTE ~

Expand All @@ -239,32 +224,33 @@ You may add your own actions into the palette by altering your configuration:
<


**Note**We describe how to do this in detail within the `RECIPES.md` file
**Note** I will describe how to do this in detail within a `RECIPES.md` file in
the near future.
Or, if you wish to turn off the default actions, set `use_default_actions =
false` in your config.


THE CHAT BUFFER ~

The chat buffer is where you can converse with your GenAI API, directly from
The chat buffer is where you can converse with the OpenAI API, directly from
Neovim. It behaves as a regular markdown buffer with some clever additions.
When the buffer is written (or "saved"), autocmds trigger the sending of its
content to the API, in the form of prompts. These prompts are segmented by H1
content to OpenAI, in the form of prompts. These prompts are segmented by H1
headers: `user` and `assistant` (see OpenAI’s Chat Completions API
<https://platform.openai.com/docs/guides/text-generation/chat-completions-api>
for more on this). When a response is received, it is then streamed back into
the buffer. The result is that you experience the feel of conversing with
GenAI, from within Neovim.
ChatGPT, from within Neovim.


KEYMAPS

When in the chat buffer, there are a number of keymaps available to you (which
can be changed in the config):

- `<C-s>` - Save the buffer and trigger a response from the GenAI
- `<C-s>` - Save the buffer and trigger a response from the OpenAI API
- `<C-c>` - Close the buffer
- `q` - Cancel streaming from the GenAI
- `q` - Cancel streaming from OpenAI
- `gc` - Clear the buffer’s contents
- `ga` - Add a codeblock
- `gs` - Save the chat
@@ -274,15 +260,15 @@ can be changed in the config):

SAVED CHATS

Chat Buffers are not automatically saved, but can be by pressing `gs` in the
buffer. Saved chats can then be restored via the Action Palette and the _Saved
chats_ action.
Chat Buffers are not saved to disk by default, but can be by pressing `gs` in
the buffer. Saved chats can then be restored via the Action Palette and the
_Saved chats_ action.


SETTINGS

If `display.chat.show_settings` is set to `true`, at the very top of the chat
buffer will be the GenAI parameters which can be changed to affect the API’s
buffer will be the OpenAI parameters which can be changed to affect the API’s
response back to you. This enables fine-tuning and parameter tweaking
throughout the chat. You can find more detail about them by moving the cursor
over them or referring to the Chat Completions reference guide
@@ -293,13 +279,10 @@ IN-BUILT ACTIONS ~

The plugin comes with a number of in-built actions
<https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua>
which aim to improve your Neovim workflow. Actions make use of strategies which
are abstractions built around Neovim and OpenAI functionality. Before we dive
in to the actions, it’s worth explaining what each of the strategies do:

- `chat` - A strategy for opening up a chat buffer allowing the user to converse directly with OpenAI
- `inline` - A strategy for allowing OpenAI responses to be written inline to a Neovim buffer
- `advisor` - A strategy for providing specific advice on a selection of code via a chat buffer
which aim to improve your Neovim workflow. Actions make use of either a _chat_
or an _inline_ strategy, which are abstractions built around Neovim and OpenAI.
The chat strategy opens up a chat buffer, whilst the inline strategy writes its
output directly into the Neovim buffer.


CHAT AND CHAT AS
@@ -343,10 +326,8 @@ prompt, similar to GitHub Copilot Chat
CODE ADVISOR

As the name suggests, this action provides advice on a visual selection of code
and utilises the `advisor` strategy. The response from the API is streamed into
a chat buffer which follows the `display.chat` settings in your configuration.
If you wish to turn the streaming off, set `display.advisor.stream = false` in
your config.
and utilises the `chat` strategy. The response from the API is streamed into a
chat buffer which follows the `display.chat` settings in your configuration.


**Note** For some users, the sending of any code to an LLM may not be an option.
@@ -355,10 +336,10 @@ your config.
LSP ASSISTANT

Taken from the fantastic Wtf.nvim <https://github.com/piersolenski/wtf.nvim>
plugin, this action provides advice (utilising the `advisor` strategy) on any
LSP diagnostics which occur across visually selected lines and how they can be
fixed. Again, the `send_code = false` value can be set in your config to only
send diagnostic messages to OpenAI.
plugin, this action provides advice on any LSP diagnostics which occur across
visually selected lines and how they can be fixed. Again, the `send_code =
false` value can be set in your config to only send diagnostic messages to
OpenAI.


HELPERS *codecompanion-helpers*
@@ -397,10 +378,10 @@ HEIRLINE.NVIM ~
If you use the fantastic Heirline.nvim
<https://github.com/rebelot/heirline.nvim> plugin, consider the following
snippet to display an icon in the statusline whilst CodeCompanion is speaking
to a GenAI model:
to OpenAI:

>lua
local GenAI = {
local OpenAI = {
static = {
processing = false,
},
17 changes: 13 additions & 4 deletions lua/codecompanion/actions.lua
@@ -387,10 +387,11 @@ M.static.actions = {
},
{
name = "Code advisor",
strategy = "advisor",
strategy = "chat",
description = "Get advice on the code you've selected",
opts = {
modes = { "v" },
auto_submit = true,
user_prompt = true,
send_visual_selection = true,
},
@@ -403,14 +404,22 @@ M.static.actions = {
.. " developer. I will ask you specific questions and I want you to return concise explanations and codeblock examples."
end,
},
{
role = "user",
contains_code = true,
content = function(context)
return send_code(context)
end,
},
},
},
{
name = "LSP assistant",
strategy = "advisor",
strategy = "chat",
description = "Get help from OpenAI to fix LSP diagnostics",
opts = {
modes = { "v" },
auto_submit = true, -- Automatically submit the chat
user_prompt = false, -- Prompt the user for their own input
send_visual_selection = false, -- No need to send the visual selection as we do this in prompt 3
},
@@ -465,9 +474,9 @@ M.static.actions = {
},
},
{
name = "Load chats ...",
name = "Load saved chats ...",
strategy = "saved_chats",
description = "Load your previous chats",
description = "Load your previously saved chats",
condition = function()
local saved_chats = require("codecompanion.strategy.saved_chats")
return saved_chats:has_chats()
