
feat: stream text with the author strategy
olimorris committed Feb 6, 2024
1 parent c1a7107 commit 39e98f7
Showing 4 changed files with 99 additions and 74 deletions.
10 changes: 5 additions & 5 deletions README.md
Original file line number Diff line number Diff line change
@@ -18,7 +18,7 @@ Use the <a href="https://platform.openai.com/docs/guides/text-generation/chat-co
</p>

> [!IMPORTANT]
> This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-case. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be minimal.
> This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-case. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be minimal.
<p align="center">
<img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/5e5a5e54-c1d9-4fe2-8ae0-1cfbfdd6cea5" alt="Header" />
@@ -307,16 +307,16 @@ Both of these actions utilise the `chat` strategy. The `Chat` action opens up a

This action enables users to easily navigate between their open chat buffers. A chat buffer may be deleted (and removed from this action) by pressing `<C-q>` when in the chat buffer.

#### Code author

This action utilises the `author` strategy and can be useful for generating code or refactoring a visual selection based on a prompt from the user. It is designed to write code for the filetype of the buffer it is initiated in, or, if run from a terminal prompt, to write commands.

#### Code advisor

As the name suggests, this action provides advice on a visual selection of code and utilises the `advisor` strategy. The response from the API is streamed into a chat buffer which follows the `display.chat` settings in your configuration. If you wish to turn the streaming off, set `display.advisor.stream = false` in your config.

> **Note**: For some users, the sending of any code to an LLM may not be an option. In those instances, you can set `send_code = false` in your config.
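As a minimal sketch of how the two options mentioned above might be combined in a setup call (the exact table layout is an assumption inferred from the option names `display.advisor.stream` and `send_code`, not a verified schema):

```lua
-- Hypothetical configuration sketch; the option paths follow the names
-- referenced in the text above and may differ from the real schema.
require("codecompanion").setup({
  send_code = false, -- never send buffer contents to the LLM
  display = {
    advisor = {
      stream = false, -- show the advisor response all at once
    },
  },
})
```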
#### Code author

This action utilises the `author` strategy and can be useful for generating code or refactoring a visual selection based on a prompt from the user. It is designed to write code for the filetype of the buffer it is initiated in, or, if run from a terminal prompt, to write commands.

#### LSP assistant

Taken from the fantastic [Wtf.nvim](https://github.com/piersolenski/wtf.nvim) plugin, this action provides advice (utilising the `advisor` strategy) on any LSP diagnostics which occur across visually selected lines and how they can be fixed. Again, the `send_code = false` value can be set in your config to only send diagnostic messages to OpenAI.
16 changes: 8 additions & 8 deletions doc/codecompanion.txt
Original file line number Diff line number Diff line change
@@ -322,14 +322,6 @@ chat buffer may be deleted (and removed from this action) by pressing `<C-q>`
when in the chat buffer.


CODE AUTHOR

This action utilises the `author` strategy and can be useful for generating
code or refactoring a visual selection based on a prompt from the user. It is
designed to write code for the filetype of the buffer it is initiated in, or,
if run from a terminal prompt, to write commands.


CODE ADVISOR

As the name suggests, this action provides advice on a visual selection of code
@@ -342,6 +334,14 @@ your config.
**Note**: For some users, the sending of any code to an LLM may not be an
option.
In those instances, you can set `send_code = false` in your config.

CODE AUTHOR

This action utilises the `author` strategy and can be useful for generating
code or refactoring a visual selection based on a prompt from the user. It is
designed to write code for the filetype of the buffer it is initiated in, or,
if run from a terminal prompt, to write commands.


LSP ASSISTANT

Taken from the fantastic Wtf.nvim <https://github.com/piersolenski/wtf.nvim>
17 changes: 7 additions & 10 deletions lua/codecompanion/client.lua
Original file line number Diff line number Diff line change
@@ -242,16 +242,13 @@ function Client:advisor(args, cb)
return self:call(config.options.base_url .. "/v1/chat/completions", args, cb)
end

---@class CodeCompanion.AuthorArgs
---@field model string ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.
---@field input nil|string The input text to use as a starting point for the edit.
---@field instruction string The instruction that tells the model how to edit the prompt.
---@field temperature nil|number Defaults to 1. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
---@field top_p nil|number Defaults to 1. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
---@field n nil|integer Defaults to 1. How many chat completion choices to generate for each input message.
function Client:author(args, cb)
args.stream = false
return self:call(config.options.base_url .. "/v1/chat/completions", args, cb)
---@param args CodeCompanion.AuthorArgs
---@param bufnr integer
---@param cb fun(err: nil|string, chunk: nil|table, done: nil|boolean) Will be called multiple times until done is true
---@return nil
function Client:author(args, bufnr, cb)
args.stream = true
return self:stream_call(config.options.base_url .. "/v1/chat/completions", args, bufnr, cb)
end

return Client
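The `cb(err, chunk, done)` contract used by the streaming client can be illustrated with a small self-contained sketch that feeds fake chunks to a consumer. The chunk shape mirrors the OpenAI delta format handled in `author.lua` below; `fake_stream` itself is invented for illustration and is not real plugin API:

```lua
-- Illustrative sketch: the client invokes cb repeatedly with partial
-- chunks, then once more with done = true. `fake_stream` stands in for
-- Client:stream_call and is not part of the plugin.
local function fake_stream(chunks, cb)
  for _, chunk in ipairs(chunks) do
    cb(nil, chunk, false)
  end
  cb(nil, nil, true)
end

local output = {}
fake_stream({
  { choices = { { delta = { role = "assistant" } } } },
  { choices = { { delta = { content = "local x" } } } },
  { choices = { { delta = { content = " = 1" } } } },
}, function(err, chunk, done)
  if err then
    return
  end
  if chunk then
    local delta = chunk.choices[1].delta
    -- Skip the initial role-only chunk; accumulate content deltas.
    if delta.content and not delta.role then
      table.insert(output, delta.content)
    end
  end
  if done then
    print(table.concat(output, "")) -- prints "local x = 1"
  end
end)
```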
130 changes: 79 additions & 51 deletions lua/codecompanion/strategy/author.lua
Original file line number Diff line number Diff line change
@@ -1,15 +1,17 @@
local config = require("codecompanion.config")
local log = require("codecompanion.utils.log")
local utils = require("codecompanion.utils.util")
local api = vim.api

---@class CodeCompanion.Author
---@field settings table
---@field context table
---@field client CodeCompanion.Client
---@field opts table
---@field prompts table
local Author = {}

---@class CodeCompanion.AuthorArgs
---@field settings table
---@field context table
---@field client CodeCompanion.Client
---@field opts table
@@ -20,22 +22,17 @@ local Author = {}
function Author.new(opts)
log:trace("Initiating Author")

local self = setmetatable({
return setmetatable({
settings = config.options.ai_settings.author,
context = opts.context,
client = opts.client,
opts = opts.opts,
prompts = opts.prompts,
}, { __index = Author })
return self
end

---@param user_input string|nil
function Author:execute(user_input)
local conversation = {
model = self.opts.model,
messages = {},
}

local formatted_messages = {}

for _, prompt in ipairs(self.prompts) do
@@ -59,63 +56,94 @@ function Author:execute(user_input)
})
end

conversation.messages = formatted_messages

if config.options.send_code and self.opts.send_visual_selection and self.context.is_visual then
table.insert(conversation.messages, 2, {
table.insert(formatted_messages, 2, {
role = "user",
content = "For context, this is the code I will ask you to help me with:\n"
.. table.concat(self.context.lines, "\n"),
})
end

vim.bo[self.context.bufnr].modifiable = false
self.client:author(conversation, function(err, data)
if err then
vim.bo[self.context.bufnr].modifiable = true
log:error("Author Error: %s", err)
vim.notify(err, vim.log.levels.ERROR)
end
-- Clear any visual selection
if self.context.is_visual then
api.nvim_buf_set_text(
self.context.bufnr,
self.context.start_line - 1,
self.context.start_col - 1,
self.context.end_line - 1,
self.context.end_col,
{ "" }
)
api.nvim_win_set_cursor(self.context.winid, { self.context.start_line, self.context.start_col - 1 })
end

local response = data.choices[1].message.content
local cursor_pos = api.nvim_win_get_cursor(self.context.winid)
local pos = {
line = cursor_pos[1],
col = cursor_pos[2],
}

if string.find(response, "^%[Error%]") == 1 then
vim.bo[self.context.bufnr].modifiable = true
return require("codecompanion.utils.ui").display(
config.options.display,
response,
conversation.messages,
self.client
)
end
local function stream_buffer_text(text)
local line = pos.line - 1
local col = pos.col

vim.bo[self.context.bufnr].modifiable = true
local output = vim.split(response, "\n")
local index = 1
while index <= #text do
local newline = text:find("\n", index) or (#text + 1)
local substring = text:sub(index, newline - 1)

if self.context.buftype == "terminal" then
vim.api.nvim_put(output, "", false, true)
return
if #substring > 0 then
api.nvim_buf_set_text(self.context.bufnr, line, col, line, col, { substring })
col = col + #substring
end

if newline <= #text then
api.nvim_buf_set_lines(self.context.bufnr, line + 1, line + 1, false, { "" })
line = line + 1
col = 0
end

index = newline + 1
end

if self.context.is_visual and (self.opts.modes and utils.contains(self.opts.modes, "v")) then
vim.api.nvim_buf_set_text(
self.context.bufnr,
self.context.start_line - 1,
self.context.start_col - 1,
self.context.end_line - 1,
self.context.end_col,
output
)
else
vim.api.nvim_buf_set_lines(
self.context.bufnr,
self.context.cursor_pos[1] - 1,
self.context.cursor_pos[1] - 1,
true,
output
)
pos.line = line + 1
pos.col = col
api.nvim_win_set_cursor(self.context.winid, { pos.line, pos.col })
end

local output = {}
self.client:stream_chat(
vim.tbl_extend("keep", self.settings, {
messages = formatted_messages,
}),
self.context.bufnr,
function(err, chunk, done)
if err then
vim.notify("Error: " .. err, vim.log.levels.ERROR)
return
end

if chunk then
log:debug("chat chunk: %s", chunk)

local delta = chunk.choices[1].delta
if delta.content and not delta.role then
if self.context.buftype == "terminal" then
table.insert(output, delta.content)
else
stream_buffer_text(delta.content)
end
end
end

if done then
if self.context.buftype == "terminal" then
log:debug("terminal: %s", output)
api.nvim_put({ table.concat(output, "") }, "", false, true)
end
end
end
end)
)
end

function Author:start()
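The core of the new `stream_buffer_text` in this diff — splitting each incoming chunk on newlines while tracking a line/column cursor — can be exercised outside Neovim with a plain table standing in for buffer lines. This is a sketch for illustration only; the real function writes through `nvim_buf_set_text` and `nvim_buf_set_lines` instead:

```lua
-- Sketch of the incremental insert logic from stream_buffer_text,
-- operating on a plain table of lines instead of a Neovim buffer.
local lines = { "" }
local line, col = 1, 0

local function stream_text(text)
  local index = 1
  while index <= #text do
    -- Find the next newline, or run to the end of the chunk.
    local newline = text:find("\n", index) or (#text + 1)
    local substring = text:sub(index, newline - 1)

    if #substring > 0 then
      -- Splice the fragment into the current line at the cursor column.
      local cur = lines[line]
      lines[line] = cur:sub(1, col) .. substring .. cur:sub(col + 1)
      col = col + #substring
    end

    if newline <= #text then
      -- A newline in the chunk opens a fresh line below the cursor.
      table.insert(lines, line + 1, "")
      line = line + 1
      col = 0
    end

    index = newline + 1
  end
end

-- Chunks may split anywhere, including mid-line and across newlines.
stream_text("local x")
stream_text(" = 1\nprint(x)")
-- lines is now { "local x = 1", "print(x)" }
```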
