From eec6e6c5ee7ae04bd07e67c62123377ba0fb9519 Mon Sep 17 00:00:00 2001
From: Oli Morris
Date: Wed, 11 Sep 2024 17:48:37 +0100
Subject: [PATCH] docs: update README.md

---
 README.md             | 27 ++++++++++++++-----------
 doc/codecompanion.txt | 46 +++++++++++++++++++++++++------------------
 2 files changed, 42 insertions(+), 31 deletions(-)

diff --git a/README.md b/README.md
index 254ae5b6..35bb12f4 100644
--- a/README.md
+++ b/README.md
@@ -116,7 +116,7 @@ EOF
 ## :rocket: Quickstart
 
 > [!NOTE]
-> Okay, okay...it's not quite a quickstart as you'll need to configure an [adapter](#gear-configuration) first.
+> Okay, okay...it's not quite a quickstart as you'll need to configure an [adapter](#electric_plug-adapters) first.
 
 **Chat Buffer**
@@ -238,13 +238,14 @@ The plugin also utilises objects called Strategies. These are the different ways
 The plugin allows you to specify adapters for each strategy and also for each
 [pre-defined prompt](#clipboard-pre-defined-prompts).
 
-
 ### :hammer_and_wrench: Defaults
 
 > [!NOTE]
 > You only need to call the `setup` function if you wish to change any of the config defaults.
+
+<details>
+  <summary>Click to see the default configuration</summary>
@@ -924,8 +925,6 @@ When given a task:
 
-### :building_construction: Common Changes to the Defaults
-
 **Changing the System Prompt**
 
 The default system prompt has been carefully curated to deliver responses which are similar to GitHub Copilot Chat. That is, terse, professional and with expertise in coding. However, if you'd like to change the default system prompt, you can change the `opts.system_prompt` table in the config. You can also set it as a function which can receive the current chat buffer's adapter as a parameter, giving you the option of setting system prompts that are model specific:
@@ -992,7 +991,7 @@ In the example above, we're using the base of the Anthropic adapter but changing
 
 **Setting an API Key Using a Command**
 
-Having API keys in plain text in your shell is not always safe. Thanks to [this PR](https://github.com/olimorris/codecompanion.nvim/pull/24), you can run commands from within your config. In the example below, we're using the 1Password CLI to read an OpenAI credential.
+Having API keys in plain text in your shell is not always safe. Thanks to [this PR](https://github.com/olimorris/codecompanion.nvim/pull/24), you can run commands from within your config by prefixing them with `cmd:`. In the example below, we're using the 1Password CLI to read an OpenAI credential.
 
 ```lua
 require("codecompanion").setup({
@@ -1010,7 +1009,7 @@ require("codecompanion").setup({
 
 **Using Ollama Remotely**
 
-To use Ollama remotely, simply change the URL in the `env` table and set an API key:
+To use Ollama remotely, change the URL in the `env` table, set an API key and pass it via an "Authorization" header:
 
 ```lua
 require("codecompanion").setup({
@@ -1036,6 +1035,8 @@ require("codecompanion").setup({
 
 **Connecting via a Proxy**
 
+You can also connect via a Proxy:
+
 ```lua
 require("codecompanion").setup({
   adapters = {
@@ -1049,6 +1050,8 @@ require("codecompanion").setup({
 
 **Changing an Adapter's Default Model**
 
+A common ask is to change an adapter's default model. This can be done by altering the `schema.model.default` table:
+
 ```lua
 require("codecompanion").setup({
   adapters = {
@@ -1074,7 +1077,7 @@ require("codecompanion").setup({
   adapters = {
     llama3 = function()
       return require("codecompanion.adapters").extend("ollama", {
-        name = "llama3", -- Ensure this adapter is differentiated from Ollama
+        name = "llama3", -- Give this adapter a different name to differentiate it from the default ollama adapter
         schema = {
           model = {
             default = "llama3:latest",
@@ -1108,7 +1111,7 @@ The look and feel of the chat buffer can be customised as per the `display.chat`
 
 When in the chat buffer, there are a number of keymaps available to you:
 
-- `?` - Bring up the options menu
+- `?` - Bring up the menu that lists the keymaps and commands
 - `<CR>`|`<C-s>` - Send the buffer to the LLM
 - `<C-c>` - Close the buffer
 - `q` - Cancel the request from the LLM
@@ -1128,17 +1131,17 @@ You can display your selected adapter's schema at the top of the buffer, if `dis
 
 **Slash Commands**
 
-Slash Commands allow you to easily share additional context with your LLM from the chat buffer. Some of the Slash Commands allow to choose the underlying provider:
+As outlined in the [Quickstart](#rocket-quickstart) section, Slash Commands allow you to easily share additional context with your LLM from the chat buffer. Some of the Slash Commands allow you to change the default provider:
 
-- `/buffer` - Has a `default` provider (which leverages `vim.ui.select`), `telescope` and `fzf_lua`
-- `/files` - Has `telescope`, `mini_pick` and `fzf_lua`
+- `/buffer` - Has a `default` provider (which leverages `vim.ui.select`) alongside `telescope` and `fzf_lua` providers
+- `/files` - Has `telescope`, `mini_pick` and `fzf_lua` providers
 
 Please refer to [the config](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/config.lua) to see how to change the default provider.
 
 ### :pencil2: Inline Assistant
 
 > [!NOTE]
-> If `send_code = false` in the config then this will take precedent and no code will be sent to the LLM
+> If you've set `opts.send_code = false` in your config then the plugin will endeavour to ensure no code is sent to the LLM.
 
 One of the challenges with inline editing is determining how the LLM's response should be handled in the buffer. If you've prompted the LLM to _"create a table of 5 common text editors"_ then you may wish for the response to be placed at the cursor's position in the current buffer. However, if you asked the LLM to _"refactor this function"_ then you'd expect the response to _replace_ a visual selection. The plugin will use the inline LLM you've specified in your config to determine if the response should...
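The README hunks above describe setting `opts.system_prompt` as a function that receives the current chat buffer's adapter, but the diff elides the accompanying code block. The sketch below illustrates what that recipe might look like; it assumes the function receives the adapter object and returns the prompt string, and it reuses the `schema.model.default` field this patch shows on adapters — neither detail is confirmed by the diff itself.

```lua
require("codecompanion").setup({
  opts = {
    -- Sketch only: assumes `system_prompt` may be a function that receives
    -- the chat buffer's adapter and returns the system prompt as a string.
    system_prompt = function(adapter)
      -- Branch on the adapter's default model to make the prompt model-specific
      if adapter.schema.model.default == "llama3:latest" then
        return "You are a terse, professional coding assistant running on a local model."
      end
      return "You are a terse, professional coding assistant."
    end,
  },
})
```
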
diff --git a/doc/codecompanion.txt b/doc/codecompanion.txt
index 12a123ba..bb43510e 100644
--- a/doc/codecompanion.txt
+++ b/doc/codecompanion.txt
@@ -1,4 +1,4 @@
-*codecompanion.txt*          For NVIM v0.9.2         Last change: 2024 September 10
+*codecompanion.txt*          For NVIM v0.9.2         Last change: 2024 September 11
 
 ==============================================================================
 Table of Contents                            *codecompanion-table-of-contents*
@@ -219,16 +219,19 @@ The plugin allows you to specify adapters for each strategy and also for each
 |codecompanion-pre-defined-prompt|.
 
-COMMON CHANGES TO THE DEFAULTS ~
+DEFAULTS ~
+
+  [!NOTE] You only need to call the `setup` function if you wish to change
+  any of the config defaults.
 
 **Changing the System Prompt**
 
 The default system prompt has been carefully curated to deliver responses
 which are similar to GitHub Copilot Chat. That is, terse, professional and with
-expertise in development. However, if you’d like to change the default system
-prompt, you can change the `opts.system_prompt` key in the config. You can also
-set it as a function which can receive the current chat buffer’s adapter as a
-parameter, giving you the option of setting system prompts that are model
+expertise in coding. However, if you’d like to change the default system
+prompt, you can change the `opts.system_prompt` table in the config. You can
+also set it as a function which can receive the current chat buffer’s adapter
+as a parameter, giving you the option of setting system prompts that are model
 specific:
 
 >lua
     require("codecompanion").setup({
@@ -301,8 +304,8 @@ changing the name of the default API key which it uses.
 
 Having API keys in plain text in your shell is not always safe. Thanks to this
 PR <https://github.com/olimorris/codecompanion.nvim/pull/24>, you can run
-commands from within your config. In the example below, we’re using the
-1Password CLI to read an OpenAI credential.
+commands from within your config by prefixing them with `cmd:`. In the example
+below, we’re using the 1Password CLI to read an OpenAI credential.
 
 >lua
     require("codecompanion").setup({
@@ -320,8 +323,8 @@ commands from within your config. In the example below, we’re using the
 
 **Using Ollama Remotely**
 
-To use Ollama remotely, simply change the URL in the `env` table and set an API
-key:
+To use Ollama remotely, change the URL in the `env` table, set an API key and
+pass it via an "Authorization" header:
 
 >lua
     require("codecompanion").setup({
@@ -347,6 +350,8 @@ key:
 
 **Connecting via a Proxy**
 
+You can also connect via a Proxy:
+
 >lua
     require("codecompanion").setup({
       adapters = {
@@ -360,6 +365,9 @@ key:
 
 **Changing an Adapter’s Default Model**
 
+A common ask is to change an adapter’s default model. This can be done by
+altering the `schema.model.default` table:
+
 >lua
     require("codecompanion").setup({
       adapters = {
@@ -386,7 +394,7 @@ adapter, these sit within a schema table and can be configured during setup:
       adapters = {
         llama3 = function()
           return require("codecompanion.adapters").extend("ollama", {
-            name = "llama3", -- Ensure this adapter is differentiated from Ollama
+            name = "llama3", -- Give this adapter a different name to differentiate it from the default ollama adapter
             schema = {
               model = {
                 default = "llama3:latest",
@@ -438,7 +446,7 @@ referenced in the chat buffer.
 
 When in the chat buffer, there are a number of keymaps available to you:
 
-- `?` - Bring up the options menu
+- `?` - Bring up the menu that lists the keymaps and commands
 - `<CR>`|`<C-s>` - Send the buffer to the LLM
 - `<C-c>` - Close the buffer
 - `q` - Cancel the request from the LLM
@@ -460,12 +468,12 @@ response from the LLM.
 
 **Slash Commands**
 
-Slash Commands allow you to easily share additional context with your LLM from
-the chat buffer. Some of the Slash Commands allow to choose the underlying
-provider:
+As outlined in the |codecompanion-quickstart| section, Slash Commands allow you
+to easily share additional context with your LLM from the chat buffer. Some of
+the Slash Commands allow you to change the default provider:
 
-- `/buffer` - Has a `default` provider (which leverages `vim.ui.select`), `telescope` and `fzf_lua`
-- `/files` - Has `telescope`, `mini_pick` and `fzf_lua`
+- `/buffer` - Has a `default` provider (which leverages `vim.ui.select`) alongside `telescope` and `fzf_lua` providers
+- `/files` - Has `telescope`, `mini_pick` and `fzf_lua` providers
 
 Please refer to the config
 <https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/config.lua>
@@ -475,8 +483,8 @@ to see how to change the default provider.
 
 INLINE ASSISTANT ~
 
-  [!NOTE] If `send_code = false` in the config then this will take precedent and
-  no code will be sent to the LLM
+  [!NOTE] If you’ve set `opts.send_code = false` in your config then the plugin
+  will endeavour to ensure no code is sent to the LLM.
 One of the challenges with inline editing is determining how the LLM’s
 response should be handled in the buffer. If you’ve prompted the LLM to
 _“create a table of 5 common text editors”_ then you may wish for the
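Taken together, the recipes this patch documents — a renamed Ollama adapter, a remote URL in the `env` table, an API key passed via an "Authorization" header, and a pinned `schema.model.default` — compose into a single adapter definition. The sketch below is illustrative only: the hostname, the `OLLAMA_API_KEY` variable name and the `${api_key}` header interpolation are assumptions rather than part of the diff, so verify them against the adapter implementation before relying on them.

```lua
require("codecompanion").setup({
  adapters = {
    remote_llama3 = function()
      return require("codecompanion.adapters").extend("ollama", {
        name = "remote_llama3", -- differentiate it from the default ollama adapter
        env = {
          url = "https://ollama.example.com", -- assumed remote host
          api_key = "OLLAMA_API_KEY", -- assumed shell variable holding the key
        },
        headers = {
          -- Pass the key via an "Authorization" header, as the patch describes;
          -- the ${api_key} substitution is assumed to be resolved from `env`.
          ["Authorization"] = "Bearer ${api_key}",
        },
        schema = {
          model = {
            default = "llama3:latest", -- pin the default model
          },
        },
      })
    end,
  },
})
```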