- Minuet AI
- Features
- Requirements
- Installation
- Configuration
- API Keys
- System Prompt
- Providers
- Commands
- API
- FAQ
- TODO
- Contributing
- Acknowledgement
Minuet AI: Dance with Intelligence in Your Code.
Minuet-ai
brings the grace and harmony of a minuet to your coding process.
Just as dancers move in step during a minuet, this plugin keeps AI-powered completion in step with your typing.
- AI-powered code completion
- Support for multiple AI providers (OpenAI, Claude, Gemini, Codestral, Huggingface, and OpenAI-compatible services)
- Customizable configuration options
- Streaming support to enable completion delivery even with slower LLMs
- Support for nvim-cmp, blink-cmp, and virtual text frontends
With nvim-cmp / blink-cmp frontend:
With virtual text frontend:
- Neovim 0.10+.
- plenary.nvim
- optional: nvim-cmp
- optional: blink.cmp
- An API key for at least one of the supported AI providers
Lazy
specs = {
{
'milanglacier/minuet-ai.nvim',
config = function()
require('minuet').setup {
-- Your configuration options here
}
end,
},
{ 'nvim-lua/plenary.nvim' },
-- optional, if you are using virtual-text frontend, nvim-cmp is not
-- required.
{ 'hrsh7th/nvim-cmp' },
-- optional, if you are using virtual-text frontend, blink is not required.
{ 'Saghen/blink.cmp' },
}
Given the response speed and rate limits of LLM services, we recommend you
either invoke minuet
completion manually or use a cost-effective model for
auto-completion. The author recommends gemini-1.5-flash
for auto-completion.
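For example, switching the provider to Gemini for auto-completion is a one-line change (a minimal sketch; see the Gemini provider section for the full options):
require('minuet').setup {
    -- Use Gemini as the completion provider; requires the GEMINI_API_KEY
    -- environment variable (see the API Keys section). The default model
    -- for this provider is gemini-1.5-flash-latest.
    provider = 'gemini',
}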
Setting up with nvim-cmp:
require('cmp').setup {
    sources = {
        -- Include minuet as a source to enable autocompletion
        { name = 'minuet' },
        -- and your other sources
    },
    performance = {
        -- It is recommended to increase the timeout duration due to
        -- the typically slower response speed of LLMs compared to
        -- other completion sources. This is not needed when you only
        -- need manual completion.
        fetching_timeout = 2000,
    },
}
-- If you wish to invoke completion manually,
-- the following configuration binds the `<A-y>` key
-- to invoke minuet completion.
require('cmp').setup {
mapping = {
["<A-y>"] = require('minuet').make_cmp_map()
-- and your other keymappings
},
}
Setting up with virtual text:
require('minuet').setup {
virtualtext = {
auto_trigger_ft = {},
keymap = {
accept = '<A-A>',
accept_line = '<A-a>',
-- Cycle to prev completion item, or manually invoke completion
prev = '<A-[>',
-- Cycle to next completion item, or manually invoke completion
next = '<A-]>',
dismiss = '<A-e>',
},
},
}
Setting up with blink-cmp:
require('blink-cmp').setup {
keymap = {
-- Manually invoke minuet completion.
['<A-y>'] = {
function(cmp)
cmp.show { providers = { 'minuet' } }
end,
},
},
sources = {
-- Enable minuet for autocomplete
default = { 'lsp', 'path', 'buffer', 'snippets', 'minuet' },
-- For manual completion only, remove 'minuet' from default
providers = {
minuet = {
name = 'minuet',
module = 'minuet.blink',
score_offset = 8, -- Gives minuet higher priority among suggestions
},
},
},
}
If you are using a distribution like lazyvim, see the FAQ section for how to
configure minuet with lazyvim. Note that the author does not use lazyvim, so
the FAQ section is not guaranteed to work; PRs to fix any problems are welcome.
Minuet AI comes with the following defaults:
default_config = {
-- Enable or disable auto-completion. Note that you still need to add
-- Minuet to your cmp/blink sources. This option controls whether cmp/blink
-- will attempt to invoke minuet when minuet is included in cmp/blink
-- sources. This setting has no effect on manual completion; Minuet will
-- always be enabled when invoked manually. You can use the command
-- `MinuetToggle` to toggle this option.
cmp = {
enable_auto_complete = true,
},
blink = {
enable_auto_complete = true,
},
virtualtext = {
-- Specify the filetypes to enable automatic virtual text completion,
-- e.g., { 'python', 'lua' }. Note that you can still invoke manual
-- completion even if the filetype is not on your auto_trigger_ft list.
auto_trigger_ft = {},
keymap = {
accept = nil,
accept_line = nil,
-- Cycle to next completion item, or manually invoke completion
next = nil,
-- Cycle to prev completion item, or manually invoke completion
prev = nil,
dismiss = nil,
},
},
provider = 'codestral',
-- the maximum total characters of the context before and after the cursor
-- 12,800 characters typically equate to approximately 4,000 tokens for
-- LLMs.
context_window = 12800,
-- when the total characters exceed the context window, this ratio determines
-- how much context before versus after the cursor is used: the larger the
-- ratio, the more context before the cursor. This option should be between
-- 0 and 1; context_ratio = 0.75 means a 3:1 ratio of before- to after-cursor context.
context_ratio = 0.75,
throttle = 1000, -- only send the request every x milliseconds, use 0 to disable throttle.
-- debounce the request in x milliseconds, set to 0 to disable debounce
debounce = 400,
-- show notification when request is sent or request fails. options:
-- false to disable notification. Note that you should use false, not "false".
-- "verbose" for all notifications.
-- "warn" for warnings and above.
-- "error" just errors.
notify = 'verbose',
-- The request timeout, measured in seconds. When streaming is enabled
-- (stream = true), setting a shorter request_timeout allows for faster
-- retrieval of completion items, albeit potentially incomplete.
-- Conversely, with streaming disabled (stream = false), a timeout
-- occurring before the LLM returns results will yield no completion items.
request_timeout = 3,
-- if completion item has multiple lines, create another completion item only containing its first line.
add_single_line_entry = true,
-- The number of completion items (encoded as part of the prompt for the
-- chat LLM) requested from the language model. It's important to note that
-- when 'add_single_line_entry' is set to true, the actual number of
-- returned items may exceed this value. Additionally, the LLM cannot
-- guarantee the exact number of completion items specified, as this
-- parameter serves only as a prompt guideline.
n_completions = 3,
-- Defines the length of non-whitespace context after the cursor used to
-- filter completion text. Set to 0 to disable filtering.
--
-- Example: With after_cursor_filter_length = 3 and context:
--
-- "def fib(n):\n|\n\nfib(5)" (where | represents cursor position),
--
-- if the completion text contains "fib", then "fib" and subsequent text
-- will be removed. This setting filters repeated text generated by the
-- LLM. A large value (e.g., 15) is recommended to avoid false positives.
after_cursor_filter_length = 15,
-- proxy port to use
proxy = nil,
provider_options = {
-- see the documentation in each provider in the following part.
},
-- see the documentation in the `System Prompt` section
default_template = {
template = '...',
prompt = '...',
guidelines = '...',
n_completion_template = '...',
},
default_few_shots = { '...' },
}
Minuet AI requires API keys to function. Set the following environment variables:
- `OPENAI_API_KEY` for OpenAI
- `GEMINI_API_KEY` for Gemini
- `ANTHROPIC_API_KEY` for Claude
- `CODESTRAL_API_KEY` for Codestral
- `HF_API_KEY` for Huggingface
- A custom environment variable for OpenAI-compatible services (as specified in your configuration)
Note: Provide the name of the environment variable to Minuet, not the
actual value. For instance, pass OPENAI_API_KEY
to Minuet, not the value
itself (e.g., sk-xxxx
).
If using Ollama, you need to assign an arbitrary, non-empty environment variable as a placeholder for it to function.
See prompt for the default system prompt used by minuet
.
You can customize the template
by encoding placeholders within triple braces.
These placeholders will be interpolated using the corresponding key-value pairs
from the table. The value can be either a string or a function that takes no
arguments and returns a string.
Here's a simplified example for illustrative purposes (not intended for actual configuration):
system = {
    template = '{{{assistant}}}\n{{{role}}}',
    assistant = function() return 'you are a helpful assistant' end,
    role = "you are also a code expert.",
}
Note that n_completion_template is a special placeholder: it contains one %d,
which will be replaced with config.n_completions. If you want to customize
this template, make sure your prompt also contains exactly one %d.
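For illustration only (the template text below is hypothetical), a customized n_completion_template for the openai provider could look like this:
require('minuet').setup {
    provider_options = {
        openai = {
            system = {
                -- Hypothetical template text; keep exactly one %d, which is
                -- replaced with config.n_completions when the prompt is built.
                n_completion_template = 'Provide at most %d completion candidates.',
            },
        },
    },
}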
Similarly, few_shots
can be a table in the following form or a function that
takes no argument and returns a table in the following form:
{
{ role = "user", content = "something" },
{ role = "assistant", content = "something" }
-- ...
-- You can pass as many turns as you want
}
Below is an example to configure the prompt based on filetype:
require('minuet').setup {
provider_options = {
openai = {
system = {
prompt = function()
if vim.bo.ft == 'tex' then
return [[your prompt for completing prose.]]
else
return require('minuet.config').default_system.prompt
end
end,
},
few_shots = function()
if vim.bo.ft == 'tex' then
return {
-- your few shots examples for prose
}
else
return require('minuet.config').default_few_shots
end
end,
},
},
}
There's no need to replicate unchanged fields. The system will automatically
merge modified fields with default values using the tbl_deep_extend
function.
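For example, to change only the model for the openai provider (the model name below is just an illustration), the rest of the defaults are kept:
require('minuet').setup {
    provider = 'openai',
    provider_options = {
        openai = {
            -- Only this field is overridden; every other openai option keeps
            -- its default value through the deep merge.
            model = 'gpt-4o',
        },
    },
}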
The following is the default configuration for OpenAI:
provider_options = {
openai = {
model = 'gpt-4o-mini',
system = "see [System Prompt] section for the default value",
few_shots = "see [System Prompt] section for the default value",
stream = true,
optional = {
-- pass any additional parameters you want to send to OpenAI request,
-- e.g.
-- stop = { 'end' },
-- max_tokens = 256,
-- top_p = 0.9,
},
},
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by the model generating too many tokens.
provider_options = {
openai = {
optional = {
max_tokens = 256,
},
},
}
The following is the default configuration for Claude:
provider_options = {
claude = {
max_tokens = 512,
model = 'claude-3-5-haiku-20241022',
system = "see [System Prompt] section for the default value",
few_shots = "see [System Prompt] section for the default value",
stream = true,
optional = {
-- pass any additional parameters you want to send to claude request,
-- e.g.
-- stop_sequences = nil,
},
},
}
Codestral is a text completion model, not a chat model, so the system prompt
and few-shot examples do not apply. Note that you should use the
CODESTRAL_API_KEY, not the MISTRAL_API_KEY, as they use different endpoints.
To use the Mistral endpoint, simply modify the end_point and api_key
parameters in the configuration.
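For example, a sketch of switching to the Mistral endpoint (assuming the standard Mistral FIM completions URL and a MISTRAL_API_KEY environment variable):
provider_options = {
    codestral = {
        -- Assumption: Mistral's FIM completions endpoint; only these two
        -- fields change, the rest of the codestral defaults are kept.
        end_point = 'https://api.mistral.ai/v1/fim/completions',
        api_key = 'MISTRAL_API_KEY',
    },
}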
The following is the default configuration for Codestral:
provider_options = {
codestral = {
model = 'codestral-latest',
end_point = 'https://codestral.mistral.ai/v1/fim/completions',
api_key = 'CODESTRAL_API_KEY',
stream = true,
optional = {
stop = nil, -- the identifier to stop the completion generation
max_tokens = nil,
},
},
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by the model generating too many tokens.
provider_options = {
codestral = {
optional = {
max_tokens = 256,
stop = { '\n\n' },
},
},
}
The following config is the default.
provider_options = {
gemini = {
model = 'gemini-1.5-flash-latest',
system = "see [System Prompt] section for the default value",
few_shots = "see [System Prompt] section for the default value",
stream = true,
optional = {},
},
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by the model generating too many tokens. You can also adjust the safety settings following this example:
provider_options = {
gemini = {
optional = {
generationConfig = {
maxOutputTokens = 256,
},
safetySettings = {
{
-- HARM_CATEGORY_HATE_SPEECH,
-- HARM_CATEGORY_HARASSMENT
-- HARM_CATEGORY_SEXUALLY_EXPLICIT
category = 'HARM_CATEGORY_DANGEROUS_CONTENT',
-- BLOCK_NONE
threshold = 'BLOCK_ONLY_HIGH',
},
},
},
},
}
Use any provider compatible with OpenAI's chat completion API.
For example, you can set the end_point
to
http://localhost:11434/v1/chat/completions
to use ollama
.
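A minimal sketch of such a setup (assumptions: a locally running Ollama server and a chat model you have already pulled; the api_key simply names an arbitrary non-empty environment variable, since Ollama needs no real key):
provider_options = {
    openai_compatible = {
        -- Assumption: the model name is only an example; use any model
        -- available on your Ollama server.
        model = 'llama3.1',
        end_point = 'http://localhost:11434/v1/chat/completions',
        -- Ollama needs no real key; any non-empty environment variable
        -- works as a placeholder (see the API Keys section).
        api_key = 'TERM',
        name = 'Ollama',
        stream = true,
    },
}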
Note that not all OpenAI-compatible services support streaming. If your
service does not, set stream = false to disable it.
The following config is the default.
provider_options = {
openai_compatible = {
model = 'llama-3.3-70b-versatile',
system = "see [System Prompt] section for the default value",
few_shots = "see [System Prompt] section for the default value",
end_point = 'https://api.groq.com/openai/v1/chat/completions',
api_key = 'GROQ_API_KEY',
name = 'Groq',
stream = true,
optional = {
stop = nil,
max_tokens = nil,
},
}
}
Use any provider compatible with OpenAI's completion API. This request uses the text completion API, not chat completion, so system prompts and few-shot examples are not applicable.
For example, you can set the end_point
to
http://localhost:11434/v1/completions
to use ollama
.
Refer to the Completions Legacy section of the OpenAI documentation for details.
Note that not all OpenAI-compatible services support streaming. If your
service does not, set stream = false to disable it.
The following config is the default.
provider_options = {
openai_fim_compatible = {
model = 'deepseek-coder',
end_point = 'https://api.deepseek.com/beta/completions',
api_key = 'DEEPSEEK_API_KEY',
name = 'Deepseek',
stream = true,
optional = {
stop = nil,
max_tokens = nil,
},
}
}
The following configuration is not the default, but is recommended to prevent request timeouts caused by the model generating too many tokens.
provider_options = {
openai_fim_compatible = {
optional = {
max_tokens = 256,
stop = { '\n\n' },
},
},
}
Currently, only text completion models on Huggingface are supported, so the system prompt and few-shot examples do not apply.
provider_options = {
huggingface = {
end_point = 'https://api-inference.huggingface.co/models/bigcode/starcoder2-3b',
type = 'completion',
strategies = {
completion = {
markers = {
prefix = '<fim_prefix>',
suffix = '<fim_suffix>',
middle = '<fim_middle>',
},
strategy = 'PSM', -- PSM, SPM or PM
},
},
optional = {
parameters = {
-- The parameter specifications for different LLMs may vary.
-- Ensure you specify the parameters after reading the API
-- documentation.
stop = nil,
max_tokens = nil,
do_sample = nil,
},
},
},
}
This command allows you to change the provider after Minuet has been set up.
Enable or disable auto-completion for nvim-cmp
or blink.cmp
. Note that you
still need to add Minuet to your sources. This command controls whether
cmp/blink will attempt to invoke minuet when minuet is included in sources.
This command has no effect on manual completion; Minuet will always be enabled
when invoked manually.
Toggle the automatic display of virtual-text
completion in the current buffer
on or off.
minuet-ai.nvim
offers the following functions to customize your key mappings:
{
-- accept whole completion
require('minuet.virtualtext').action.accept,
-- accept by line
require('minuet.virtualtext').action.accept_line,
require('minuet.virtualtext').action.next,
require('minuet.virtualtext').action.prev,
require('minuet.virtualtext').action.dismiss,
-- whether the virtual text is visible in current buffer
require('minuet.virtualtext').action.is_visible,
}
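For instance, insert-mode keymaps built on these functions might look like the following sketch (the key choices are illustrative, not plugin defaults):
local vt = require('minuet.virtualtext').action

-- Accept the whole completion or just its first line
vim.keymap.set('i', '<A-A>', vt.accept, { desc = 'minuet: accept completion' })
vim.keymap.set('i', '<A-a>', vt.accept_line, { desc = 'minuet: accept line' })
-- Cycle between completion items, or dismiss the current suggestion
vim.keymap.set('i', '<A-]>', vt.next, { desc = 'minuet: next completion' })
vim.keymap.set('i', '<A-[>', vt.prev, { desc = 'minuet: previous completion' })
vim.keymap.set('i', '<A-e>', vt.dismiss, { desc = 'minuet: dismiss completion' })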
You can configure the icons of minuet
by using the following snippet
(referenced from cmp's
wiki):
local cmp = require('cmp')
cmp.setup {
    formatting = {
        format = function(entry, vim_item)
            -- Kind icons (kind_icons is your own table mapping completion-item
            -- kinds to icons; see the nvim-cmp wiki)
            vim_item.kind = string.format('%s %s', kind_icons[vim_item.kind], vim_item.kind) -- This concatenates the icons with the name of the item kind
            -- Source
            vim_item.menu = ({
                minuet = '[minuet]', -- replace with your preferred icon
            })[entry.source.name]
            return vim_item
        end,
    },
}
When using Minuet with auto-complete enabled, you may occasionally experience a
noticeable delay when pressing <CR>
to move to the next line. This occurs
because Minuet triggers autocompletion at the start of a new line, while cmp
blocks the <CR>
key, awaiting Minuet's response.
To address this issue, consider the following solutions:
- Unbind the <CR> key from your cmp keymap.
- Utilize cmp's internal API to avoid blocking calls, though be aware that this API may change without prior notice.
Here's an example of the second approach using Lua:
local cmp = require 'cmp'
opts.mapping = {
['<CR>'] = cmp.mapping(function(fallback)
-- use the internal non-blocking call to check if cmp is visible
if cmp.core.view:visible() then
cmp.confirm { select = true }
else
fallback()
end
end),
}
With nvim-cmp:
{
'milanglacier/minuet-ai.nvim',
config = function()
require('minuet').setup {
-- Your configuration options here
}
end
},
{
'nvim-cmp',
optional = true,
opts = function(_, opts)
-- if you wish to use autocomplete
table.insert(opts.sources, 1, {
name = 'minuet',
group_index = 1,
priority = 100,
})
opts.performance = {
-- It is recommended to increase the timeout duration due to
-- the typically slower response speed of LLMs compared to
-- other completion sources. This is not needed when you only
-- need manual completion.
fetching_timeout = 2000,
}
opts.mapping = vim.tbl_deep_extend('force', opts.mapping or {}, {
-- if you wish to use manual complete
['<A-y>'] = require('minuet').make_cmp_map(),
})
end,
}
With blink-cmp:
-- set the following line in your config/options.lua
vim.g.lazyvim_blink_main = true
{
'milanglacier/minuet-ai.nvim',
config = function()
require('minuet').setup {
-- Your configuration options here
}
end,
},
{
'saghen/blink.cmp',
optional = true,
opts = {
keymap = {
['<A-y>'] = {
function(cmp)
cmp.show { providers = { 'minuet' } }
end,
},
},
sources = {
-- if you want to use auto-complete
default = { 'minuet' },
providers = {
minuet = {
name = 'minuet',
module = 'minuet.blink',
score_offset = 100,
},
},
},
},
}
- Implement RAG on the codebase and encode the codebase information into the request to LLM.
- DONE: Virtual text UI support.
Contributions are welcome! Please feel free to submit a Pull Request.
- cmp-ai: Reference for the integration with nvim-cmp.
- continue.dev: Not a Neovim plugin, but I found many LLM models through it.
- copilot.lua: Reference for the virtual text frontend.