llama-copilot is a Neovim plugin that integrates with ollama's AI models for code completion.
Install it using any plugin manager; it requires nvim-lua/plenary.nvim.
With packer:

```lua
use {
  "Faywyn/llama-copilot.nvim",
  requires = "nvim-lua/plenary.nvim"
}
```
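Any other plugin manager works the same way; as a sketch, here is the equivalent spec for lazy.nvim, assuming lazy.nvim's standard plugin-spec format:

```lua
-- Equivalent plugin spec for lazy.nvim (sketch, not from the
-- plugin's own docs): lazy.nvim uses `dependencies` where packer
-- uses `requires`.
{
  "Faywyn/llama-copilot.nvim",
  dependencies = { "nvim-lua/plenary.nvim" },
}
```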
Calling the setup function is not required; it is only necessary if you want to use another LLM or host.
```lua
-- Default config
require('llama-copilot').setup({
  host = "localhost",
  port = "11434",
  model = "codellama:7b-code",
  max_completion_size = 15, -- use -1 for limitless
  debug = false
})
```
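For example, to point the plugin at an ollama instance on another machine and use a different model, override the relevant fields. The host address and model name below are placeholders; substitute whatever your ollama server actually exposes:

```lua
-- Example custom setup (values are placeholders, not defaults)
require('llama-copilot').setup({
  host = "192.168.1.10",       -- machine running `ollama serve`
  port = "11434",              -- ollama's default port
  model = "codellama:13b-code",
  max_completion_size = -1,    -- -1 removes the completion size limit
})
```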
- nvim-lua/plenary.nvim
- ollama running locally, with at least one model pulled (e.g. `ollama pull codellama:7b-code`)
Note: built initially for codellama:7b-code (and models up to 70b); it hasn't been tested with other LLM models.
llama-copilot provides the user commands `:LlamaCopilotComplet` and `:LlamaCopilotAccept`, which trigger code generation (based on the current context) and accept the generated code.
Here's how you can use it:
- Position your cursor where you want to generate code.
- Type `:LlamaCopilotComplet` and press Enter.
- Wait for the code to generate.
- Type `:LlamaCopilotAccept` to place the completion in your file, or `:q` to quit the open window.
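If you trigger completions often, you can optionally bind the two commands to keys. The mappings below are an illustration, not part of the plugin; pick whatever keys suit you:

```lua
-- Optional example keymaps for the plugin's user commands
vim.keymap.set('n', '<leader>cc', ':LlamaCopilotComplet<CR>',
  { desc = 'llama-copilot: complete at cursor' })
vim.keymap.set('n', '<leader>ca', ':LlamaCopilotAccept<CR>',
  { desc = 'llama-copilot: accept completion' })
```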