plugins.llm.settings
Options provided to the require('llm').setup function.
Type: attribute set of anything
Default:
{ }
Example:
{
  keys = {
    "Input:Cancel" = {
      key = "<C-c>";
      mode = "n";
    };
    "Input:Submit" = {
      key = "<cr>";
      mode = "n";
    };
  };
  max_history = 15;
  max_tokens = 1024;
  model = "glm-4-flash";
  prefix = {
    assistant = {
      hl = "Added";
      text = "⚡ ";
    };
    user = {
      hl = "Title";
      text = "😃 ";
    };
  };
  save_session = true;
  url = "https://open.bigmodel.cn/api/paas/v4/chat/completions";
}
Declared by:
plugins.llm.settings.enable_suggestions_on_files
Lets you enable suggestions only on specific files that match the pattern matching syntax you provide.
It can either be a string or a list of strings, for example:
- to match on all types of buffers: "*"
- to match on all files in my_project/: "/path/to/my_project/*"
- to match on all python and rust files: [ "*.py" "*.rs" ]
Type: null or string or list of string or raw lua code
Default:
null
Plugin default: "*"
Example:
[
  "*.py"
  "*.rs"
]
Declared by:
plugins.llm.settings.enable_suggestions_on_startup
Lets you enable or disable “suggest-as-you-type” suggestions on Neovim startup.
You can then toggle auto suggestions with the LLMToggleAutoSuggest command.
Type: null or boolean or raw lua code
Default:
null
Plugin default: true
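For example, to start Neovim with auto suggestions off and toggle them on demand with LLMToggleAutoSuggest:

```nix
plugins.llm.settings = {
  enable_suggestions_on_startup = false;
};
```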
Declared by:
plugins.llm.settings.accept_keymap
Keymap to accept the model suggestion.
Type: null or string or raw lua code
Default:
null
Plugin default: "<Tab>"
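For instance, to accept a suggestion with Ctrl-y and dismiss it with Ctrl-e instead of the Tab defaults (the keys chosen here are purely illustrative):

```nix
plugins.llm.settings = {
  accept_keymap = "<C-y>";
  dismiss_keymap = "<C-e>";
};
```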
Declared by:
plugins.llm.settings.api_token
Token for authenticating to the backend provider.
When api_token is set, it is passed as a header: Authorization: Bearer <api_token>.
Type: null or string or raw lua code
Default:
null
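Rather than hard-coding a secret in your Nix configuration, you can read it at runtime via raw Lua. A minimal sketch, assuming NixVim's __raw convention for raw Lua values and an HF_TOKEN environment variable (the variable name is an assumption):

```nix
plugins.llm.settings.api_token = {
  # Assumption: HF_TOKEN is exported in the environment Neovim runs in.
  __raw = ''os.getenv("HF_TOKEN")'';
};
```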
Declared by:
plugins.llm.settings.backend
Which backend to use for inference.
Type: null or string or raw lua code
Default:
null
Plugin default: "huggingface"
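As a sketch of a non-default backend, assuming llm-ls accepts an "ollama" backend value with a local server (the backend name, url, and model below come from upstream llm.nvim conventions and are not confirmed by this document):

```nix
plugins.llm.settings = {
  backend = "ollama";                 # assumption: one of the non-default backends
  url = "http://localhost:11434";     # assumption: default local Ollama endpoint
  model = "codellama:7b";             # assumption: any model pulled into Ollama
};
```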
Declared by:
plugins.llm.settings.context_window
Size of the context window (in tokens).
Type: null or unsigned integer, meaning >=0, or raw lua code
Default:
null
Plugin default: 1024
Declared by:
plugins.llm.settings.debounce_ms
Time in ms to wait before updating.
Type: null or unsigned integer, meaning >=0, or raw lua code
Default:
null
Plugin default: 150
Declared by:
plugins.llm.settings.disable_url_path_completion
llm-ls will try to add the correct path to the url to get completions if it does not already end with said path.
You can disable this behavior by setting this option to true.
Type: null or boolean or raw lua code
Default:
null
Plugin default: false
Declared by:
plugins.llm.settings.dismiss_keymap
Keymap to dismiss the model suggestion.
Type: null or string or raw lua code
Default:
null
Plugin default: "<S-Tab>"
Declared by:
plugins.llm.settings.model
The model ID; behavior depends on the backend.
Type: null or string or raw lua code
Default:
null
Example:
"bigcode/starcoder2-15b"
Declared by:
plugins.llm.settings.tls_skip_verify_insecure
Whether to skip TLS verification when accessing the backend.
Type: null or boolean or raw lua code
Default:
null
Plugin default: false
Declared by:
plugins.llm.settings.tokenizer
llm-ls uses tokenizers to make sure the prompt fits the context_window.
To configure it, you have a few options:
- No tokenization: llm-ls will count the number of characters instead. Leave this option set to null (default).
- From a local file on your disk: set the path attribute.
- From a Hugging Face repository: llm-ls will attempt to download tokenizer.json at the root of the repository.
- From an HTTP endpoint: llm-ls will attempt to download a file via an HTTP GET request.
Type: null or (submodule) or (submodule) or (submodule) or raw lua code
Default:
null
Plugin default: null
Example:
{
path = "/path/to/my/tokenizer.json";
}
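To fetch the tokenizer from a Hugging Face repository instead of a local file, a sketch assuming the submodule accepts a repository attribute (the attribute name follows upstream llm.nvim and is an assumption here):

```nix
plugins.llm.settings.tokenizer = {
  # Assumption: llm-ls downloads tokenizer.json from the root of this repo.
  repository = "bigcode/starcoder2-15b";
};
```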
Declared by:
plugins.llm.settings.tokens_to_clear
List of tokens to remove from the model’s output.
Type: null or (list of (string or raw lua code)) or raw lua code
Default:
null
Plugin default:
[
"<|endoftext|>"
]
Declared by:
plugins.llm.settings.url
The HTTP URL of the backend.
Type: null or string or raw lua code
Default:
null
Plugin default: null
Declared by: