# llamaVSCode

Using Llama with VSCode.

## Download and Install Ollama

Download and install Ollama.

## CodeLLAMA

There are multiple LLMs available for Ollama. In this case we will use Codellama, which can use text prompts to generate and discuss code. Once Ollama is installed, download the Codellama model:

```shell
ollama pull codellama
```

Check that the model is available locally:

```shell
ollama list
```

Run Codellama:

```shell
ollama run codellama
```

Test the model by typing a prompt at the interactive console.
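You can also test the model from a script. Ollama serves a local REST API (by default at `http://localhost:11434`); a minimal sketch using only the standard library, assuming the server is running with the default settings:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a default install on the default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON reply instead of a token stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate("codellama", "Write a function that reverses a string.")` returns the model's reply as a string.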

## Model File

A model file is the blueprint to create and share models with Ollama.

```
FROM codellama

# sets the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# sets the context window size to 1500 tokens; this controls how many tokens the LLM can use as context to generate the next token
PARAMETER num_ctx 1500

# sets a custom system message to specify the behavior of the chat assistant
SYSTEM You are an expert Code Assistant
```
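As an aside, the same parameters can also be overridden per request through the REST API's `options` field, without creating a new model. A small sketch of how such a request body could be built (the field names mirror the `PARAMETER` lines above):

```python
import json

def build_options_request(model: str, prompt: str,
                          temperature: float = 1.0, num_ctx: int = 1500) -> bytes:
    """Build a /api/generate body that overrides model parameters per request."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        # mirrors the PARAMETER lines in the Modelfile above
        "options": {"temperature": temperature, "num_ctx": num_ctx},
    }
    return json.dumps(payload).encode("utf-8")
```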

Activate the new configuration:

```shell
ollama create codegpt-codellama -f Modelfile
```

Check that the new configuration is listed:

```shell
ollama list
```
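The same check can be done programmatically: the local API's `/api/tags` endpoint returns the installed models as JSON. A sketch, assuming the server is running on the default port:

```python
import json
import urllib.request

def installed_models(base_url: str = "http://localhost:11434") -> list:
    """Return the names of models known to the local Ollama server."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def has_model(names, wanted: str) -> bool:
    """True if `wanted` matches an installed model name, with or without a tag."""
    return any(n == wanted or n.split(":")[0] == wanted for n in names)
```

With the server running, `has_model(installed_models(), "codegpt-codellama")` should return `True` after the `ollama create` step.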

Test the new configuration:

```shell
ollama run codegpt-codellama
```

## CodeGPT Extension

Install the CodeGPT extension in VSCode.

Then select Ollama from the provider dropdown menu, and select the configuration we created (codegpt-codellama).

## Generate Code

Prompt the model from the CodeGPT panel in VSCode to generate code.