Welcome to the Chat-with-Code application repository! This project uses Retrieval-Augmented Generation (RAG) with LlamaIndex, LangChain, and Ollama to let you interact with codebases.
Before running the application, ensure you have set up the following environment variables:
- `GITHUB_TOKEN`: grants access to the GitHub API. Please set it in the `.env` file. Example `.env` file:

  ```
  GITHUB_TOKEN=your_github_token_here
  ```
- `OPEN_API_KEY`: used for specific functionality, such as accessing external APIs. On Linux, set it with:

  ```shell
  export OPEN_API_KEY=your_open_api_key_here
  ```

  On Windows, set it with:

  ```shell
  set OPEN_API_KEY=your_open_api_key_here
  ```
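As a quick sanity check before launching the app, a short Python snippet (not part of this repository; the variable names are taken from the steps above) can confirm both keys are visible to the process:

```python
import os

# Environment variables this README asks for (names from the steps above).
REQUIRED_VARS = ["GITHUB_TOKEN", "OPEN_API_KEY"]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```

If a variable is reported missing, re-check your `.env` file or re-run the `export`/`set` command in the same terminal you will launch the app from.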
To set up and activate a virtual environment and install all required dependencies, run:

```shell
# Create and activate the virtual environment
python -m venv venv
source venv/bin/activate   # Linux/Mac
venv\Scripts\activate      # Windows

# Install dependencies
pip install -r requirements.txt
```

Demo: [Screencast from 2024-03-10 21-57-18.webm](https://github.com/parvpareek/chat-with-repo/assets/26191530/a2ae9073-a9fd-4231-b889-a7c870426c70)
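To verify the install succeeded without starting the app, you can probe for the core libraries. The import names `llama_index` and `langchain` below are assumptions based on the packages this README names; adjust them if your `requirements.txt` differs:

```python
import importlib.util

def installed(module_name):
    """Return True if `module_name` is importable in the current environment."""
    return importlib.util.find_spec(module_name) is not None

if __name__ == "__main__":
    # Import names assumed from the libraries mentioned in this README.
    for name in ["llama_index", "langchain"]:
        print(f"{name}: {'ok' if installed(name) else 'MISSING'}")
```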
If you wish to use Ollama (a local language model runtime) for enhanced interactions, follow these steps:

- Install Ollama and its Python dependencies:

  ```shell
  sudo apt install curl
  curl -fsSL https://ollama.com/install.sh | sh
  pip install -r ollama.txt
  ```

- Once installed, pull the llama2 model:

  ```shell
  ollama pull llama2
  ```

- Once the llama2 model has been pulled, start the server in a separate terminal:

  ```shell
  ollama serve
  ```
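With `ollama serve` running, you can smoke-test the local model before using the app. The sketch below is not part of this repository; it calls Ollama's default REST endpoint at `http://localhost:11434/api/generate` using only the standard library:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="llama2", stream=False):
    """Build the JSON body expected by Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def ask_ollama(prompt, url=OLLAMA_URL):
    """Send one prompt to a running `ollama serve` and return its reply text."""
    req = request.Request(url, data=build_payload(prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires `ollama serve` to be running
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("Explain what a vector index is in one sentence."))
```

If the request fails with a connection error, the server is not running or is listening on a non-default port.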
After setting up the environment variables and installing dependencies, you can run the Chat-with-Code application:

```shell
python src/main.py
```
Feel free to explore the functionalities and engage in interactive code discussions!
For any issues or feedback, please open an issue in this repository. Thank you for using Chat-with-Code!