This is a chat bot with memory built on RAG. The project aims to build an agent that makes the most of its ability to remember and uses that memory to communicate with users.
2024-08-06:
First, install the required packages:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
Pick the build that suits your setup; the differences between versions are minor.
Then install the remaining dependencies:
pip install -r requirements.txt
To verify that the installation succeeded, run test.ipynb, which exercises the basic llama-index functionality.
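As a quick sanity check outside the notebook, a minimal sketch like the following should also run without errors. It assumes a small `data/` folder with a few text files and llama-index >= 0.10; it is not part of the repository.

```python
# Minimal llama-index smoke test (a sketch, not the project's actual code).
# Assumes a "data/" folder with a few text files and that an OpenAI key
# (or an equivalent local LLM/embedding setup) is already configured.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

response = index.as_query_engine().query("What is this document about?")
print(response)
```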
Select the method you want to use and run:
python main.py --method <method>
The values you can choose for <method> are listed below; a sketch of how the flag might be parsed follows the list.
- local -> use a local model.
- azure -> use Azure OpenAI.
- openai -> use OpenAI or an OpenAI-compatible domestic provider.
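The sketch below shows one way the `--method` flag could be handled with argparse; it is a hypothetical illustration, and the real main.py may be structured differently.

```python
# Hypothetical sketch of dispatching on --method; not the actual main.py.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(description="Chat bot with RAG memory")
    parser.add_argument(
        "--method",
        choices=["local", "azure", "openai"],
        required=True,
        help="Which LLM backend to use",
    )
    args = parser.parse_args()

    if args.method == "local":
        print("Using a local model")
    elif args.method == "azure":
        print("Using Azure OpenAI")
    else:
        print("Using OpenAI or an OpenAI-compatible provider")


if __name__ == "__main__":
    main()
```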
Before running, you need to set up your .env file:
# QQ bot credentials; if you do not have them yet, create a bot at https://q.qq.com/ first.
APPID =
TOKEN =
APPSECRET =
# OPENAI key
API_SECRET_KEY =
BASE_URL =
# language
LANGUAGE = Chinese
# Weather key
TOMORROW_API =
LOCATION =
# Azure OpenAI Service
AZURE_OPENAI_KEY=
AZURE_OPENAI_ENDPOINT=
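A minimal sketch of reading these values with python-dotenv is shown below; this is an assumption about how the configuration could be loaded, not the project's actual code.

```python
# Sketch of loading the .env values with python-dotenv (an assumption;
# the project may load its configuration differently).
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

appid = os.getenv("APPID")
token = os.getenv("TOKEN")
app_secret = os.getenv("APPSECRET")
openai_key = os.getenv("API_SECRET_KEY")
azure_key = os.getenv("AZURE_OPENAI_KEY")
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")

if not (appid and token and app_secret):
    raise RuntimeError("QQ bot credentials are missing from .env")
```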
Since the project is not finished yet, the .env file will be updated as the project develops.
- Our short-term goal is to learn the llama-index library, build both a vector RAG and a graph RAG, compare the two approaches, and combine them into something more efficient than either one alone (see the sketch after this list).
- Our long-term goal is to build a chat bot whose memory runs through these RAG pipelines: an agent that collects data for you, searches online, and talks to you using its own memory.
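For reference, the sketch below shows how the two RAG variants could be built side by side with llama-index. The API names are from llama-index >= 0.10 and the `data/` folder is assumed; this is an illustration of the comparison we plan to make, not the project's implementation.

```python
# Rough comparison sketch of vector RAG vs. graph RAG in llama-index
# (illustrative only; not the project's actual code).
from llama_index.core import (
    KnowledgeGraphIndex,
    SimpleDirectoryReader,
    VectorStoreIndex,
)

documents = SimpleDirectoryReader("data").load_data()

# Vector RAG: retrieve chunks by embedding similarity.
vector_index = VectorStoreIndex.from_documents(documents)
print(vector_index.as_query_engine().query("What does the user like?"))

# Graph RAG: extract (subject, relation, object) triplets and retrieve
# by traversing the resulting knowledge graph.
graph_index = KnowledgeGraphIndex.from_documents(documents, max_triplets_per_chunk=2)
print(graph_index.as_query_engine().query("What does the user like?"))
```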
If you are interested in this project, please contact us.