Follow the official instructions to install Docker. Also, add your user to the docker group by following the Linux post-installation steps.
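In short, those post-installation steps boil down to the following (from Docker's official documentation):
sudo groupadd docker   # the group usually exists already
sudo usermod -aG docker $USER
newgrp docker          # or log out and back in to apply the new group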
docker network create mblt_int
docker compose build
docker compose up
If you want to run it with a GPU, you must install the NVIDIA Container Toolkit by following this guide. Then, you can run the demo with the following command:
docker compose -f docker-compose.yml -f docker-compose.gpu.yml up
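To verify the toolkit installation first, you can run NVIDIA's sample workload (this check comes from the toolkit's install guide; the ubuntu image is just a throwaway test container):
docker run --rm --gpus all ubuntu nvidia-smi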
This demo was originally implemented for demonstration purposes only, so it supports only a single user. However, you can make it available to multiple users by setting the production environment variable: copy backend/src/.env.example to backend/src/.env and set PRODUCTION="True", as shown below.
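For example:
cp backend/src/.env.example backend/src/.env
# then open backend/src/.env and set:
# PRODUCTION="True"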
In production mode, changing the model does not take effect immediately; instead, the server automatically switches models for each LLM request as needed.
You can change the list of LLM models by editing backend/src/models.txt. These changes are applied when the server is restarted.
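A hypothetical models.txt, assuming the file lists one model identifier per line; check the file shipped in the repository for the actual format and the model names your hardware supports:
# hypothetical entries, for illustration only
meta-llama/Llama-3.1-8B-Instruct
Qwen/Qwen2.5-7B-Instruct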
You can change the system prompts without rebuilding the Docker image by editing backend/src/system.txt and backend/src/inter-prompt.txt. These files are re-read when the conversation is reset.
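For example, to replace the system prompt (the prompt text here is purely illustrative):
echo "You are a concise, helpful assistant." > backend/src/system.txt
# takes effect the next time the conversation is reset; no rebuild needed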
docker compose up -d
docker compose down
The path to this repository should be /home/mobilint/aries-llm-demo. If needed, change the path in the llm-demo.desktop and run.sh files before copying.
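For instance, assuming both files reference the default path literally (verify before running), you can rewrite it in one step:
sed -i "s#/home/mobilint/aries-llm-demo#$PWD#g" llm-demo.desktop run.sh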
sudo cp llm-demo.desktop /usr/share/applications
Then, you can add the LLM icon from the app grid to your favorites (left sidebar).