This repository contains the configuration needed to run StackAI locally using docker compose.
- At least 64GB of RAM
- At least 16 CPU cores
- 1TB of disk space
- Ubuntu 24.04 LTS
- Python 3.10 or higher
- Internet access during the setup process
- Docker and Docker Compose (Compose version v2.26 or higher). Follow the instructions below to install them if needed and to check whether you meet this requirement.
- Access to StackAI's container image registry on Azure
- Depending on how you configure the containers, different ports need to be exposed. If you follow the default settings, the following ports must be exposed:
- Port 3000: TCP port used for the StackAI frontend HTTP
- Port 8000: TCP port used for the StackAI backend HTTP
- Port 8800: TCP port used for the Kong API Gateway HTTP
- Port 8443: TCP port used for the Kong API Gateway HTTPS
- Port 9000: TCP port used for the MinIO service.
If you set up the Caddy reverse proxy (see steps below), you may replace the ports above with ports 80 and/or 443.
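Before starting the containers, it can help to confirm that none of the default ports are already taken. A minimal check, assuming the ss utility from iproute2 is available on the host:

```bash
# Lists any process already listening on one of the default StackAI ports.
# No output means all of the default ports are free.
sudo ss -tlnp | grep -E ':(3000|8000|8800|8443|9000)\b'
```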
.
├── caddy
├── mongodb
├── scripts
├── supabase
├── stackend
├── stackrepl
├── stackweb
...
Each of the folders in the project contains the configuration for one of the services needed to run StackAI.
After running the environment variables initialization script (see below), each folder will contain a .env
file with the environment variables needed to run the service. These .env files, along with the docker-compose.yml files, are the most important configuration files and the ones you may need to edit.
Follow the instructions in the order they are presented.
Note: Throughout this setup guide, when you see cd /path/to/stackai-onprem, replace /path/to/stackai-onprem with the actual path to where you have cloned or downloaded this repository on your system.
# Ubuntu / Debian
sudo apt install make
# RHEL
sudo dnf install make
Make sure that make is installed correctly by running:
make --version
You will need docker and docker compose installed on your machine.
To install them, open a terminal and navigate to the root folder of the project by running:
cd /path/to/stackai-onprem
Then execute the following command:
WARNING: The script will log you out of your current session. Log in again after it finishes and verify that Docker was set up correctly by running the commands shown below.
make setup-docker-in-ubuntu
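The exact verification command is not reproduced here; standard Docker checks (an assumption, not necessarily the command the script prints) are:

```bash
# Confirm the Docker engine and the Compose plugin are installed and recent enough.
docker --version
docker compose version

# Confirm the daemon can actually pull and run a container.
docker run --rm hello-world
```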
The commands needed to install python, pip and virtualenv may change depending on your specific distribution.
The following commands should work for most Ubuntu based distributions:
Update the package index:
# Ubuntu / Debian
sudo apt update
# RHEL
sudo dnf update
Install python3, pip and virtualenv:
# Ubuntu / Debian
sudo apt install python3 python3-pip python3-venv
# RHEL
sudo dnf install python3.11 python3.11-pip
sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
Ensure that python is installed and working correctly. Start by making sure that the python version is >= 3.10:
python3 --version
Then, make sure that virtualenv is installed and working by running:
python3 -m venv .venv
As a result, you should see a new folder named .venv
in your current directory.
Then, make sure that you can source the virtual environment by running:
source .venv/bin/activate
Last, make sure that pip is working correctly in the virtual environment by running:
python3 -m pip install pymongo
And then:
python3 -c "import pymongo; print('pymongo imported successfully')"
You can remove the virtual environment by running:
deactivate
rm -rf .venv
You will need to log in to StackAI's container registry on Azure to pull the images we provided you with.
docker login -u <the_username_we_provided_you_with> -p <the_password_we_provided_you_with> stackai.azurecr.io
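If you prefer not to pass the password directly on the command line (where it can end up in your shell history), Docker also accepts it on stdin:

```bash
# Reads the registry password from stdin instead of a command-line flag.
echo '<the_password_we_provided_you_with>' | \
  docker login -u <the_username_we_provided_you_with> --password-stdin stackai.azurecr.io
```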
Each of the services has a series of environment variables that need to be configured in order to run it. In this step of the setup process, we will create the .env files for all the services. After the script finishes, you should find the following files in each service's folder:
supabase/.env
weaviate/.env
unstructured/.env
stackend/.env
stackrepl/.env
stackweb/.env
...
The script will initialize the environment variables with random secrets and a valid default configuration. We encourage you to manually review the generated values after the script finishes and make any adjustments needed, especially to the networking-related configuration.
cd /path/to/stackai-onprem
The script will prompt you to input the public IP or URL where the services will be exposed.
make install-environment-variables
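As a quick sanity check (this command is only a suggestion, not part of the official setup), you can confirm from the repository root that the .env files were generated:

```bash
# Lists every generated .env file one level below the repository root.
find . -maxdepth 2 -name ".env" -type f
```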
You can configure custom domains for the three main services: the frontend application, the API, and the Supabase backend. We recommend using a primary domain and two subdomains.
For example:
- APP URL:
https://stackai.onprem.com
- API URL:
https://api.stackai.onprem.com
- SUPABASE URL:
https://db.stackai.onprem.com
There are two steps to configure your domains:
Run the following command and enter your domains when prompted. This command will update all the necessary .env
files across the services.
make configure-domains
You also need to update the Caddyfile to reflect your new domains. Replace the placeholder domains in the file with the ones you have configured (a sketch of what the result might look like is shown below).
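The following is only an illustrative sketch: the upstream service names and ports (stackweb:3000, stackend:8000, kong:8000) are assumptions based on the default ports listed earlier, not values taken from this repository, so compare the generated example against the real caddy/Caddyfile rather than replacing it blindly.

```bash
# Writes a hypothetical example to caddy/Caddyfile.example for comparison with the
# real caddy/Caddyfile. The domains are the example ones from this guide; the
# upstream host:port pairs are assumptions and may not match your setup.
cat > caddy/Caddyfile.example <<'EOF'
stackai.onprem.com {
    reverse_proxy stackweb:3000
}

api.stackai.onprem.com {
    reverse_proxy stackend:8000
}

db.stackai.onprem.com {
    reverse_proxy kong:8000
}
EOF
```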
docker compose up -d
make run-postgres-migrations
make run-template-migrations
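Once the stack is up, a quick way to confirm that the containers are running and the frontend answers on its default port (assuming you kept port 3000 from the default configuration):

```bash
# Shows the state of every container defined in the compose project.
docker compose ps

# The frontend should answer with an HTTP status line on the default port.
curl -I http://localhost:3000
```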
In order to update the services, follow the instructions below (a consolidated example is shown after this list):
- Stop all the services with
docker compose stop
- Update the image field in the docker-compose.yml file of the service you want to update. You usually do not need to do this if you are using the latest version tag. For example, to update the stackend service from v1.0.0 to v1.0.1, this block:
stackend:
image: stackai.azurecr.io/stackai/stackend-backend:v1.0.0
Should be updated to:
stackend:
image: stackai.azurecr.io/stackai/stackend-backend:v1.0.1
- Pull the new images and restart the services with
docker compose pull
docker compose up
In the case of the frontend (stackweb), you will need to rebuild the image with
docker compose down stackweb
source stackweb/.env
docker compose build stackweb
docker compose up stackweb
- Run database migrations if needed
make run-postgres-migrations
make run-template-migrations
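Putting these steps together, an update run (using the hypothetical v1.0.1 tag from the example above) would look roughly like this:

```bash
cd /path/to/stackai-onprem

# 1. Stop the running services
docker compose stop

# 2. Edit the image tag in the service's docker-compose.yml entry (see the example above),
#    then pull the new image and bring the stack back up
docker compose pull
docker compose up -d

# 3. Run the database migrations if the release requires them
make run-postgres-migrations
make run-template-migrations
```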
- Navigate to the stackend folder.
- Configure the embedding models you want to use in the stackend/embeddings_config.toml file.
- Configure the local LLM models you want to use in the stackend/llm_local_config.toml and stackend/llm_config.toml files.
- Restart the services that depend on this configuration:
docker compose down stackend celery_worker
docker compose up stackend celery_worker
- Enable SAML in your instance:
make saml-enable
- Check the SAML configuration you need to set up in your IdP (Identity Provider):
make saml-status
- Register your IdP by running:
make saml-add-provider metadata_url='{idp-metadata-url}' domains='{comma-separated-domains}'
- You can list SAML providers by running:
make saml-list-providers
- You can delete a provider by running:
make saml-delete-provider provider_id='{provider-id}'
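For example, registering a hypothetical IdP (the metadata URL and domains below are placeholders, not real values) and then listing the result would look like:

```bash
# Placeholder values -- substitute your IdP's metadata URL and your users' email domains.
make saml-add-provider \
  metadata_url='https://idp.example.com/app/metadata.xml' \
  domains='example.com,corp.example.com'

make saml-list-providers
```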