n8n Demo setup

This repo helps you quickly bootstrap an n8n demo environment using Docker Compose.

Requirements

  • Docker Compose
  • Optionally, an NVIDIA GPU for faster inference with Ollama
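Before setting up, you can sanity-check the requirements from a terminal. This is a minimal sketch; the `nvidia-smi` check only matters if you intend to use the GPU profile, and it assumes the NVIDIA driver is already installed on the host:

```shell
# Confirm the Docker Compose v2 plugin is available
docker compose version

# Optional: confirm an NVIDIA GPU is visible to the host
# (only needed if you plan to run the gpu-nvidia profile)
nvidia-smi
```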

Setup

  • Clone this repo
  • Optionally, edit the credentials in the `.env` file
  • Start the containers:
    • If you have an NVIDIA GPU, run `docker compose --profile gpu-nvidia up`
    • Otherwise, to run the inference services on your CPU, run `docker compose --profile cpu up`
  • Wait a couple of minutes for all the containers to become healthy
  • Open http://localhost:5678 in your browser and complete the initial setup
  • Open the included workflow: http://localhost:5678/workflow/srOnR8PAY3u4RSwb
  • Wait until Ollama has finished downloading the llama3.1 model (you can follow progress in the container logs)
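The steps above can be sketched as a single terminal session. This is an illustrative sequence, not a script from the repo; the `ollama` service name used in the log command is an assumption about the compose file:

```shell
# Clone the repo and enter it
git clone https://github.com/Morni-Andoka/self-hosted-ai-demo.git
cd self-hosted-ai-demo

# Optionally adjust the credentials before first start
${EDITOR:-nano} .env

# Start the stack with ONE of the two profiles:
docker compose --profile gpu-nvidia up   # if an NVIDIA GPU is available
docker compose --profile cpu up          # CPU-only inference

# In another terminal, watch the model download
# (assumes the Ollama service is named "ollama" in the compose file)
docker compose logs -f ollama
```

Once the containers report healthy, open http://localhost:5678 in your browser.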

Included service endpoints

Updating

  • Run `docker compose pull` to fetch the latest images
  • Run `docker compose create && docker compose up -d` to update and restart all the containers
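As a copy-paste block, an update looks like this (run from the repo directory; if you started the stack with a profile, pass the same `--profile` flag here as well):

```shell
# Fetch the latest images for all services
docker compose pull

# Recreate containers from the new images and restart them in the background
docker compose create && docker compose up -d
```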

About

This repo contains all the infrastructure code to spin up quick demo environments of n8n
