A full‑stack, from‑scratch clone of n8n — built and maintained by sohampirale.
This repository is not a fork of the upstream project. It is an independently implemented workflow automation platform inspired by n8n that includes the editor, runtime, persistence, and a set of built‑in nodes and tools — including a custom AI Agent node and messaging/email tools (Telegram send & wait, Gmail send, and more).
Quick links
- Project: sohampirale/n8n_clone
- Maintainer: sohampirale
n8n_clone is a self-built workflow automation system with:
- Visual workflow editor (drag & drop nodes)
- Workflow runtime and execution engine
- Webhook handling and public webhook exposure support
- Credentials and secure storage
- Multiple persistence options (SQLite for dev, PostgreSQL recommended for production)
- Built-in nodes for common integrations, plus a set of custom nodes:
  - AI Agent node (LLM orchestration)
  - Telegram Send node
  - Telegram WaitForResponse node
  - Gmail Send node
  - Other HTTP / JSON / utility nodes
Key goals
- Provide an extensible workflow automation platform suitable for experiments and production use
- Make it straightforward to add custom nodes and tools (AI tooling and messaging integrations)
- Keep the developer experience ergonomic (hot reload, TypeScript, Docker support)
AI Agent node
- Orchestrates prompts, chains, tools, and decisions inside workflows.
- Can call LLM providers (OpenAI, Anthropic, local LLM endpoints) depending on configured provider keys.
- Exposes inputs for prompt template, context, memory, and tool invocation spec.
- Can return structured data (JSON) or produce human‑readable text.
- Supports tool integration so the agent can call Telegram or Gmail tools (send messages, wait for user replies, send emails) as part of a single workflow run.
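The tool-integration mechanism described above can be sketched as a dispatcher that routes an LLM's requested tool call to the matching adapter. This is an illustrative sketch, not the repo's actual API; the names `ToolSpec` and `dispatchToolCall` are hypothetical.

```typescript
// Hypothetical sketch of agent tool dispatch. The real adapters would wrap
// the Telegram Bot API, Gmail, HTTP, etc.
interface ToolSpec {
  name: string; // e.g. "telegram.send"
  execute(args: Record<string, unknown>): Promise<unknown>;
}

// Route a tool call requested by the LLM to the matching tool adapter.
async function dispatchToolCall(
  tools: ToolSpec[],
  call: { name: string; args: Record<string, unknown> },
): Promise<unknown> {
  const tool = tools.find((t) => t.name === call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool.execute(call.args);
}

// A stub Telegram tool the agent could invoke mid-run.
const telegramSendTool: ToolSpec = {
  name: "telegram.send",
  async execute(args) {
    // A real adapter would POST to the Telegram Bot API here.
    return { ok: true, chatId: args.chatId };
  },
};
```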
Telegram Send + WaitForResponse
- Telegram Send: sends messages to Telegram chat IDs using a bot token.
- Telegram WaitForResponse: opens a blocking or background wait for the next reply (useful for multi-turn conversations).
- Typical usage: the workflow sends a question with Telegram Send, then yields to WaitForResponse, which resumes the workflow when a reply arrives.
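The suspend/resume pattern behind WaitForResponse can be sketched as a map of pending resumptions keyed by chat ID: the node parks a promise, and the bot's update handler resolves it when a reply arrives. This is illustrative only; function and variable names are hypothetical, not the repo's implementation.

```typescript
// Hypothetical sketch of the WaitForResponse suspend/resume mechanism.
type Resume = (reply: string) => void;
const pending = new Map<string, Resume>();

// Called by the WaitForResponse node: suspend until a reply arrives.
function waitForReply(chatId: string): Promise<string> {
  return new Promise((resolve) => pending.set(chatId, resolve));
}

// Called by the webhook/polling handler when Telegram delivers an update.
// Returns false if no workflow is waiting on this chat.
function onTelegramUpdate(chatId: string, text: string): boolean {
  const resume = pending.get(chatId);
  if (!resume) return false;
  pending.delete(chatId);
  resume(text);
  return true;
}
```

A production version would also persist the pending state (so waits survive restarts) and apply a timeout.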
Gmail Send
- Sends email using Gmail SMTP or Gmail API credentials (OAuth or app passwords depending on your setup).
- Supports attachments, HTML bodies, recipients, CC/BCC, and templating variables.
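The templating-variables feature mentioned above amounts to substituting placeholders from incoming workflow data into the subject/body. A minimal sketch, assuming a `{{variable}}` syntax (the helper name and syntax are illustrative, not the repo's API):

```typescript
// Hypothetical template renderer: fills {{name}} placeholders from
// workflow data, leaving unknown placeholders intact.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    key in vars ? vars[key] : match,
  );
}

const body = renderTemplate(
  "Hello {{name}}, your order {{orderId}} has shipped.",
  { name: "Ada", orderId: "A-1001" },
);
// body === "Hello Ada, your order A-1001 has shipped."
```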
- Frontend: TypeScript React-based editor (single-page app)
- Backend: TypeScript Node server providing:
  - REST & WebSocket APIs for editor and runtime control
  - Webhook endpoints for triggers
  - Execution engine that runs nodes in series/parallel with retry and error handling
- Persistence: TypeORM / Prisma or a custom DB layer (SQLite for quick start, Postgres for production)
- Nodes: each node is implemented as a TypeScript module with a defined input schema, output schema, and execute() handler
- Agent/Tooling: the Agent node loads tool definitions (adapters wrapping Telegram/Gmail/HTTP/etc.) and LLM clients
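The node-module shape described above might look like the following. These interface names are illustrative; check src/nodes/ for the repo's actual definitions.

```typescript
// Hypothetical shapes for a node module: descriptor + execute() handler.
interface NodeDescriptor {
  name: string;            // machine name, e.g. "httpRequest"
  displayName: string;     // label shown in the editor
  description: string;
  inputs: string[];        // input connection names
  outputs: string[];       // output connection names
  credentials?: string[];  // credential types this node can use
}

interface ExecutionContext {
  getCredential(type: string): Promise<Record<string, string>>;
  logger: { info(msg: string): void };
}

interface WorkflowNode {
  descriptor: NodeDescriptor;
  execute(input: unknown[], ctx: ExecutionContext): Promise<unknown[]>;
}

// A minimal no-op node conforming to the shape:
const passthrough: WorkflowNode = {
  descriptor: {
    name: "passthrough",
    displayName: "Passthrough",
    description: "Emits its input unchanged",
    inputs: ["main"],
    outputs: ["main"],
  },
  async execute(input) {
    return input;
  },
};
```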
- Node.js 18+ (LTS recommended)
- pnpm / npm / yarn (pnpm recommended for monorepos)
- Docker & Docker Compose (recommended for production or to avoid local DB setup)
- PostgreSQL (for production)
- Optional API keys:
- OPENAI_API_KEY (if using OpenAI)
- TELEGRAM_BOT_TOKEN
- GMAIL_CLIENT_ID / GMAIL_CLIENT_SECRET / GMAIL_REFRESH_TOKEN (if using Gmail API)
Below are commonly used environment variables. Adjust names to match your repo's .env.example.
- N8N_PORT — port to run the web UI and API (default: 5678)
- N8N_HOST — host (default: 0.0.0.0)
- DB_TYPE — sqlite | postgres
- DB_SQLITE_PATH — path to sqlite file (e.g., ./data/database.sqlite)
- DB_POSTGRES_HOST
- DB_POSTGRES_PORT
- DB_POSTGRES_USER
- DB_POSTGRES_PASSWORD
- DB_POSTGRES_DATABASE
- NODE_ENV — development | production
- OPENAI_API_KEY — API key for OpenAI (if using)
- AGENT_DEFAULT_MODEL — default LLM model for AI Agent node
- TELEGRAM_BOT_TOKEN — for Telegram nodes
- GMAIL_* — keys/tokens for Gmail integration
- WEBHOOK_URL — public URL for webhooks (if behind proxy)
- N8N_BASIC_AUTH_ACTIVE — true/false
- N8N_BASIC_AUTH_USER — basic auth username
- N8N_BASIC_AUTH_PASSWORD — basic auth password
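A sample .env for local development, using the variable names listed above (placeholder values only; confirm names against the repo's .env.example):

```shell
# Local development .env (example values — replace the placeholders)
N8N_PORT=5678
N8N_HOST=0.0.0.0
NODE_ENV=development
DB_TYPE=sqlite
DB_SQLITE_PATH=./data/database.sqlite
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=changeme
TELEGRAM_BOT_TOKEN=replace-me
OPENAI_API_KEY=replace-me
AGENT_DEFAULT_MODEL=replace-me
```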
- Create or update docker-compose.yml with your configuration. Example:

```yaml
version: "3.8"
services:
  app:
    build: .
    image: sohampirale/n8n_clone:latest # replace with your image name
    ports:
      - "5678:5678"
    environment:
      - N8N_PORT=5678
      - DB_TYPE=sqlite
      - DB_SQLITE_PATH=/data/database.sqlite
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=changeme
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - ./data:/data
```

- Start:

```shell
docker compose up -d
```

- Visit: http://localhost:5678
- Clone:

```shell
git clone https://github.com/sohampirale/n8n_clone.git
cd n8n_clone
```

- Install dependencies:

```shell
# pnpm
pnpm install
# or npm / yarn
npm install
```

- Copy .env.example to .env and populate keys:

```shell
cp .env.example .env
# Edit .env to add TELEGRAM_BOT_TOKEN, OPENAI_API_KEY, DB config, etc.
```

- Run in development mode (hot reload):

```shell
pnpm dev
# or
npm run dev
```

- Build & start (production mode):

```shell
pnpm build
pnpm start
```

- Unit tests: npm run test (or pnpm test)
- Integration / E2E: follow instructions in the tests folder (may rely on external services / Docker)
Each node in this platform follows a simple pattern:
- Node metadata (name, displayName, description, inputs/outputs, credentials)
- Node parameter definitions (fields shown in the editor)
- Execution function which receives input data and context and returns output data
- Tool declarations (for Agent tools)
To add a node:
- Create a new TypeScript module in src/nodes/
- Export the node descriptor and the execute handler
- Register the node in the node registry (src/nodes/index.ts)
- Rebuild and test in the editor
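The steps above can be sketched end to end. The node shape, file paths, and registry API here are hypothetical; mirror whatever src/nodes/index.ts actually exports.

```typescript
// Hypothetical minimal node + registry, following the steps above.
type Item = Record<string, unknown>;

interface NodeModule {
  name: string;
  displayName: string;
  execute(items: Item[]): Promise<Item[]>;
}

// e.g. src/nodes/Uppercase.node.ts — a node that uppercases a text field
const UppercaseNode: NodeModule = {
  name: "uppercase",
  displayName: "Uppercase",
  async execute(items) {
    return items.map((item) => ({
      ...item,
      text: String(item.text ?? "").toUpperCase(),
    }));
  },
};

// e.g. src/nodes/index.ts — the node registry
const nodeRegistry = new Map<string, NodeModule>();
function registerNode(node: NodeModule): void {
  nodeRegistry.set(node.name, node);
}
registerNode(UppercaseNode);
```

After rebuilding, the registered node should appear in the editor's node palette.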
The AI Agent node enables LLM-driven workflows with tool access.
Typical fields:
- model: string (model identifier)
- promptTemplate: string (handles templating using incoming workflow data)
- tools: list of tool connectors (TelegramSend, GmailSend, HTTP, etc.)
- maxTokens, temperature, top_p, stop
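Put together, an Agent node's saved configuration might look like the fragment below. The field names mirror the list above, but the exact JSON shape is illustrative, not the repo's serialization format.

```json
{
  "type": "aiAgent",
  "parameters": {
    "model": "gpt-4o-mini",
    "promptTemplate": "Answer the user's question: {{question}}",
    "tools": ["TelegramSend", "TelegramWaitForResponse", "GmailSend"],
    "maxTokens": 1024,
    "temperature": 0.2
  }
}
```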
Example flow (high level):
- Webhook trigger receives a user request.
- AI Agent node generates a response and determines it needs to ask the user a follow-up on Telegram.
- Telegram Send node sends the question.
- Telegram WaitForResponse node pauses the workflow until user replies.
- The reply resumes the workflow; AI Agent continues processing and sends a final email using Gmail Send.
- Treat API keys and credentials as secrets. Use environment variables or secret stores in production.
- When enabling webhooks, ensure you configure a secure WEBHOOK_URL and optionally use signing/verification.
- If using OAuth credentials (Gmail), store refresh tokens securely.
- Limit access to the editor (enable basic auth or other auth mechanisms).
- Use PostgreSQL for production (DB_TYPE=postgres).
- Run the app behind a reverse proxy (NGINX) with HTTPS.
- Use process managers (systemd, Docker, or Kubernetes) and monitor memory/CPU.
- Periodically backup the database and credential store.
- Consider separating execution workers from the API server for high throughput workflows.
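For the reverse-proxy recommendation, an NGINX server block along these lines works; adjust server_name, certificate paths, and the upstream port to your deployment. Note the Upgrade/Connection headers, which the editor's WebSocket API needs.

```nginx
server {
    listen 443 ssl;
    server_name automation.example.com;

    ssl_certificate     /etc/letsencrypt/live/automation.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/automation.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_http_version 1.1;
        # WebSocket support for the editor/runtime APIs
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```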
- App won't start:
- Check logs: docker compose logs -f app, or the terminal output of pnpm start
- Ensure DB is reachable and migrations applied
- Webhooks not firing:
- Confirm WEBHOOK_URL is correct and reachable from outside
- Check firewall or reverse proxy for route/port issues
- Telegram messages not delivered:
- Verify TELEGRAM_BOT_TOKEN and chat_id are correct
- Check Telegram API rate limits
Thanks for considering contributing! Typical contribution workflow:
- Fork or branch from main
- Add your feature or fix
- Add tests where appropriate
- Open a pull request with a clear description and testing steps
Please follow the repo's code style (TypeScript, lint rules).