Production-ready Elasticsearch, Kibana, and APM Server deployment with automated setup
This repository provides a complete, production-ready Elastic Stack 9.2.0 setup with:
- 🔍 Elasticsearch - Distributed search and analytics engine
- 📊 Kibana - Data visualization and management
- 📈 APM Server - Application Performance Monitoring
- 🐳 Container-based - Works with Docker or Podman
- 🔐 Security enabled - Built-in authentication and authorization
- 🚀 Automated setup - One-command deployment
```bash
# Clone and navigate to the project
git clone https://github.com/siyamsarker/elastic-apm-quickstart.git
cd elastic-apm-quickstart

# Create the environment file from the example
cp .env.example .env

# Edit the .env file and set your passwords
# IMPORTANT: Replace all 'changeme' values with strong, unique passwords
nano .env  # or use your preferred editor

# Make the setup script executable
chmod +x setup.sh

# Run the automated setup (detects Docker/Podman automatically)
./setup.sh
```

That's it! 🎉 Your Elastic Stack will be ready in minutes.
- Memory: Minimum 4GB RAM available for containers
- Ports: 9200, 5601, and 8200 must be available
- OS: macOS, Linux, or Windows with WSL2
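To confirm these up front, a quick check along these lines can help (an illustrative sketch; the port list matches the requirements above):

```bash
# Check that the required ports are free
for port in 9200 5601 8200; do
  if lsof -i ":$port" >/dev/null 2>&1; then
    echo "⚠️  Port $port is already in use"
  else
    echo "✅ Port $port is free"
  fi
done
```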
The maintenance scripts (cleanup-old-indices.sh, disk-usage-monitor.sh, ilm-15-day-retention.sh) require the following tools:
- jq: JSON processor for parsing Elasticsearch responses
- curl: Command-line tool for HTTP requests
- awk: Text processing tool for filtering indices
Installation Instructions:
| Tool | macOS | Linux (Debian/Ubuntu) | Linux (CentOS/RHEL) | Windows (WSL2) |
|---|---|---|---|---|
| jq | `brew install jq` | `sudo apt update && sudo apt install jq` | `sudo yum install epel-release && sudo yum install jq` | Follow the Linux instructions for your WSL2 distro |
| curl | Pre-installed (or `brew install curl`) | Pre-installed (or `sudo apt install curl`) | Pre-installed (or `sudo yum install curl`) | Pre-installed in most WSL2 distros |
| awk | Pre-installed (BSD awk) | Pre-installed (GNU awk) | Pre-installed (GNU awk) | Pre-installed in most WSL2 distros |
Notes:
- On macOS, install Homebrew if it is not already present: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`
- On Linux, ensure you have `sudo` privileges for package installation.
- On Windows, use WSL2 with a Linux distribution (e.g., Ubuntu) and follow the corresponding Linux instructions.
- Verify installation with `jq --version`, `curl --version`, and `awk --version`.
- ✅ macOS: Docker Desktop
- ✅ Linux: Docker Engine
- ✅ Windows: Docker Desktop with WSL2
- 🍺 macOS: `brew install podman podman-compose`
- 📦 Linux: Install podman, then `pip install podman-compose`
- 🪟 Windows: Podman Desktop
Before running the setup, you need to configure environment variables with strong passwords.
```bash
# Copy the example file
cp .env.example .env
```

Option A: Using OpenSSL (Recommended)
```bash
# Generate Elasticsearch password
openssl rand -base64 24

# Generate Kibana password
openssl rand -base64 24

# Generate Kibana encryption key (exactly 32 characters)
openssl rand -base64 32 | head -c 32

# Generate APM secret token
openssl rand -base64 24
```

Option B: Using /dev/urandom
```bash
# Generate random passwords
cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 24 | head -n 1
```

Open the `.env` file and replace all `changeme` values with the generated passwords:
```bash
nano .env  # or use vim, code, etc.
```

Example `.env` file:
```
ELASTIC_PASSWORD=your_generated_elastic_password_here
KIBANA_PASSWORD=your_generated_kibana_password_here
KIBANA_ENCRYPTION_KEY=your_32_character_encryption_key
FLEET_ENROLLMENT_TOKEN=
APM_SECRET_TOKEN=your_generated_apm_token_here
```

- Use a different password for each variable
- Passwords should be at least 16 characters long
- The Kibana encryption key must be exactly 32 characters
- Keep your `.env` file secure and never commit it to git (it is already in `.gitignore`)
- Save a backup of your passwords in a secure password manager
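If you want to generate and fill in all values in one step, a small helper along these lines works (an illustrative sketch, not part of the repository; it assumes `.env` was copied from `.env.example` and uses the variable names shown above — `sed -i` behaves slightly differently between GNU and BSD/macOS, which the `.bak` suffix papers over):

```bash
# Illustrative: generate fresh secrets and write them into .env
ELASTIC_PW=$(openssl rand -base64 24)
KIBANA_PW=$(openssl rand -base64 24)
KIBANA_KEY=$(openssl rand -base64 32 | head -c 32)
APM_TOKEN=$(openssl rand -base64 24)

sed -i.bak \
  -e "s|^ELASTIC_PASSWORD=.*|ELASTIC_PASSWORD=${ELASTIC_PW}|" \
  -e "s|^KIBANA_PASSWORD=.*|KIBANA_PASSWORD=${KIBANA_PW}|" \
  -e "s|^KIBANA_ENCRYPTION_KEY=.*|KIBANA_ENCRYPTION_KEY=${KIBANA_KEY}|" \
  -e "s|^APM_SECRET_TOKEN=.*|APM_SECRET_TOKEN=${APM_TOKEN}|" \
  .env
```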
The intelligent setup.sh script automatically detects your container runtime and handles all configuration:
```bash
# 🚀 Normal setup
./setup.sh

# 🧹 Clean installation (removes existing data and re-runs setup)
./setup.sh --clean

# 🗑️ Remove all containers and volumes (no re-setup)
./setup.sh --clean-only

# 📊 Check service status
./setup.sh --status

# 🛑 Stop all services
./setup.sh --stop

# ❓ Show help
./setup.sh --help
```

| Command | Requires .env | Function |
|---|---|---|
| `./setup.sh` | ✅ Yes | Start/set up all services |
| `./setup.sh --clean` | ✅ Yes | Remove data, then re-run setup |
| `./setup.sh --clean-only` | ❌ No | Only remove containers/volumes |
| `./setup.sh --status` | ❌ No | Show service status |
| `./setup.sh --stop` | ❌ No | Stop all services |
| `./setup.sh --help` | ❌ No | Show usage information |
Use Cases:
- First-time setup: `./setup.sh`
- Corrupted data / fresh start: `./setup.sh --clean`
- Just cleanup before a manual reinstall: `./setup.sh --clean-only`
- Check if services are running: `./setup.sh --status`
- Temporarily stop services: `./setup.sh --stop`
- 🔍 Auto-detects Docker or Podman
- 🔐 Configures security (passwords, tokens)
- ⏳ Waits for services to be ready
- 🧪 Health checks all components
- 📋 Displays service URLs and credentials
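The runtime detection follows a common shell pattern; here is a hedged sketch of the idea (illustrative only, not the actual `setup.sh` code):

```bash
# Illustrative runtime detection: prefer Docker, fall back to Podman
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  COMPOSE="docker compose"
elif command -v podman >/dev/null 2>&1; then
  COMPOSE="podman-compose"
else
  echo "❌ Neither Docker nor Podman found" >&2
  exit 1
fi
echo "Using: $COMPOSE"
```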
This setup includes scripts to manage Elasticsearch indices with a 15-day retention policy and monitor disk usage. Below is an overview of how to configure the ILM policy, when to use the provided scripts, and how to schedule automated cleanup.
The ilm-15-day-retention.sh script configures Index Lifecycle Management (ILM) policies to automatically manage and delete indices older than 15 days for APM, logs, traces, and metrics data. It creates policies with the following characteristics:
- Hot Phase: Indices are set to high priority (100) with a rollover after 1 day or when the primary shard reaches 10GB.
- Delete Phase: Indices are deleted after 15 days.
- Applicable Data: Covers APM traces, logs, metrics, and general logs/traces.
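For reference, a policy with exactly those characteristics can be created by hand like this (an illustrative sketch of the policy body — the script may structure its policies differently; `ELASTIC_PASSWORD` is assumed to be exported, and the policy name reuses `logs-15day-retention` mentioned below):

```bash
# Illustrative: a 15-day retention ILM policy matching the description above
curl -X PUT -u elastic:${ELASTIC_PASSWORD} \
  "http://localhost:9200/_ilm/policy/logs-15day-retention" \
  -H "Content-Type: application/json" \
  -d '{
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "set_priority": { "priority": 100 },
            "rollover": { "max_age": "1d", "max_primary_shard_size": "10gb" }
          }
        },
        "delete": {
          "min_age": "15d",
          "actions": { "delete": {} }
        }
      }
    }
  }'
```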
Before running the scripts, ensure they have executable permissions:

```bash
# Set executable permissions for maintenance scripts
chmod +x ilm-15-day-retention.sh cleanup-old-indices.sh disk-usage-monitor.sh
```

To configure the ILM policy:

```bash
# Run the ILM setup script
./ilm-15-day-retention.sh
```

What it does:
- Verifies Elasticsearch connectivity.
- Creates or updates ILM policies for various data types (e.g., `traces-apm.traces-15day-policy`, `logs-15day-retention`).
- Applies a default 15-day retention policy for new indices.
- Generates `disk-usage-monitor.sh` and `cleanup-old-indices.sh` for monitoring and cleanup tasks.
Note: Ensure the .env file contains the ELASTIC_PASSWORD before running the script. The script will exit with an error if the .env file or password is missing.
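To confirm the policies exist after the script runs, you can list them directly (assumes `jq` is installed and `ELASTIC_PASSWORD` is exported):

```bash
# List ILM policies whose names mention "15"
curl -s -u elastic:${ELASTIC_PASSWORD} "http://localhost:9200/_ilm/policy" \
  | jq -r 'keys[] | select(test("15"))'
```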
The following scripts help manage and monitor your Elasticsearch indices:
- `ilm-15-day-retention.sh`:
  - When to use: Run this script once after setting up Elasticsearch to configure the 15-day retention ILM policies, and again whenever the retention requirements change.
  - Usage: `./ilm-15-day-retention.sh`
  - Output: Displays the status of policy creation and lists all policies with 15-day retention.
- `disk-usage-monitor.sh`:
  - When to use: Use this script to monitor disk usage and identify the indices consuming the most space or those older than 15 days. Run it periodically to check the health of your Elasticsearch cluster, or when troubleshooting storage issues.
  - Usage: `./disk-usage-monitor.sh`
  - Output: Shows the top 20 indices by size, data stream information, and a list of APM-related indices older than 15 days.
- `cleanup-old-indices.sh`:
  - When to use: Use this script to manually delete indices older than 15 days. ⚠️ WARNING: This script permanently deletes data; review indices with `disk-usage-monitor.sh` before running it.
  - Usage: `./cleanup-old-indices.sh`
  - Output: Lists and deletes indices older than 15 days.

Note: Always review the output of `disk-usage-monitor.sh` before running `cleanup-old-indices.sh` to avoid accidental data loss. The cleanup script permanently deletes data without confirmation.
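For context, the core of such a cleanup can be sketched roughly as follows (an illustrative approximation built on the `_cat/indices` API, not the actual script; the 15-day cutoff mirrors the retention policy, and the `date` fallback covers GNU vs. BSD/macOS):

```bash
# Illustrative: list indices created more than 15 days ago
CUTOFF=$(date -d "15 days ago" +%s 2>/dev/null || date -v-15d +%s)
curl -s -u elastic:${ELASTIC_PASSWORD} \
  "http://localhost:9200/_cat/indices?h=index,creation.date&format=json" \
  | jq -r --arg cutoff "$CUTOFF" \
      '.[] | select((.["creation.date"] | tonumber / 1000) < ($cutoff | tonumber)) | .index'
```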
There are three different approaches to delete old indices from your Elasticsearch cluster:
This is the automated, production-ready approach where Elasticsearch manages data lifecycle automatically.
How it works:
1. Run the ILM setup script to create the policies:

   ```bash
   ./ilm-15-day-retention.sh
   ```

2. Attach policies to your indices (one-time manual step):

   ```bash
   # Example: attach a policy to an index via Kibana Dev Tools or curl
   curl -X PUT -u elastic:${ELASTIC_PASSWORD} \
     "http://localhost:9200/my-index-name/_settings" \
     -H "Content-Type: application/json" \
     -d '{"index.lifecycle.name": "logs-15day-retention"}'
   ```

3. Elasticsearch automatically handles:
   - Daily rollover (or at 10GB per primary shard)
   - Deletion after 15 days from rollover
Timeline:
- Day 0: Index created with policy attached
- Day 1: Rollover to new index
- Day 16: Old index automatically deleted
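You can check where a given index sits in this lifecycle with the ILM explain API (`my-index-name` is a placeholder):

```bash
# Show the current ILM phase and action for an index
curl -s -u elastic:${ELASTIC_PASSWORD} \
  "http://localhost:9200/my-index-name/_ilm/explain?pretty"
```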
Pros:
- ✅ Fully automated after initial setup
- ✅ Production-ready and reliable
- ✅ No manual intervention needed
- ✅ Elasticsearch handles everything
Cons:
- ⚠️ Only affects indices with policies attached
- ⚠️ Requires manual policy attachment for existing indices
- ⚠️ 15-day wait before the first deletion
Use the generated cleanup script to manually delete old indices on demand.
How it works:
1. Generate the cleanup script:

   ```bash
   ./ilm-15-day-retention.sh  # Creates cleanup-old-indices.sh
   ```

2. Review the indices that will be deleted (optional):

   ```bash
   ./disk-usage-monitor.sh
   ```

3. Run the script to delete the indices:

   ```bash
   ./cleanup-old-indices.sh
   ```
⚠️ WARNING: This will permanently delete all indices older than 15 days without confirmation!
Pros:
- ✅ Immediate deletion
- ✅ Works on existing indices without policies
- ✅ Full control over when cleanup happens
- ✅ Simple and straightforward
Cons:
- ⚠️ Manual execution required
- ⚠️ Deletion is permanent and irreversible
- ⚠️ No confirmation prompt before deletion
- ⚠️ You must remember to run it periodically
Automate the cleanup script to run on a schedule using cron.
How it works:
1. Verify the cleanup script is executable:

   ```bash
   chmod +x cleanup-old-indices.sh
   ```

2. Add it to your crontab:

   ```bash
   crontab -e
   ```

3. Add this line to run it daily at 2 AM:

   ```
   0 2 * * * /path/to/elastic-apm-quickstart/cleanup-old-indices.sh >> /path/to/elastic-apm-quickstart/cleanup.log 2>&1
   ```

4. Verify the cron job is scheduled:

   ```bash
   crontab -l
   ```
Pros:
- ✅ Automated daily cleanup
- ✅ Works with existing indices
- ✅ Logs output for monitoring
- ✅ No ILM policy setup needed
Cons:
- ⚠️ Requires cron access
- ⚠️ Less flexible than ILM
- ⚠️ Must ensure the script has correct permissions
- ⚠️ Needs monitoring to ensure it actually runs
- ⚠️ Deletes data automatically without manual review
| Scenario | Recommended Option |
|---|---|
| Production environment | Option 1: ILM Automatic |
| Need immediate cleanup | Option 2: Manual Script |
| Simple automated cleanup | Option 3: Cron Job |
| Testing/Development | Option 2: Manual Script |
| Large-scale deployment | Option 1: ILM Automatic |
Important: Regardless of which option you choose, always test in a non-production environment first and ensure you have backups of critical data.
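On the backup point: if you have a snapshot repository available, a minimal snapshot flow looks like this (an illustrative sketch, not part of this setup; the repository name and location are assumptions, and a filesystem repository requires `path.repo` to be configured in `elasticsearch.yml`):

```bash
# Illustrative: register a filesystem snapshot repository, then take a snapshot
curl -X PUT -u elastic:${ELASTIC_PASSWORD} \
  "http://localhost:9200/_snapshot/my_backup" \
  -H "Content-Type: application/json" \
  -d '{"type": "fs", "settings": {"location": "/usr/share/elasticsearch/backup"}}'

curl -X PUT -u elastic:${ELASTIC_PASSWORD} \
  "http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
```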
To automate the cleanup of indices older than 15 days, you can schedule cleanup-old-indices.sh to run via a cron job. Follow these steps:
1. Verify the script is executable:

   ```bash
   chmod +x cleanup-old-indices.sh
   ```

2. Add to cron:

   - Open the crontab editor:

     ```bash
     crontab -e
     ```

   - Add a cron job to run the script daily at 2 AM (adjust the time as needed):

     ```
     0 2 * * * /path/to/elastic-apm-quickstart/cleanup-old-indices.sh >> /path/to/elastic-apm-quickstart/cleanup.log 2>&1
     ```

     Replace `/path/to/elastic-apm-quickstart/` with the actual path to your project directory.

   - Save and exit the editor.

3. Verify the cron job:

   - Check the cron log (typically `/var/log/syslog` or `/var/log/cron` on Linux) to ensure the job runs as scheduled.
   - Review the `cleanup.log` file for output and any errors.

Note: Ensure the `.env` file is accessible to the cron job (in the project directory) and contains the correct `ELASTIC_PASSWORD`. Test the script manually first to confirm it works as expected.
If you prefer to run commands manually, use the appropriate compose command for your runtime:
For Docker:

```bash
# Start all services
docker compose up -d

# OR if using older docker-compose
docker-compose up -d
```

For Podman:

```bash
# Start all services
podman-compose up -d
```

Note: The automated setup script handles Elasticsearch initialization and password setup automatically. Manual setup requires additional steps for proper security configuration.
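One of those additional steps is setting the `kibana_system` user's password so Kibana can authenticate, which the setup script otherwise does for you; here is a minimal sketch using the Elasticsearch security API (assumes `ELASTIC_PASSWORD` and `KIBANA_PASSWORD` from your `.env` are exported):

```bash
# Set the kibana_system password so Kibana can connect to Elasticsearch
curl -X POST -u elastic:${ELASTIC_PASSWORD} \
  "http://localhost:9200/_security/user/kibana_system/_password" \
  -H "Content-Type: application/json" \
  -d "{\"password\": \"${KIBANA_PASSWORD}\"}"
```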
Once deployed, access your Elastic Stack services:
| Service | URL | Description |
|---|---|---|
| 🔍 Elasticsearch | http://localhost:9200 | Search and analytics engine |
| 📊 Kibana | http://localhost:5601 | Data visualization dashboard |
| 📈 APM Server | http://localhost:8200 | Application performance monitoring |
- Username: `elastic`
- Password: Set in your `.env` file (`ELASTIC_PASSWORD`)
- APM Secret Token: Set in your `.env` file (`APM_SECRET_TOKEN`)
- Kibana Encryption Key: Set in your `.env` file (`KIBANA_ENCRYPTION_KEY`)
⚠️ Security Note: Change these credentials for production use! Generate strong, unique passwords.
IMPORTANT: APM Server is configured with STRICT authentication requirements. All APM agents MUST include a valid secret token or API key.
Security Features:
- ✅ Secret token authentication required for all requests
- ✅ API key authentication enabled
- ✅ Anonymous access disabled
- ✅ RUM (Real User Monitoring) requires authentication
- All APM agents MUST provide the `APM_SECRET_TOKEN` from your `.env` file
- Requests without valid authentication will be rejected
- This prevents unauthorized data ingestion and protects your APM server
Testing Authentication:
```bash
# This should FAIL (no authentication)
curl http://localhost:8200/

# This should SUCCEED (with authentication)
curl -H "Authorization: Bearer YOUR_APM_SECRET_TOKEN" http://localhost:8200/
```

| Parameter | Value |
|---|---|
| APM Server URL | http://localhost:8200 |
| Secret Token | Your APM_SECRET_TOKEN from .env file |
| Service Name | your-app-name |
💡 Tip: Load the secret token from your `.env` file in production. Never hardcode tokens in your application code.
🟢 Node.js
```javascript
const apm = require('elastic-apm-node').start({
  serverUrl: 'http://localhost:8200',
  secretToken: process.env.APM_SECRET_TOKEN, // Load from environment variable
  serviceName: 'my-nodejs-app',
  serviceVersion: '1.0.0',
  environment: 'production'
});
```

Installation:

```bash
npm install elastic-apm-node
```

Environment Variable:

```bash
export APM_SECRET_TOKEN="your_secret_token_here"
```

🐍 Python
```python
import os
import elasticapm

apm = elasticapm.Client({
    'SERVER_URL': 'http://localhost:8200',
    'SECRET_TOKEN': os.getenv('APM_SECRET_TOKEN'),  # Load from environment variable
    'SERVICE_NAME': 'my-python-app',
    'SERVICE_VERSION': '1.0.0',
    'ENVIRONMENT': 'production'
})
```

Installation:

```bash
pip install elastic-apm
```

Environment Variable:

```bash
export APM_SECRET_TOKEN="your_secret_token_here"
```

☕ Java
```bash
# Using environment variables
-javaagent:elastic-apm-agent.jar
-Delastic.apm.server_urls=http://localhost:8200
-Delastic.apm.secret_token=${APM_SECRET_TOKEN}
-Delastic.apm.service_name=my-java-app
-Delastic.apm.service_version=1.0.0
-Delastic.apm.environment=production
```

Environment Variable:

```bash
export APM_SECRET_TOKEN="your_secret_token_here"
```

Download: Elastic APM Java Agent
🔷 .NET
```csharp
using Elastic.Apm;
using Elastic.Apm.NetCoreAll;

// In Startup.cs
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseAllElasticApm(Configuration);
    // ... other middleware
}
```

appsettings.json:

```json
{
  "ElasticApm": {
    "ServerUrl": "http://localhost:8200",
    "SecretToken": "${APM_SECRET_TOKEN}",
    "ServiceName": "my-dotnet-app",
    "ServiceVersion": "1.0.0",
    "Environment": "production"
  }
}
```

Or use an environment variable:

```bash
export ELASTIC_APM_SECRET_TOKEN="your_secret_token_here"
```

Installation:

```bash
dotnet add package Elastic.Apm.NetCoreAll
```

Note: For an authoritative list of changes and breaking notes, see the official Elastic Stack 9.2.0 release notes.
- Stability and performance improvements across the stack
- Ongoing security hardening and default-safe configurations
- APM and observability enhancements
- Refer to Elastic docs for full, version-specific details
Common Issues & Solutions
Port Conflicts:

```bash
# Check if ports are in use
lsof -i :9200,5601,8200

# Kill processes using these ports
sudo kill -9 $(lsof -t -i:9200)
```

Memory Issues:

```bash
# Check available memory
free -h

# Increase Docker memory limit (Docker Desktop)
# Settings → Resources → Memory → 4GB+
```

Log Analysis:

```bash
# Quick status check
./setup.sh --status

# Detailed logs
docker compose logs -f elasticsearch
docker compose logs -f kibana
docker compose logs -f apm-server
```

"Unable to retrieve version information"
Solution Steps:
1. Wait for Elasticsearch:

   ```bash
   curl -u elastic:${ELASTIC_PASSWORD} http://localhost:9200/_cluster/health
   ```

2. Verify the Kibana password:

   ```bash
   # Check if the kibana_system password is set
   curl -u elastic:${ELASTIC_PASSWORD} -X GET "http://localhost:9200/_security/user/kibana_system"
   ```

3. Reset if needed:

   ```bash
   ./setup.sh --clean
   ```
Connection & Token Issues
Verify APM Server:

```bash
# Test APM endpoint
curl -I http://localhost:8200

# Check APM server health
curl http://localhost:8200
```

Token Validation:

```bash
# Test with your secret token
curl -H "Authorization: Bearer ${APM_SECRET_TOKEN}" http://localhost:8200
```

Firewall Check:

```bash
# Test network connectivity
telnet localhost 8200
```

```bash
# 🔄 Restart everything
./setup.sh --stop && ./setup.sh --clean

# 📊 Health check
curl -u elastic:${ELASTIC_PASSWORD} http://localhost:9200/_cluster/health

# 📜 View all logs
docker compose logs -f

# 📋 Container status
docker ps -a
```
```bash
# 🚀 Start services
docker compose up -d
docker-compose up -d  # Legacy syntax

# 🛑 Stop services
docker compose down
docker-compose down  # Legacy syntax

# 📜 View logs
docker compose logs -f [service-name]
docker-compose logs -f [service-name]  # Legacy syntax

# 🧹 Reset everything (removes data)
docker compose down -v
docker-compose down -v  # Legacy syntax
```
```bash
# 🚀 Start services
podman-compose up -d

# 🛑 Stop services
podman-compose down

# 📜 View logs
podman-compose logs -f [service-name]

# 🧹 Reset everything
podman-compose down -v
```

```
📁 Elastic APM 9.2.0/
├── 📜 README.md                 # 📝 This documentation
├── 🚀 setup.sh                  # 🤖 Automated setup script
├── 🐳 docker-compose.yml        # 📦 Container orchestration
├── 🔐 .env                      # 🔑 Environment variables
├── 📈 apm-server.yml            # ⚙️ APM server configuration
├── 🧹 cleanup-old-indices.sh    # 🗑️ Script for cleaning old indices
├── 📊 disk-usage-monitor.sh     # 📈 Script for monitoring disk usage
└── 🔄 ilm-15-day-retention.sh   # 🔧 Script for configuring ILM policies
```
| File | Purpose | Description |
|---|---|---|
| `setup.sh` | 🤖 Automation | Intelligent setup script with runtime detection |
| `docker-compose.yml` | 📦 Orchestration | Service definitions and networking |
| `.env` | 🔑 Configuration | Passwords, tokens, and environment variables |
| `apm-server.yml` | ⚙️ APM Config | APM server-specific settings |
| `cleanup-old-indices.sh` | 🗑️ Index Cleanup | Deletes indices older than 15 days (dry-run by default) |
| `disk-usage-monitor.sh` | 📈 Disk Monitoring | Monitors index sizes and identifies old indices |
| `ilm-15-day-retention.sh` | 🔧 ILM Configuration | Configures 15-day retention policies for indices |
| Aspect | Development | Production |
|---|---|---|
| Passwords | 🔓 Default (provided) | 🔐 Custom secure passwords |
| SSL/TLS | ❌ HTTP only | ✅ HTTPS with valid certificates |
| Network | 🏠 Local access | 🔥 Firewall + VPN |
| Monitoring | 👀 Basic logging | 📊 Full observability |
- Change all passwords in the `.env` file
- Enable SSL/TLS certificates
- Configure proper network security
- Set up backup procedures
- Enable audit logging
- Configure monitoring alerts
- Review security settings
- 🔍 Runtime Detection: Automatically detects Docker/Podman
- 🔧 Service Management: Start, stop, status checking
- ❤️ Health Monitoring: Waits for services to be ready
- 🔐 Security Setup: Configures Kibana system user automatically
- 🧹 Clean Installation: Option to reset everything
- 📜 Comprehensive Logging: Detailed progress information
```bash
# 🎆 Normal setup
./setup.sh

# 🧹 Clean setup (removes all data)
./setup.sh --clean

# 📊 Check service status
./setup.sh --status

# 🛑 Stop all services
./setup.sh --stop

# ❓ Show help
./setup.sh --help
```

- 📚 Elastic Stack Documentation
- 📈 APM Server Reference
- 📊 Kibana User Guide
- 🔍 Elasticsearch Reference
- 🐳 Docker Compose Reference
- 🐳 Podman Documentation
Contributions are welcome! Please feel free to:
- 🔍 Report bugs or issues
- 💡 Suggest improvements
- 🔄 Submit pull requests
- 📝 Update documentation
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ by Siyam Sarker for the Elastic Stack community