Status: Lab / Proof-of-Concept • Scope: Education & Demos only • License: MIT
This repository describes and automates five containerized IBM MQ architectures for hands-on learning, prototyping, and demos. They intentionally favor clarity over hardening and are not intended for production.
Five designs are provided:
- Standalone QMs — spin up N independent queue managers for API experiments and admin demos.
- MFT Domain — canonical IBM MQ Managed File Transfer setup with Coordination, Command, and two Agents.
- Multi-Instance (MI) QM + VIP — one queue manager identity with Active/Standby containers on shared storage and an optional TCP VIP (HAProxy).
- Native-HA (Raft) QM + VIP — one queue manager identity distributed across three containers using MQ’s built-in Raft replication; optional VIP provides a stable client endpoint.
- Native-HA Cross-Region Replication (DR) — two Native-HA groups (primary + replica) with asynchronous log shipping for disaster recovery across regions.
All designs run on a single Docker host and default bridge network for simplicity.
- Architects and engineers learning IBM MQ topology options.
- Developers testing client configuration, reconnection, and admin interfaces.
- Pre-sales enablement and workshop labs.
Not for production: These patterns do not implement enterprise security, durability, or SRE operational guardrails.
- Docker Engine and Docker Compose v2 (`docker compose …`)
- Host OS with `bash` and `ss` (or `netstat`)
- Images:
  - MQ base image for Standalone, MI, and Native-HA
  - MQ Advanced (MFT) image for the MFT lab (`/opt/mqm/mqft/bin` must exist)
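A quick preflight along these lines can confirm the prerequisites before running any builder. It is only a sketch: `MFT_IMAGE` is a placeholder you set to whichever MQ Advanced (MFT) tag you plan to use for lab B.

```bash
# Preflight sketch: check Compose v2, a socket-stat tool, and the MFT CLI inside your chosen image.
docker compose version || { echo "Docker Compose v2 not available"; exit 1; }

command -v ss >/dev/null || command -v netstat >/dev/null \
  || echo "WARN: neither ss nor netstat found; the builders' port checks will not work"

# MFT_IMAGE is an assumption -- point it at the MQ Advanced image you intend to use for the MFT lab.
MFT_IMAGE="${MFT_IMAGE:?set MFT_IMAGE to your MQ Advanced (MFT) image tag}"
docker run --rm --entrypoint ls "$MFT_IMAGE" /opt/mqm/mqft/bin >/dev/null 2>&1 \
  && echo "MFT CLI present in $MFT_IMAGE" \
  || echo "WARN: /opt/mqm/mqft/bin not found in $MFT_IMAGE (the MFT lab needs MQ Advanced)"
```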
# A: Standalone
build_mq_qmgrs.sh
# B: MFT
build_mq_mft_qmgrs.sh
verify_mft.sh
# C: MI (Active/Standby) + optional VIP
build_mq_mi_qmgrs.sh
promote_standby.sh
verify_mi.sh
docker-compose.vip.yml # generated (MI variant)
haproxy/haproxy.cfg # generated (MI variant)
Makefile # generated (MI variant)
# D: Native-HA (Raft) 3-node + VIP
build_mq_nativeha.sh # generates docker-compose.nha.yml + VIP stack + Makefile
verify_nativeha.sh
docker-compose.nha.yml # generated by builder
docker-compose.vip.yml # generated (Native-HA variant)
haproxy/haproxy.cfg # generated (Native-HA variant)
Makefile # generated (Native-HA variant)
# A) Standalone QMs
chmod +x build_mq_qmgrs.sh
./build_mq_qmgrs.sh 3
# B) MFT Lab (requires MQ Advanced image)
chmod +x build_mq_mft_qmgrs.sh
./build_mq_mft_qmgrs.sh
./verify_mft.sh
# C) MIQM + VIP
chmod +x build_mq_mi_qmgrs.sh
./build_mq_mi_qmgrs.sh
./verify_mi.sh
# If VIP artifacts were generated:
make vip-up
# D) Native-HA (Raft) 3-node + VIP
chmod +x build_mq_nativeha.sh
./build_mq_nativeha.sh
make vip-up # starts HAProxy VIP for Native-HA
./verify_nativeha.sh
# Optional failover simulation:
./verify_nativeha.sh --simulate-failover
# E) Native-HA Cross-Region Replication (DR)
# Manual: see build_mq_crr.md for detailed steps
Start N independent queue managers for workshops, demos, and REST/Admin exploration—each with its own data volume.
- Single Docker host & bridge network.
- Each QM exposes 1414 (listener), optional 9443 (Admin Web), 9449 (Admin REST).
- No clustering, HA, or DR implied.
- Lab-level CHLAUTH defaults (permissive), no TLS; not secure.
- Local/bind storage durability and latency vary by host.
- No service discovery beyond Docker DNS.
| Script | Role | Key Behaviors | Important Vars | Quick Use |
|---|---|---|---|---|
| `build_mq_qmgrs.sh` | Provision N standalone QMs | Port checks; generates `docker-compose.yml`; creates `./data/QM*`; maps 1414/9443/9449; starts containers; prints summary | `NUM_QMGRS` (positional), `IMAGE_NAME`, `BASE_LISTENER_PORT=1414`, `BASE_WEB_PORT=9443`, `BASE_REST_PORT=9449`, `DATA_DIR` | `./build_mq_qmgrs.sh 3` → QM1..QM3 |
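Once the containers are up, quick checks like the following confirm a queue manager is reachable. Container name `qm1`, ports 1414/9449, and the `MQ_ADMIN_PASSWORD` variable mirror this lab's defaults; treat them as assumptions and adjust to your build.

```bash
# Confirm QM1 is running and answers MQSC inside its container.
docker exec qm1 bash -lc 'dspmq -m QM1 && echo "DISPLAY QMGR" | runmqsc QM1'

# Probe the listener port from the host (bash /dev/tcp avoids needing nc).
(exec 3<>/dev/tcp/localhost/1414) 2>/dev/null && echo "listener reachable on 1414"

# Optional: list queue managers via the Admin REST endpoint (port 9449 in this lab).
curl -sk -u admin:"${MQ_ADMIN_PASSWORD:?set to the password used at build time}" \
  "https://localhost:9449/ibmmq/rest/v1/admin/qmgr"
```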
flowchart LR
subgraph Host["Docker Host (single)"]
direction LR
subgraph Net["Docker Network"]
direction TB
QM1["qm1 : QM1\n1414 | 9443 | 9449"] --- V1[("Vol: ./data/QM1")]
QM2["qm2 : QM2\n1415 | 9444 | 9450"] --- V2[("Vol: ./data/QM2")]
QMn["qmN : QMN\n…"] --- Vn[("Vol: ./data/QMN")]
end
end
Clients((Clients/Tools)) -->|MQI/JMS| QM1
Clients --> QM2
Clients --> QMn
Admin[[Admin UI/REST]] --> QM1
Admin --> QM2
Admin --> QMn
Show a reference MFT domain with Coordination, Command, and two Agents: one local to a QM and one agent-only container using the MQ client.
- `QMCOORD` hosts Coordination.
- `QMCMD` hosts Command.
- `QMAGENT` hosts Agent Server + `AGENT_LCL`.
- `mftagent` runs `AGENT_REM` (no QM; connects to `QMAGENT`).
- Requires MQ Advanced (MFT) image.
- DEV channel (`DEV.APP.SVRCONN`) and relaxed CHLAUTH for labs; tighten in prod.
- File paths are container filesystems; no enterprise file governance.
| Script | Role | Key Behaviors | Important Vars | Quick Use |
|---|---|---|---|---|
| `build_mq_mft_qmgrs.sh` | Provision 4-container MFT lab | Creates QMs & agent-only container; defines DEV listener/channel; sets up Coordination/Command; creates & starts `AGENT_LCL` and `AGENT_REM`; prints summary | `IMAGE_NAME/TAG` (must include MFT), `COORD_QM`, `CMD_QM`, `AGENT_QM`, `MFT_DOMAIN`, `AGENT_LOCAL_NAME`, `AGENT_REMOTE_NAME`, `PORT_*`, `MQ_ADMIN_PASSWORD`, `MQ_APP_PASSWORD` | `./build_mq_mft_qmgrs.sh` |
| `verify_mft.sh` | End-to-end MFT smoke test | Prepares files; runs `fteCreateTransfer` REM→LCL; validates contents; optional reverse/wildcard tests; tails agent logs on error | `CMD_CNAME`, `COORD_CNAME`, `AGENT_CNAME`, `REM_CNAME`, `DOMAIN`, `AGENT_LCL`, `AGENT_REM` | `./verify_mft.sh` |
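As a manual illustration of what `verify_mft.sh` automates, a single remote-to-local transfer can be driven like this. Container, agent, and queue manager names follow this lab's defaults; the file paths are arbitrary examples.

```bash
# Create a test file inside the client-only agent container (source side, AGENT_REM).
docker exec mftagent bash -lc 'echo "hello from AGENT_REM" > /tmp/mft-demo.txt'

# Submit the transfer from the Command container and wait up to 60s for completion.
docker exec qmcmd bash -lc 'fteCreateTransfer \
  -sa AGENT_REM -sm QMAGENT \
  -da AGENT_LCL -dm QMAGENT \
  -de overwrite -w 60 \
  -df /tmp/mft-demo-copy.txt /tmp/mft-demo.txt'

# Confirm the copy landed on the local agent side (destination side, AGENT_LCL).
docker exec qmagent bash -lc 'cat /tmp/mft-demo-copy.txt'
```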
flowchart LR
subgraph Net["Docker Network"]
direction TB
QMCOORD["qmcoord : QMCOORD\nMFT Coordination"]:::qm
QMCMD["qmcmd : QMCMD\nMFT Command"]:::qm
QMAGENT["qmagent : QMAGENT\nAgent Server + AGENT_LCL"]:::qm
AGENTREM["mftagent : AGENT_REM\n(no QM; MQ client)"]:::agent
end
classDef qm stroke:#6b8cff,fill:#eef3ff
classDef agent stroke:#7c4dff,fill:#f7f2ff
AGENT_LCL[[AGENT_LCL]]:::agent --> QMAGENT
AGENT_REM[[AGENT_REM]]:::agent --> AGENTREM
AGENT_LCL -.register/status.-> QMCOORD
AGENT_REM -.register/status.-> QMCOORD
QMCMD -->|fteCreateTransfer/Monitors| QMCOORD
QMCMD --> AGENT_LCL
QMCMD --> AGENT_REM
AGENTREM -->|SVRCONN qmagent:1414| QMAGENT
AGENT_REM ===>|file copy| AGENT_LCL
Demonstrate MIQM: one ACTIVE and one STANDBY instance sharing the same POSIX-locking storage (e.g., NFSv4/EFS). Optional VIP (HAProxy) presents a stable TCP endpoint.
- Single QM identity (e.g., `QM1`) across two containers.
- Both mount the same `/mnt/mqm`; only the ACTIVE opens the listener.
- HAProxy checks TCP and routes to the ACTIVE.
- Storage must support byte-range locking (NFSv4); misconfiguration can cause failover issues.
- Standby has no listener; health is inferred via TCP.
- RDQM isn't supported in vanilla containers.
| Script | Role | Key Behaviors | Important Vars | Quick Use |
|---|---|---|---|---|
| `build_mq_mi_qmgrs.sh` | Provision 1 ACTIVE + 1 STANDBY MIQM | Creates `QM1` on shared storage (bind or NFS); starts ACTIVE on `qm1a` (`strmqm QM1`) and STANDBY on `qm1b` (`strmqm -x QM1`); defines DEV channel/listener; prints ports. VIP-integrated variant also generates `docker-compose.vip.yml`, `haproxy/haproxy.cfg`, and a `Makefile`. | `IMAGE_NAME/TAG`, `QM_NAME`, `PORT_ACTIVE`, `PORT_STANDBY`, `NFS_SERVER`, `NFS_EXPORT` | `./build_mq_mi_qmgrs.sh` |
| `promote_standby.sh` | Controlled role swap | Ends current ACTIVE (immediate/quiesce), waits for promotion, restarts old active as STANDBY | `A_CNAME`, `B_CNAME`, `QM_NAME`, `MODE=immediate\|quiesce`, `TIMEOUT` | `./promote_standby.sh` |
| `verify_mi.sh` | MI-only verifier | Confirms roles; verifies shared storage hash (`qm.ini`); checks TCP behavior (active accepts / standby refuses); optional VIP probe; optional failover simulation | `MI_A`, `MI_B`, `QM_NAME`, `PORT_ACTIVE`, `PORT_STANDBY`, `VIP_*` | `./verify_mi.sh` |
| (VIP artifacts) | Optional VIP (HAProxy) | `docker-compose.vip.yml` + `haproxy/haproxy.cfg` + `Makefile` targets: `vip-up`, `vip-down`, `promote`, `status` | `VIP_PORT` (default `14150`), `VIP_STATS_PORT`, `VIP_USER`, `VIP_PASS` | `make vip-up` |
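For a manual look at what `verify_mi.sh` and `promote_standby.sh` automate, the standard MQ commands below show instance roles and drive a controlled switchover (container and QM names per this lab).

```bash
# Show which instance is Running (active) and which is Running as standby.
docker exec qm1a bash -lc 'dspmq -x -m QM1'
docker exec qm1b bash -lc 'dspmq -x -m QM1'

# Controlled switchover: end the active instance immediately and hand over to the standby.
docker exec qm1a bash -lc 'endmqm -is QM1'

# After a few seconds the former standby should report Running; restart the old active as standby.
docker exec qm1b bash -lc 'dspmq -x -m QM1'
docker exec qm1a bash -lc 'strmqm -x QM1'
```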
flowchart LR
Client((Clients)) -- MQI/JMS --> VIP[[HAProxy VIP :14150]]
subgraph Containers
direction LR
A["qm1a : QM1 (ACTIVE)\nListener 1414"]:::active
B["qm1b : QM1 (STANDBY)\nNo listener"]:::standby
end
subgraph Storage["Shared Storage (NFSv4/EFS)\nPOSIX byte-range locks required"]
VOL[("/mnt/mqm : QM1 data + logs")]
end
VIP -->|tcp| A
VIP -.failover.-> B
A --- VOL
B --- VOL
classDef active stroke:#28a745,fill:#eaffea
classDef standby stroke:#c0a000,fill:#fffbe6
Note: RDQM relies on kernel modules and is not supported in containers. This lab uses IBM MQ’s Native-HA (Raft) — supported for container learning and demos.
Demonstrate MQ’s Native-HA (Raft) with three containers forming one queue manager identity (one Active, two Replica). An optional HAProxy VIP provides a single, stable client endpoint that always targets the Active.
- Three containers (e.g., `qmha-a`, `qmha-b`, `qmha-c`), each hosting a local instance of the same queue manager (e.g., `QMHA`).
- Raft replicates log/state across nodes; one Active at a time, two Replicas.
- HAProxy health-checks MQ TCP and forwards clients to the current Active.
- Lab defaults: open DEV channel, no TLS, simple passwords; not secure.
- Intended for on-host demos; for supported production, use Kubernetes/OpenShift with the MQ Operator.
| Script | Role | Key Behaviors | Important Vars | Quick Use |
|---|---|---|---|---|
| `build_mq_nativeha.sh` | Provision 3-node Native-HA QM + generate VIP | Creates `qmha-a/b/c` with `MQ_NATIVE_HA=true`; writes INI fragments under `/etc/mqm` for Raft peers; opens DEV listener; generates VIP stack (`docker-compose.vip.yml`, `haproxy/haproxy.cfg`) and a `Makefile` | `IMAGE_NAME/TAG`, `QM_NAME` (default `QMHA`), per-node host ports (`PORT_A/B/C`), `REPL_PORT`, `VIP_PORT` (default `14180`), `VIP_STATS_PORT`, `VIP_USER`, `VIP_PASS` | `./build_mq_nativeha.sh` then `make vip-up` |
| `verify_nativeha.sh` | Native-HA + VIP verifier | Confirms roles (Active/Replica), checks per-node listener behavior, verifies VIP TCP; performs client put/get via VIP; optional failover simulation and recheck | `PORT_A/B/C`, `VIP_PORT`, `VIP_NAME` | `./verify_nativeha.sh` · `./verify_nativeha.sh --simulate-failover` |
| (VIP artifacts) | Optional VIP (HAProxy) | `docker-compose.vip.yml` + `haproxy/haproxy.cfg` + `Makefile` targets: `vip-up`, `vip-down`, `vip-reload`, `status`, `verify`, `failover` | `VIP_PORT` (default `14180`), `VIP_STATS_PORT`, `VIP_USER`, `VIP_PASS` | `make vip-up` |
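The same checks can be done by hand for Native-HA. The `dspmq -o nativeha` view is available on recent MQ levels (plain `dspmq` is used as a fallback), and stopping a container is the blunt equivalent of `./verify_nativeha.sh --simulate-failover`.

```bash
# Inspect each node's view of the Native-HA group (QM and container names per this lab).
for n in qmha-a qmha-b qmha-c; do
  echo "== $n =="
  docker exec "$n" bash -lc 'dspmq -o nativeha -m QMHA 2>/dev/null || dspmq -m QMHA'
done

# Blunt failover test: stop whichever node is currently Active (qmha-a right after a fresh build),
# wait for the Raft group to elect a new Active, then bring the node back as a Replica.
docker stop qmha-a
sleep 10
docker exec qmha-b bash -lc 'dspmq -o nativeha -m QMHA 2>/dev/null || dspmq -m QMHA'
docker start qmha-a
```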
flowchart LR
Client((Clients)) -- MQI/JMS --> VIP[[HAProxy VIP :14180]]
subgraph Cluster["Native-HA (Raft) Group"]
direction LR
A["qmha-a : QMHA\nROLE: Active/Replica\nListener 1414"]:::nha
B["qmha-b : QMHA\nROLE: Replica\nNo listener"]:::nha
C["qmha-c : QMHA\nROLE: Replica\nNo listener"]:::nha
end
VIP -->|tcp| A
VIP -.failover.-> B
VIP -.failover.-> C
A <-. Raft Replication .-> B
A <-. Raft Replication .-> C
B <-. Raft Replication .-> C
classDef nha stroke:#0d6efd,fill:#eef5ff,stroke-width:1.2px
Demonstrate cross-region replication (CRR) for a Native-HA queue manager using two Docker Compose stacks: a primary region and a replica region. MQ asynchronously ships Raft log updates from the primary to the replica to enable disaster recovery.
- Region A and Region B each host a three-node Native-HA group for the same queue manager identity.
- Replication is asynchronous; only Region A serves clients until a planned or unplanned failover.
- Optional HAProxy VIPs can front the active instance in each region.
- Requires an IBM MQ 9.4 or later container image.
- CRR adds network and storage overhead; promotion requires the replica to be fully synchronized.
- This lab runs all nodes on one host by default—adjust for true multi-host or multi-region tests.
flowchart LR
Clients((Clients)) --> VIPA[[Region A VIP :14180]]
Clients -.DR failover.-> VIPB[[Region B VIP :14180]]
subgraph RegionA["Region A (Primary)"]
direction LR
A1["qmha-a : QMHA\\nROLE: Active/Replica\\nListener 1414"]:::nha
A2["qmha-b : QMHA\\nROLE: Replica\\nNo listener"]:::nha
A3["qmha-c : QMHA\\nROLE: Replica\\nNo listener"]:::nha
end
subgraph RegionB["Region B (Replica)"]
direction LR
B1["qmha-dr-a : QMHA\\nROLE: Replica\\nNo listener"]:::nha
B2["qmha-dr-b : QMHA\\nROLE: Replica\\nNo listener"]:::nha
B3["qmha-dr-c : QMHA\\nROLE: Replica\\nNo listener"]:::nha
end
VIPA -->|tcp| A1
VIPA -.failover.-> A2
VIPA -.failover.-> A3
VIPB -->|tcp| B1
VIPB -.failover.-> B2
VIPB -.failover.-> B3
%% Raft replication within regions
A1 <-.Raft.-> A2
A1 <-.Raft.-> A3
A2 <-.Raft.-> A3
B1 <-.Raft.-> B2
B1 <-.Raft.-> B3
B2 <-.Raft.-> B3
%% Cross-region replication
A1 -.CRR (async).-> B1
A1 -.-> B2
A1 -.-> B3
A2 -.-> B1
A2 -.-> B2
A2 -.-> B3
A3 -.-> B1
A3 -.-> B2
A3 -.-> B3
classDef nha stroke:#0d6efd,fill:#eef5ff,stroke-width:1.2px
- Create separate `primary/` and `dr/` directories with Docker Compose files (see `build_mq_crr.md`).
- Bring up both stacks with `docker compose up -d`.
- Inside a primary node, register the replicas:
  `docker exec -it qmha-a bash -lc "mqcli crtmqha --dr-replica qmha-dr-a:1500 qmha-dr-b:1500 qmha-dr-c:1500"`
- Verify roles with `dspmq` and test message flow. Simulate failover by stopping Region A and promoting Region B (`mqcli rdqm --promote`); see the DR drill sketch after this list.
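A DR drill under this lab's naming might look like the sketch below. The compose file locations are assumptions (use whatever layout `build_mq_crr.md` gives you), the `dspmq -o nativeha` view depends on your MQ level, and the promote command is the one quoted in the steps above.

```bash
# Compare Native-HA status in both regions (container names per this lab's diagrams).
for n in qmha-a qmha-dr-a; do
  echo "== $n =="
  docker exec "$n" bash -lc 'dspmq -o nativeha -m QMHA 2>/dev/null || dspmq -m QMHA'
done

# DR drill: take the primary region down, then promote the replica region.
docker compose -f primary/docker-compose.yml down    # path is an assumption; match your layout

docker exec qmha-dr-a bash -lc 'mqcli rdqm --promote'        # promotion step from build_mq_crr.md
docker exec qmha-dr-a bash -lc 'dspmq -o nativeha -m QMHA'   # a Region B node should now be Active
```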
These assets are educational. They intentionally use permissive channel rules, no TLS, and simple credentials for approachability. For any environment beyond a lab:
- Enforce TLS on channels and admin endpoints; use managed PKI.
- Replace DEV channels & CHLAUTH relaxations with least-privilege mappings.
- Integrate identity (LDAP/OIDC), logging (SIEM), backup/restore, monitoring/alerting.
- Use durable, compliant storage and multi-AZ/host HA patterns appropriate to your risk profile.
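As one illustrative hardening sketch (not applied by any of the lab scripts), the MQSC below replaces the open DEV access pattern on the standalone lab's `QM1` with a TLS server-connection channel and a least-privilege CHLAUTH mapping. The channel name, certificate DN, and `appuser` identity are hypothetical, and the queue manager also needs a key repository and certificate before TLS will work.

```bash
# Hypothetical hardening example against the standalone lab's QM1 container.
docker exec -i qm1 runmqsc QM1 <<'MQSC'
* TLS-only application channel; clients must present a certificate.
DEFINE CHANNEL(APP.TLS.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) +
       SSLCIPH(ANY_TLS12_OR_HIGHER) SSLCAUTH(REQUIRED) REPLACE
* Map a known client certificate DN to a low-privilege OS identity.
SET CHLAUTH('APP.TLS.SVRCONN') TYPE(SSLPEERMAP) +
    SSLPEER('CN=app-client,O=Example') USERSRC(MAP) MCAUSER('appuser') ACTION(REPLACE)
* Block the permissive DEV.* channels entirely.
SET CHLAUTH('DEV.*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) ACTION(REPLACE)
MQSC
```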
- Ports in use → Adjust `PORT_*` variables or stop conflicting services (see the port check below).
- MFT CLI missing → Use an MQ Advanced image; verify `/opt/mqm/mqft/bin`.
- MI failover doesn't occur → Verify NFSv4 with locking; both MI containers must mount the same path; check `AMQERR01.LOG`.
- Native-HA roles don't settle → Check INI fragments under `/etc/mqm`, container hostnames (must match Raft peer entries), and container network reachability on the replication port.
- VIP not routing to Active → Confirm HAProxy is up (`make status`) and that only the Active is listening on 1414; check `haproxy/haproxy.cfg` health checks.
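The "ports in use" case can be checked up front with the same tools the builders use; the port list below covers this repo's defaults (1414/9443/9449 per QM, VIPs on 14150 and 14180).

```bash
# Report any default lab ports already bound on the host (ss preferred, netstat as fallback).
for p in 1414 9443 9449 14150 14180; do
  if ss -ltn 2>/dev/null | grep -q ":$p " || netstat -ltn 2>/dev/null | grep -q ":$p "; then
    echo "port $p is in use -- adjust the matching PORT_*/VIP_PORT variable or free the port"
  fi
done
```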
# General
docker compose down --remove-orphans
# Standalone
sudo rm -rf ./data
# MFT
sudo rm -rf ./data ./mft
# MI + VIP
make vip-down || true
rm -f docker-compose.vip.yml
rm -rf haproxy
sudo rm -rf ./shared # ⚠️ deletes MIQM data
# Native-HA + VIP
make vip-down || true
rm -f docker-compose.nha.yml docker-compose.vip.yml
rm -rf haproxy
sudo rm -rf ./nha # ⚠️ deletes Native-HA data for all three nodes
MIT © rob lee. See script headers for SPDX identifiers.