Load balancing the load balancer (dockerized?) #183
Unanswered
markschmid asked this question in Q&A
Replies: 1 comment
-
Hey Mark, let me make sure I understand your requirements:
Assuming that's true, you'll have two layers that need load balancing/proxying:
-
Hey
I'm struggling to find good information on this, so I'm trying to at least get your opinion (or even some advice):
It seems pretty standard to load balance containerized services, i.e. to "put an nginx/haproxy/etc in front of your docker (swarm)". Yet, to make the load balancing layer itself highly available, you may need to run multiple instances of that reverse proxy / load balancer as well.
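For reference, by the standard pattern I mean roughly the following: an nginx upstream block fanning requests out across the swarm nodes, which then route internally. This is only a sketch; the hostnames and ports are placeholders, not anything from a real setup:

```nginx
# Minimal sketch: nginx as the proxy layer in front of a docker swarm.
# swarm-node-1..3 and port 8080 are hypothetical placeholders.
upstream swarm_services {
    server swarm-node-1:8080;
    server swarm-node-2:8080;
    server swarm-node-3:8080;
}

server {
    listen 80;

    location / {
        # Any swarm node can accept the request; the swarm's routing
        # mesh forwards it to a container running the service.
        proxy_pass http://swarm_services;
    }
}
```

The question below is about what happens one layer up: this nginx is itself a single point of failure.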
In my case, I don't have the option to use ready-made cloud load balancer instances (e.g. AWS). Given that, I can run my nginx/haproxy/etc:
1. "natively" (as their own nginx/haproxy/etc process) on their respective compute instance, or
2. dockerized (non-swarm-mode) on their respective compute instance, or
3. dockerized in a swarm with at least 3 manager nodes, or
4. yet another option.
Are there reasons not to run the load balancers dockerized as well (e.g. in a dedicated swarm just for the load balancing layer)?
If docker, would you run those load balancer services in the same swarm as the load-balanced services, or in a separate swarm one tier "higher up" in the service design?
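Whichever of the options above I pick, my current understanding is that without a cloud load balancer the usual way to make two proxy instances highly available is a floating virtual IP via VRRP, e.g. with keepalived. A minimal sketch of what I have in mind, with hypothetical interface name, router ID, and address:

```
# /etc/keepalived/keepalived.conf on the MASTER proxy node (sketch).
# The BACKUP node would use "state BACKUP" and a lower priority.
# eth0, virtual_router_id 51, and 192.0.2.10 are placeholders.
vrrp_instance LB_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24
    }
}
```

With this, clients target the virtual IP; if the master proxy dies, the backup takes over the address, so the load balancing layer itself survives a node failure.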
I hope you can make sense of my question; thanks in advance.
Mark