Unable to limit the resource allocation to each node - "master, worker1 and worker2" #103

Open
bisha007a opened this issue Sep 6, 2020 · 2 comments

Comments

@bisha007a

Hi Team,
I am unable to restrict or customize the resources consumed by each node, and I did not find a parameter for it anywhere.

spark-worker-2    | 20/09/06 13:39:01 INFO Worker: Starting Spark worker 172.18.0.4:39983 with 3 cores, 4.8 GB RAM
spark-master      | 20/09/06 13:39:01 INFO Master: Registering worker 172.18.0.4:39983 with 3 cores, 4.8 GB RAM

Is there an option to restrict the resource consumption?

Below is my YAML. I am running Docker on Ubuntu.

version: "3.8"
services:
  spark-master:
    image: bde2020/spark-master:2.4.5-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
      - "constraint:node==<yourmasternode>"
  spark-worker-1:
    image: bde2020/spark-worker:2.4.5-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "50007:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "constraint:node==<yourworkernode>"
  spark-worker-2:
    image: bde2020/spark-worker:2.4.5-hadoop2.7
    container_name: spark-worker-2
    depends_on:
      - spark-master
    ports:
      - "50009:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "constraint:node==<yourworkernode>
```"

@ianp1

ianp1 commented Nov 3, 2020

I found you can use docker-compose's resource limitation feature.
For example, you could add

spark-worker-2:
  image: bde2020/spark-worker:2.4.5-hadoop2.7
  mem_limit: 500m
  cpus: 0.5

which would limit the worker to 500 MB of memory and half of a single CPU core. This seemed to work during my tests; however, it may not be the preferred way of doing it.

@marciosdn

To use mem_limit/cpus, we must use a Compose file format version under 3.
Because my docker-compose version is 1.17.1, I used file format 2.2:

version: '2.2'
services:
  spark-master:
    image: bde2020/spark-master:3.1.1-hadoop3.2
    container_name: spark-master
    cpus: 1
    mem_limit: 1000M

I read about using docker-compose --compatibility up with deploy/resources, but I could not test it:
How to specify Memory & CPU limit in docker compose version 3
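
For reference, a rough sketch of the file format 3 variant that --compatibility is meant to translate (untested here, placeholder values):

version: "3.8"
services:
  spark-worker-1:
    image: bde2020/spark-worker:2.4.5-hadoop2.7
    deploy:
      resources:
        limits:
          # with --compatibility, docker-compose maps these limits back to
          # the non-swarm cpus/mem_limit settings
          cpus: "0.5"
          memory: 500M

Running docker-compose --compatibility up should then apply the limits without needing Swarm mode.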
