Concurrency Limit apalis 0.5.0 #252
-
Hello! Since version 0.5.0, we have lost the concurrency-limit feature introduced by this PR. What I do not understand is: what is the point of having this setup, then?
```rust
let redis_connection = make_redis_connection(&settings.broker).await?;
let config = {
    let mut config = Config::default();
    config.set_buffer_size(1);
    config.set_fetch_interval(Duration::from_millis(100));
    config.set_max_retries(3);
    config.set_keep_alive(Duration::from_secs(120));
    config
};
let storage = RedisStorage::new_with_config(redis_connection, config);
let worker_msintuit = WorkerBuilder::new(format!("worker-{}", rng.gen::<u16>()))
    .data(app_state.clone())
    .with_storage(storage)
    .build_fn(handler);
```
Thank you :)
Replies: 4 comments 5 replies
-
0.5 brought clarity between concurrency and parallelism. Internally, here is what happens: a job is only polled when the service is ready, meaning that if you have layers like `ConcurrencyLayer`, they will be checked first. `register_with_count` offers concurrency: each of the workers follows the same process as above. If you only need one worker, have you considered just using `register`? The change you are running into is that the service is now `Clone`, and each worker gets a clone of the service. The config you provided handles the interaction between the Backend and the Storage; the tower layers and `register` control the worker. These are separate concerns.
-
See also
-
You may not want a `buffer_size` of 1 anymore; that just means more requests to Redis. You should prefer the layer approach to control workers.
The fetch interval does not affect workers either; it just tells the backend to fetch one job every 100 ms.
-
I'm working on controlling concurrency in my application too, and I want to check that my understanding is correct. It seems there is no source of back pressure for a worker unless you configure a concurrency limit with tower's `ConcurrencyLimitLayer`, a rate-limit layer, or some other layer that stops the service from being ready. Without that, a single worker would consume everything on the queue as fast as it could and spawn a task for each job. Is that correct? I am using a concurrency-limit layer, but I'm trying to understand whether there is some other source of back pressure I am missing.