Add non-blocking bulkheads #230
Open
This PR adds support for four non-blocking Resilience4j-style semaphore bulkheads with an adaptive concurrency limit (determined by a provided `Limiter`). Tasks are put on `Queue`(s) and then dispatched by calling threads or by threads that complete dispatched tasks. There are four bulkhead implementations:

- `RoundRobinDispatcherBulkhead`: a parallel dispatcher for unrestricted contexts; does not preserve FIFO order.
- `FifoParallelDispatcherBulkhead`: a parallel dispatcher for `Void` contexts; maintains FIFO order.
- `FifoSerialDispatcherBulkhead`: a serial dispatcher for unrestricted contexts; preserves FIFO order.
- `EnumContextPartialFifoOrderBulkhead`: a parallel dispatcher for `Enum` contexts; preserves a partial FIFO order (i.e., FIFO order per context instance is maintained).

The work on this PR is not complete. To complete it, I would like to know whether you are willing to maintain it, and whether you feel the contribution works as intended and is generally useful. One key aspect of the review should be whether dispatching makes guaranteed progress (e.g., with respect to `maxDispatchPerCall`).

I think the contribution is useful because the current code base provides either blocking behavior (with the usual downsides) or non-queueing behavior that can cause high tail latencies, since the required retries worsen the execution order of tasks. This work addresses both concerns.
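To illustrate the dispatch mechanism, here is a minimal sketch of the idea (not code from this PR; only `Limiter` and `Limiter.Listener` are the existing concurrency-limits API, and names such as `DispatchSketch` and the `maxDispatchPerCall` constructor argument are illustrative):

```java
import com.netflix.concurrency.limits.Limiter;

import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of a non-blocking bulkhead: tasks are queued, and both submitting
// threads and threads that complete a dispatched task try to drain the queue.
// Each dispatch attempt is bounded by maxDispatchPerCall, so every caller does
// a bounded amount of work while dispatching still makes progress overall.
final class DispatchSketch {
    private final Limiter<Void> limiter;                 // adaptive concurrency limit
    private final int maxDispatchPerCall;                // progress bound per dispatch attempt
    private final Queue<Consumer<Limiter.Listener>> backlog = new ConcurrentLinkedQueue<>();

    DispatchSketch(Limiter<Void> limiter, int maxDispatchPerCall) {
        this.limiter = limiter;
        this.maxDispatchPerCall = maxDispatchPerCall;
    }

    <T> CompletionStage<T> executeCompletionStage(Supplier<? extends CompletionStage<T>> task) {
        CompletableFuture<T> result = new CompletableFuture<>();
        backlog.add(permit -> start(task, result, permit));
        dispatch();                                      // the submitting thread helps drain the queue
        return result;
    }

    private void dispatch() {
        for (int i = 0; i < maxDispatchPerCall; i++) {
            Optional<Limiter.Listener> permit = limiter.acquire(null);
            if (!permit.isPresent()) {
                return;                                  // limit reached; a completing task will retry
            }
            Consumer<Limiter.Listener> next = backlog.poll();
            if (next == null) {
                permit.get().onIgnore();                 // nothing queued; return the unused permit
                return;
            }
            next.accept(permit.get());
        }
    }

    private <T> void start(Supplier<? extends CompletionStage<T>> task,
                           CompletableFuture<T> result,
                           Limiter.Listener permit) {
        task.get().whenComplete((value, error) -> {
            if (error != null) {
                permit.onIgnore();                       // in this sketch, failures do not feed the limit
                result.completeExceptionally(error);
            } else {
                permit.onSuccess();
                result.complete(value);
            }
            dispatch();                                  // the completing thread dispatches further tasks
        });
    }
}
```

This sketch is closest in spirit to `RoundRobinDispatcherBulkhead`: it does not guarantee FIFO order under contention, and the actual implementations additionally handle contexts, ordering, and builder configuration.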
ToDo:
- … `AbstractDispatcherBulkhead.AbstractBuilder`.
- … (similar to `BlockingAdaptiveExecutorSimulation`).
- Update `README.md` with usage information.
- … (the `checkstyle.xml` config file is too old for me to run in IDEA).
- … `Builder` methods.
- … `Bulkhead` are located in an `api` module (e.g., in `com.netflix.concurrency-limits:concurrency-limits-api`). For consistency, I put it in the `core` module; moving all interfaces to a new `api` module might be a good idea. This might even be possible in a backwards-compatible way by setting a Gradle `api` dependency on every subproject that uses the new `api` subproject?
- … `Bulkhead#executeCompletionStage(...)`.
- There is one interface, `LowCardinalityContextBulkhead`, and one implementation, `EnumContextPartialFifoOrderBulkhead`; it might make sense to add a generic implementation for any context type with low cardinality, based on a `ConcurrentMap`, that creates bulkheads on the fly (see the sketch after this list).
- If you think there is a use case for a LIFO backlog, please let me know; I could refactor the code a bit and implement one with a deque.
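Regarding the generic low-cardinality implementation mentioned above, a rough sketch of the `ConcurrentMap` idea (the names `MapContextBulkhead` and the nested `Bulkhead` interface are placeholders, not classes from this PR):

```java
import java.util.Map;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Supplier;

// Sketch of a generic bulkhead for low-cardinality contexts: one delegate
// bulkhead per context value, created on demand, which yields partial FIFO
// order (FIFO per context value) as long as each delegate preserves FIFO.
final class MapContextBulkhead<C> {

    /** Minimal single-context view, standing in for whatever interface the PR settles on. */
    interface Bulkhead {
        <T> CompletionStage<T> executeCompletionStage(Supplier<? extends CompletionStage<T>> task);
    }

    private final Map<C, Bulkhead> delegates = new ConcurrentHashMap<>();
    private final Function<C, Bulkhead> factory;        // builds a delegate for a new context value

    MapContextBulkhead(Function<C, Bulkhead> factory) {
        this.factory = factory;
    }

    <T> CompletionStage<T> executeCompletionStage(C context,
                                                  Supplier<? extends CompletionStage<T>> task) {
        // computeIfAbsent keeps creation race-free; with low-cardinality contexts
        // the map stays small, so no eviction is attempted in this sketch.
        return delegates.computeIfAbsent(context, factory).executeCompletionStage(task);
    }
}
```

Compared with `EnumContextPartialFifoOrderBulkhead`, which is limited to `Enum` contexts, a map-based variant would support arbitrary low-cardinality context types at the cost of creating delegates on the fly.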