Config parameter 'coordinator_not_ready_retry_timeout_ms' #1209
base: master
Conversation
I appreciate the motivation here, but I think a better solution would be to pass a timeout parameter to …
Yes, I can add a timeout parameter to …
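The quoted exchange is truncated, but the alternative being discussed is to bound the coordinator wait with a timeout instead of retrying indefinitely. Below is a minimal, hypothetical sketch of that retry-with-deadline pattern; the names `coordinator_ready`, `timeout_ms`, and `retry_backoff_ms` are illustrative only and are not taken from kafka-python:

```python
# Illustrative retry-with-deadline pattern; not kafka-python code.
import time

def wait_until_ready(coordinator_ready, timeout_ms, retry_backoff_ms=100):
    """Retry coordinator_ready() until it succeeds or timeout_ms elapses."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        if coordinator_ready():
            return True   # coordinator became available in time
        if time.monotonic() >= deadline:
            return False  # give up and let the caller decide what to do
        time.sleep(retry_backoff_ms / 1000.0)
```

The config parameter proposed in this PR would effectively make such a deadline configurable on the consumer.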
Can this be merged now?
@dpkp I have checked the changes made for KAFKA-4426, but I think the problem I mentioned here does not fit that situation. KAFKA-4426 handles closing the KafkaConsumer in different scenarios, including an unavailable coordinator. In this case, the consumer is polling without knowing whether the coordinator is available, and the host code using the KafkaConsumer may decide to close it if it gets notified that the coordinator is unavailable. I faced the same problem with the Java client, but the Java client differs from the Python client in ConsumerCoordinator.poll: if the partitions are manually assigned, the Java client skips the coordinator readiness check.
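To make the scenario concrete, here is a minimal sketch of a consumer that manually assigns partitions but still sets a group_id, so poll() waits on the group coordinator. The broker address, topic, and group name are placeholders, and `coordinator_not_ready_retry_timeout_ms` is the parameter proposed by this PR, not part of released kafka-python:

```python
# Sketch only: assumes the branch from this PR; the config parameter below
# is the one proposed here and does not exist in released kafka-python.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",   # placeholder broker
    group_id="offset-tracking-group",     # group_id forces a coordinator lookup
    enable_auto_commit=False,
    coordinator_not_ready_retry_timeout_ms=5000,  # proposed: stop retrying after 5s
)

# Partitions are assigned manually, so no rebalance is required; the Java
# client skips the coordinator readiness wait in this case, which is the
# difference noted in the comment above.
consumer.assign([TopicPartition("events", 0)])

try:
    records = consumer.poll(timeout_ms=1000)
    for tp, messages in records.items():
        for msg in messages:
            print(tp, msg.offset, msg.value)
finally:
    consumer.close()
```

Under this proposal, poll() would stop retrying the coordinator lookup after the configured timeout, letting the host code decide whether to keep polling or close the consumer.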
This branch has conflicts that must be resolved |
Force-pushed from 9c8c8af to 8ebb14c
Rebased all former changes and resolved conflicts