Replies: 1 comment
-
I eventually found that the deadlock was caused by my code not calling
-
I'm having an issue where `rd_kafka_destroy`, called to destroy a consumer, hangs indefinitely. When I attach gdb to the process, I find 3 threads, with the stack traces shown at the end of this post. I couldn't reproduce this problem outside of my code base (when extracting the rdkafka calls my application makes into a single, simpler test file, I didn't get the issue), so I suspect the issue is somewhere in my code, but I'd like to understand what happens when destroying a consumer and what could cause such a deadlock. From the stack traces, I'm assuming the main thread is trying to join the `rdk:main` thread, which is itself trying to join the `rdk:broker1` thread, which is blocked inside `rd_kafka_q_pop_serve`.

For reference, my code essentially does the following (producing and consuming to/from a new topic with a single partition):

- A producer `rd_kafka_t` is created and 100 events are produced using a series of `rd_kafka_produce_batch` calls; `rd_kafka_poll` is called periodically; `rd_kafka_flush` is called after each call to `rd_kafka_produce_batch`, and delivery callbacks are used to ensure all the messages in the batch have been produced.
- A consumer `rd_kafka_t` is created. It has `enable.auto.commit` set to `false` and `auto.offset.reset` set to `earliest`. `rd_kafka_subscribe` is used to subscribe it to the topic/partition.
- `rd_kafka_consumer_poll` is used to poll for messages repeatedly. I can see the 100 messages arriving correctly, and `rd_kafka_message_destroy` is called on each message after it is processed.
- `rd_kafka_unsubscribe` is called on the consumer.
- `rd_kafka_destroy` is called to destroy the consumer; this is where it blocks.

GDB stack traces: