
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms. #46

magicdogs opened this issue Mar 17, 2017 · 18 comments

Comments

@magicdogs

Hi, when I configured the logback.xml file and changed the root level to "debug", the application logged a lot of "org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms." messages and blocked my application. If I change the logback.xml root level to "info", it works fine. Why is this? Thanks a lot.

[image: exception log information]

[image: lib information]

@shades198

I am also having the same issue, except that even after switching the log level to info, messages still don't reach the Kafka broker.

@wuming333666

I am also having the same issue.

@zjingchuan

I am also having the same issue...

@iDube

iDube commented Nov 1, 2017

I am also having the same issue...

@xiaods

xiaods commented Nov 6, 2017

Change the hostname to 0.0.0.0.
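
Presumably this refers to the broker's listener binding in server.properties; a sketch, where the host name and port are example values:

```properties
# Bind on all interfaces, but advertise a host that clients can actually reach
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://my-broker-host:9092
```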

@feilongyang

I am also having the same issue.

@shikonglaike

When I make the following change, I still have the same issue:

[image]

@danielwegener Could you tell me why?

@danielwegener
Owner

Because Kafka tries to recursively log to itself, which may lead it into a deadlock (fortunately this is eventually resolved by the metadata timeout, but it still breaks your client).
The ensureDeferredAppends mechanism queues all recursive log entries and delays the actual sending until a non-Kafka message is attempted to be logged, which "frees" them. However, as soon as you set ALL loggers to debug, the Kafka internals also try to log debug information, and those are not all captured by startsWith(KAFKA_LOGGER_PREFIX). These debug loggers are internal, so we cannot safely assume to catch all of them while still supporting multiple versions of the kafka-client library.

So the solution for now: do not enable global debug logging (rather do it selectively per package).
The only really safe solution would be to shade the kafka-client with its transitive dependencies and replace its usage of slf4j with an implementation that either never logs to Kafka itself or tags all of its messages as messages that always get queued. But I am not really happy with that solution either (possible licensing issues, possibly an appender release for each kafka-client release).
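
For readers hitting this for the first time, here is a minimal sketch of the deferral logic described above (simplified, with hypothetical names; not the actual appender source):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustration only: events from the kafka-client's own loggers are parked
// instead of sent, and flushed once a non-Kafka event arrives. A global DEBUG
// root level floods this path with internal Kafka loggers that a prefix check
// cannot reliably enumerate.
class DeferralSketch {
    private static final String KAFKA_LOGGER_PREFIX = "org.apache.kafka.clients";
    private final ConcurrentLinkedQueue<String> deferred = new ConcurrentLinkedQueue<>();

    void append(String loggerName, String message) {
        if (loggerName.startsWith(KAFKA_LOGGER_PREFIX)) {
            // Sending this through the producer could recurse, so defer it.
            deferred.add(message);
            return;
        }
        // A normal application event "frees" the queued entries.
        String queued;
        while ((queued = deferred.poll()) != null) {
            sendToKafka(queued);
        }
        sendToKafka(message);
    }

    private void sendToKafka(String message) {
        // producer.send(...) in the real appender
    }
}
```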

@shikonglaike

@danielwegener I see, and I appreciate your reply.

@YouXiang-Wang

@danielwegener So you need to update your configuration example from "debug" to "INFO".

@zhaojingyang

I have the same problem. When I make the following change, it works: send a message after super.start() in the KafkaAppender.start() method. I don't know why.

```java
@Override
public void start() {
    // only error free appenders should be activated
    if (!checkPrerequisites()) return;

    if (partition != null && partition < 0) {
        partition = null;
    }

    lazyProducer = new LazyProducer();

    super.start();

    // workaround: send a dummy record immediately after start()
    final byte[] payload = "sssd".getBytes();
    final byte[] key = "sdsss".getBytes();
    final Long timestamp = System.currentTimeMillis();
    final ProducerRecord<byte[], byte[]> record =
            new ProducerRecord<>(topic, partition, timestamp, key, payload);
    lazyProducer.get().send(record);
    lazyProducer.get().flush();
}
```

@OneYearOldChen

@magicdogs How did you solve this problem?

@magicdogs
Author

@OneYearOldChen Update the logback.xml file and set the root level to info...

@lichenglin

Add this to your logback-spring.xml:
<logger name="org.apache.kafka" level="info"/>
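
To make that concrete, here is a minimal logback-spring.xml sketch (element names follow this project's 0.2.x README; the topic, pattern, and bootstrap server are placeholder values) that keeps a DEBUG root while pinning the kafka-client loggers to INFO:

```xml
<configuration>
  <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <topic>logs</topic>
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
  </appender>

  <!-- Pin Kafka's own loggers so the appender never logs Kafka back into Kafka -->
  <logger name="org.apache.kafka" level="INFO"/>

  <root level="DEBUG">
    <appender-ref ref="kafkaAppender"/>
  </root>
</configuration>
```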

@danielwegener
Owner

@Birdflying1005 good point :)

@danielwegener
Owner

Can you guys imagine a doc/FAQ entry or something that would have helped you avoid running into this issue? I'd be happy to add it to the documentation.

@madanctc

Can you add spring.kafka.producer.retries=5 and request.timeout.ms=600000?
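
In a Spring Boot application these would go in application.properties; a sketch using the values suggested above (spring.kafka.producer.properties.* passes arbitrary keys through to the Kafka producer):

```properties
spring.kafka.producer.retries=5
spring.kafka.producer.properties.request.timeout.ms=600000
```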

@omrryldrrm

Hi,
change this parameter: maxBlockTime = 2000 ms (the producer's max.block.ms, whose 60000 ms default is the timeout in the error message).
