Describe the bug
While performing some load testing on our fluent-bit -> data-prepper -> opensearch stack, we discovered that past a certain HTTP request size (i.e. once a fluent-bit instance is under high enough load), data-prepper begins to throw the following errors:
2024-01-30T02:56:29.331 [armeria-common-worker-epoll-3-3] WARN com.linecorp.armeria.server.DefaultUnhandledExceptionsReporter - Observed 1 exception(s) that didn't reach a LoggingService in the last 10000ms(10000000000ns). Please consider adding a LoggingService as the outermost decorator to get detailed error logs. One of the thrown exceptions:
com.linecorp.armeria.server.HttpStatusException: 413 Request Entity Too Large
at com.linecorp.armeria.server.HttpStatusException.of0(HttpStatusException.java:105) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.HttpStatusException.of(HttpStatusException.java:99) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.Http1RequestDecoder.channelRead(Http1RequestDecoder.java:327) ~[armeria-1.26.4.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1471) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1345) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1385) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.flush.FlushConsolidationHandler.channelRead(FlushConsolidationHandler.java:152) ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.linecorp.armeria.common.ContentTooLargeException: maxContentLength: 10485760, contentLength: 23500850, transferred: 10487808
at com.linecorp.armeria.common.ContentTooLargeExceptionBuilder.build(ContentTooLargeExceptionBuilder.java:93) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.Http1RequestDecoder.channelRead(Http1RequestDecoder.java:312) ~[armeria-1.26.4.jar:?]
... 38 more
2024-01-30T02:56:29.448 [armeria-common-worker-epoll-3-3] ERROR com.amazon.osis.HttpAuthorization - Unable to process the request: maxContentLength: 10485760, contentLength: 23500850, transferred: 10487808
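For context, the numbers reported in the exception line up with Armeria's default request size limit of 10 MiB:

```python
# The 413 comes from Armeria's default maximum request length, which is 10 MiB.
# The values in the ContentTooLargeException are consistent with that default:
max_content_length = 10 * 1024 * 1024      # 10485760 == maxContentLength in the log
content_length = 23500850                  # the body fluent-bit tried to send
print(max_content_length)                  # 10485760
print(round(content_length / max_content_length, 2))  # 2.24 -- the request was ~2.2x the limit
```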
Unfortunately, fluent-bit doesn't seem to have any way to limit the size of chunks sent to an output (see fluent/fluent-bit#1938). We tried to mitigate this with gzip compression, but that just produced different (though similar) errors:
2024-01-30T23:14:22.778 [armeria-common-worker-epoll-3-1] ERROR org.opensearch.dataprepper.HttpRequestExceptionHandler - Unexpected exception handling HTTP request
com.linecorp.armeria.common.ContentTooLargeException: maxContentLength: 10485760
at com.linecorp.armeria.common.ContentTooLargeExceptionBuilder.build(ContentTooLargeExceptionBuilder.java:93) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.encoding.AbstractStreamDecoder.decode(AbstractStreamDecoder.java:55) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.HttpDecodedRequest.filter(HttpDecodedRequest.java:55) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.HttpDecodedRequest.filter(HttpDecodedRequest.java:38) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.stream.FilteredStreamMessage.lambda$collect$0(FilteredStreamMessage.java:166) ~[armeria-1.26.4.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2272) ~[?:?]
at com.linecorp.armeria.common.stream.FilteredStreamMessage.collect(FilteredStreamMessage.java:142) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.stream.AggregationSupport.aggregate(AggregationSupport.java:126) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.FilteredHttpRequest.aggregate(FilteredHttpRequest.java:61) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.HttpRequest.aggregate(HttpRequest.java:565) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.common.HttpRequest.aggregate(HttpRequest.java:547) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve1(AnnotatedService.java:314) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve0(AnnotatedService.java:298) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve(AnnotatedService.java:268) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService.serve(AnnotatedService.java:79) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.DecodingService.serve(DecodingService.java:118) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.encoding.DecodingService.serve(DecodingService.java:49) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.annotation.AnnotatedService$ExceptionHandlingHttpService.serve(AnnotatedService.java:554) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:112) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:75) ~[armeria-1.26.4.jar:?]
at com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator.lambda$serveRequest$2(HttpAuthDecorator.java:125) ~[FizzyDrPepper-2.6.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2272) ~[?:?]
at com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator.serveRequest(HttpAuthDecorator.java:93) ~[FizzyDrPepper-2.6.jar:?]
at com.amazon.dataprepper.plugins.source.auth.HttpAuthDecorator.serve(HttpAuthDecorator.java:89) ~[FizzyDrPepper-2.6.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:112) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.internal.server.RouteDecoratingService.serve(RouteDecoratingService.java:75) ~[armeria-1.26.4.jar:?]
at com.linecorp.armeria.server.throttling.AbstractThrottlingService.lambda$serve$0(AbstractThrottlingService.java:63) ~[armeria-1.26.4.jar:?]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478) ~[?:?]
at com.linecorp.armeria.common.DefaultContextAwareRunnable.run(DefaultContextAwareRunnable.java:45) ~[armeria-1.26.4.jar:?]
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:413) ~[netty-transport-classes-epoll-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: io.netty.handler.codec.compression.DecompressionException: Decompression buffer has reached maximum size: 10485760
at io.netty.handler.codec.compression.ZlibDecoder.prepareDecompressBuffer(ZlibDecoder.java:80) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.compression.JdkZlibDecoder.decode(JdkZlibDecoder.java:265) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:344) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
at com.linecorp.armeria.common.encoding.AbstractStreamDecoder.decode(AbstractStreamDecoder.java:48) ~[armeria-1.26.4.jar:?]
... 41 more
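The gzip attempt used the fluent-bit http output plugin's `Compress` option; a sketch of the relevant output section (host, port, and URI are placeholders for our setup, not part of the original report):

```
[OUTPUT]
    Name      http
    Match     *
    Host      <data-prepper-host>
    Port      2021
    URI       /log/ingest
    Format    json
    Compress  gzip
    tls       On
```

As the second stack trace shows, the decompressed body is still subject to the same 10 MiB limit (`Decompression buffer has reached maximum size: 10485760`), so compression only delays the failure.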
To Reproduce
Steps to reproduce the behavior:
Generate large quantities of logs from one fluent-bit instance to data-prepper via HTTP.
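The same 413 can be reproduced without fluent-bit by posting an oversized body directly. This sketch assumes the data-prepper http source defaults (port 2021, path `/log/ingest`); adjust for your deployment:

```shell
# Build a JSON array payload a little over the 10 MiB default limit.
python3 - <<'EOF' > /tmp/big_body.json
import json
msg = "x" * 1024
print(json.dumps([{"log": msg} for _ in range(11 * 1024)]))
EOF
wc -c < /tmp/big_body.json   # comfortably above 10485760 bytes

# POST it to the data-prepper http source (hypothetical host; -k for self-signed TLS):
# curl -k -X POST https://localhost:2021/log/ingest \
#      -H 'Content-Type: application/json' --data-binary @/tmp/big_body.json
```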
Expected behavior
There should be a way to handle high volumes of logs coming from a single source.
Environment (please complete the following information):
Running on AWS OpenSearch Ingestion Service (OSIS) with persistent buffering (Kafka).
Additional context
Any suggestions or workarounds would be much appreciated. I acknowledge that sending such huge volumes of logs from a single source isn't ideal, but it is sometimes unavoidable in our environment.
@cameronattard, the current maximum is 10 MB, which is the default from Armeria. Data Prepper 2.7 will have a configuration that allows you to configure the maximum request size.
You will be able to set this to a higher value with something like the following:
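The maintainer's example was lost from this copy of the thread; a hypothetical reconstruction, based on my understanding that Data Prepper 2.7 adds a `max_request_length` setting to the http source (the option name and placement may differ; check the 2.7 release docs):

```yaml
# Hypothetical sketch -- verify against the Data Prepper 2.7 documentation.
log-pipeline:
  source:
    http:
      port: 2021
      max_request_length: 20mb   # raise the 10 MB (Armeria default) limit
```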