Out of memory Exception #45

Hi,
I use the log4stash appender in async mode. When the Elasticsearch server is unavailable for too long, I end up getting an OutOfMemoryException.
Is there an existing mechanism to limit the number of log entries that the appender keeps in memory?
If there is no such mechanism, I would propose adding a new parameter, BulkListLimit, to prevent the exception: ElasticSearchAppender would stop adding new entries when that limit is reached.
To keep the solution backward compatible, the parameter would be optional and would default to no limit.
What do you think?
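A minimal sketch of how such a BulkListLimit guard could behave (illustrative code only; this is not the actual log4stash source and not the code in PR #46, and the type and member names are hypothetical):

using System.Collections.Generic;

// Hypothetical illustration of the proposed BulkListLimit idea: events are
// buffered until the next flush, and once the buffer reaches the limit,
// new events are dropped instead of growing the buffer further.
public class BoundedBulk
{
    private readonly List<string> _events = new List<string>();

    // 0 = no limit (the proposed default), so existing configurations
    // keep their current behavior.
    public int BulkListLimit { get; set; } = 0;

    public bool TryAdd(string jsonEvent)
    {
        if (BulkListLimit > 0 && _events.Count >= BulkListLimit)
            return false; // at capacity: drop rather than risk OutOfMemoryException

        _events.Add(jsonEvent);
        return true;
    }

    // Hand the buffered events to the sender and start a fresh bulk.
    public IReadOnlyList<string> Drain()
    {
        var batch = new List<string>(_events);
        _events.Clear();
        return batch;
    }
}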
Comments
Hi @aleksandre, first, thanks for the issue and the pull request - #46 (Added BulkListLimit parameter). Unfortunately, it seems to me that this is not the main cause of the OutOfMemoryException, and I want to verify first whether it is the real problem. On the other hand, I suspect that your exception occurs because log4stash opens requests endlessly without closing them (which is also weird, but makes more sense in my opinion). Can you please check these things for me:
I recall several problems with this async mechanism, which is why I suspect this part. I hope to change it to something more stable (not using those …)
Hi @urielha, thanks for your fast response. As you asked, here is the stack trace:
The problem seems to happen in the StringBuilder, even when my system is not using 100% of its RAM. I used the following parameters to reproduce the problem:
Emm.. what kind of objects do you usually log? Does it make sense that the object you are trying to log is really huge (in terms of the object represented as a JSON string)? If the answer is no, I suppose we are back to the bad async mechanism. Anyway, you can try turning off object serialization:
<appender name="ElasticSearchAppender" type="log4stash.ElasticSearchAppender, log4stash">
<SerializeObjects>false</SerializeObjects>
<!-- The rest of your configuration.... -->
</appender>
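(If I understand the flag correctly, SerializeObjects=false makes the appender log the plain string representation of message objects instead of serializing them to JSON, so a huge object graph can no longer inflate the internal StringBuilder.)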
Hi @aleksandre, I'm trying to run log4stash asynchronously as well. Did your PR changes solve the issue? Thanks and regards.
@pravinkarthy, have you encountered this problem too?
Hi. I have not encountered any problem as of now. I just wanted to know: if we do not include "False" in the appender configuration, will log4stash then operate in async mode? I am asking because the default constructor has the "IndexAsync" property set to "true".
@pravinkarthy you are right about the IndexAsync default. About the out of memory exception: I ran some tests and found that the main cause is the bulk size. @aleksandre - what is your configuration for BulkSize?
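For context, these settings live on the appender element in the log4net configuration. A minimal sketch, assuming the element names from the log4stash README (the values are illustrative, not anyone's actual settings):

<appender name="ElasticSearchAppender" type="log4stash.ElasticSearchAppender, log4stash">
  <Server>localhost</Server>
  <Port>9200</Port>
  <IndexName>log_test</IndexName>
  <!-- IndexAsync defaults to true, per the discussion above -->
  <IndexAsync>True</IndexAsync>
  <!-- flush a bulk after this many events... -->
  <BulkSize>1000</BulkSize>
  <!-- ...or after this many idle milliseconds -->
  <BulkIdleTimeout>5000</BulkIdleTimeout>
  <!-- The rest of your configuration.... -->
</appender>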
Hi @urielha and @pravinkarthy, I only had the OutOfMemory exception while doing performance testing on the appender. With a bulk size of 1000, I had to be very aggressive with the tests to trigger the error. I have been using version 2.1.0 in production for about a month and it has been very stable so far. I have a few applications logging over 1 million requests per day. In my case, the requests are evenly distributed during the day and the volume is low, so memory is not a problem. If your workload is bigger and happens in spikes, you may need to lower the bulk size and reduce the bulk idle timeout, as urielha said. Otherwise, you would need a new parameter to put a hard limit on the appender cache size. Here is my config:
Yes, the PR solved the issue, but we are not using it because our real-life workload does not trigger the error.
Thanks a lot for the info, @urielha and @aleksandre!
FYI, if you choose to set this to true, choose the BulkSize wisely. For example, …
I agree that in the current implementation the value of BulkSize is critical. Since dropping isn't necessarily bound to the BulkSize, wouldn't it make sense to use a separate "DropLimit"? E.g. I want to set BulkSize to 100, but if there are more than 20,000 pending events, I want to start dropping.
Mmm, I want to make sure I understand - I think it will be a little challenging to implement, but it makes sense.
I looked into the implementation. I agree, it's a bigger change. One option I see is to extend LogBulkSet to keep every "not yet fully sent" bulk and offer an accumulated count of all bulks that aren't fully sent. After each …
I'm not sure I fully understood your approach. One more thing - with a Set there is no order, so it could send only newer bulks and "starve" the old ones; maybe a list / queue is more suitable? (Actually, ConcurrentQueue seems like the most suitable one.) If I'm wrong, please elaborate. Finally, I would be happy if you could create a commit for that, thanks! :)
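To make the idea concrete, here is a minimal sketch of the queue-based variant discussed in this exchange (hypothetical code, not the actual LogBulkSet implementation; the type and member names are invented for illustration). A ConcurrentQueue keeps pending bulks in FIFO order so old bulks are sent before new ones, and an accumulated event count across all not-yet-sent bulks enforces a DropLimit that is independent of BulkSize:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Hypothetical sketch only: FIFO pending bulks plus a global DropLimit.
public class PendingBulks
{
    private readonly ConcurrentQueue<List<string>> _bulks = new ConcurrentQueue<List<string>>();
    private int _totalEvents; // accumulated count across all pending bulks

    // Independent of BulkSize, e.g. BulkSize = 100 with DropLimit = 20000.
    public int DropLimit { get; set; } = 20000;

    public bool TryEnqueue(List<string> bulk)
    {
        // Start dropping once the total backlog exceeds the limit, instead of
        // growing without bound while Elasticsearch is down. (The check-then-act
        // is approximate under heavy concurrency, which is fine for a soft cap.)
        if (Volatile.Read(ref _totalEvents) + bulk.Count > DropLimit)
            return false;

        _bulks.Enqueue(bulk);
        Interlocked.Add(ref _totalEvents, bulk.Count);
        return true;
    }

    // FIFO dequeue: the oldest bulk is retried first, so nothing starves.
    public bool TryDequeue(out List<string> bulk)
    {
        if (_bulks.TryDequeue(out bulk))
        {
            Interlocked.Add(ref _totalEvents, -bulk.Count);
            return true;
        }
        return false;
    }
}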