Possible memory leak #2976
Hi @janusn, in the Glances configuration file you have the following key:
So it is "normal" behavior for the memory usage to increase during the first hour; after that it should be stable.
I have been monitoring the memory consumption since your reply. There were a few setbacks, such as the container being recreated by watchtower. Nonetheless, here is what I have found: the memory usage on 2 of the 3 Glances instances keeps growing over a 24-hour period, but the other one does not.
On my configuration (running Glances locally, outside a Docker container) and with history_size=3:
When Glances starts: rss=64245760, vms=590016512, shared=17039360, text=3026944, lib=0, data=91455488, dirty=0, uss=51081216, pss=52712448, swap=0
One hour later: rss=65421312, vms=590155776, shared=17039360, text=3026944, lib=0, data=91594752, dirty=0, uss=51511296, pss=53140480, swap=0
So memory increased by around 1.4 MB (versus 31 MB in your test). I will run a long-term test next week.
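The figures above match the fields of psutil's memory_full_info() on Linux. As an illustration (not the exact code Glances runs), such a line can be produced for the current process with:

```python
import os

import psutil

# Full memory info of the current process; on Linux this includes
# uss/pss/swap in addition to the basic rss/vms figures.
proc = psutil.Process(os.getpid())
mem = proc.memory_full_info()
print(", ".join(f"{name}={value}" for name, value in mem._asdict().items()))
```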
Note for myself: https://pythonhosted.org/Pympler/muppy.html
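For context, the linked muppy tool summarizes all live Python objects; a generic sketch of its use (not taken from Glances) is:

```python
from pympler import muppy, summary

# Snapshot every object tracked by the interpreter and print a per-type
# summary; comparing two snapshots over time shows what is growing.
all_objects = muppy.get_objects()
heap_summary = summary.summarize(all_objects)
summary.print_(heap_summary)
```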
Ok found it thanks to mem_top:
The issue is confirmed and is in the processes.py file / class GlancesProcesses / dict self.processlist_cache. The size of this dict can become very large because the key is the PID. On Linux systems the default value of /proc/sys/kernel/pid_max, 32768, results in the same range of PIDs as on earlier kernels; on 32-bit platforms 32768 is the maximum value, while on 64-bit systems pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million). So we need a piece of code after the main loop of the update method to clean up key/value pairs whose PID no longer exists:
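A minimal sketch of such a cleanup, assuming the cache is a plain dict keyed by PID and using psutil.pids() to list live processes (prune_dead_pids is a hypothetical helper for illustration, not the actual patch):

```python
import psutil


def prune_dead_pids(processlist_cache):
    """Drop cache entries whose PID no longer exists, so the dict
    cannot grow without bound as the kernel recycles PIDs."""
    alive = set(psutil.pids())
    for pid in list(processlist_cache):
        if pid not in alive:
            del processlist_cache[pid]
```

Calling something like this once per refresh keeps the cache bounded by the number of live processes.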
Adding this kind of cleanup after the loop seems to solve the leak. Running Glances with history_size=0:
Also confirmed by mem_top:
I had to run some extra tests before pushing, but it looks ok.
Ok, pushed on the develop branch. After some tests, I will release version 4.2.1.
@janusn Glances 4.2.1 is available. You can upgrade your system.
Update: I checked again and realized that the image tagged 4.2.1-full is different from the image I was running, which was tagged latest-full. Let me run 4.2.1-full for a couple of days; I will report my results afterwards.
Original comment: Thanks for the quick fix. On one of the 3 containers, the memory usage is still growing, albeit much more slowly than before.
Describe the bug
The memory usage reported by docker stats keeps increasing on 3 separate instances running on 3 different machines.
To Reproduce
Steps to reproduce the behavior:
# docker compose up -d
Then check the memory figures logged to /var/log/docker_stats/glances.log
Expected behavior
I expect the memory logged in the glances.log to be stable over time.
Environment
docker compose
Additional context
Here is a sample of the content of the file glances.log after 2 days of running:
The content of compose.yaml:
Modification of glances.conf: