Swagger cache fills itself up without boundaries #141
If I modify the previous example to use the same client everywhere, I do not have any issue with the memory footprint being too high. See the following modifications:

```python
import asyncio

from etcd3 import AioClient


async def read_db(client):
    # Read the whole keyspace in a loop, reusing the shared client.
    while True:
        resp = await client.range("/")


async def all_run(concurrent=10):
    """Run many reads concurrently."""
    client = None
    try:
        client = AioClient()
        await asyncio.gather(
            *(read_db(client) for i in range(concurrent)),
            return_exceptions=False,
        )
    finally:
        if client:
            await client.close()


def main():
    loop = asyncio.get_event_loop()
    try:
        result = loop.run_until_complete(all_run())
    except asyncio.CancelledError:
        pass
    finally:
        loop.close()


main()
```

However, would it be best practice to use the same client everywhere?
See issue #370. Previously, the Krake API created one etcd client for each request received. Because of the etcd client used, a memory leak appeared: the client would put too many elements in its cache, see Revolution1/etcd3-py#141. To circumvent the issue, a single client is now used for the whole Krake API. As this client leverages a pool of connections, it could make sense to only use one of them. With this method, the API does not show any sign of a memory leak. Signed-off-by: Jean Chorin <jean.chorin@cloudandheat.com>
That might be the solution, but I'll have to dig into it to find the cause. Actually, the "swagger cache" was just a temporary solution. My goal is to auto-generate data model classes from the swagger spec, instead of generating model classes at runtime. That way, the cache won't be a problem.
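For context, generating model classes at runtime (rather than from pre-generated source files) is why a cache exists in the first place: building a class from the spec on every use would be wasteful, so results are memoized. A toy sketch of runtime class generation, with hypothetical names rather than the actual etcd3-py API:

```python
# Runtime model generation: each call builds a new class object from a
# spec fragment, so callers typically memoize the result per spec node.
def make_model(name, fields):
    """Build a model class from a (hypothetical) swagger spec fragment."""
    return type(name, (), {"fields": tuple(fields)})


# Without memoization, every call allocates a fresh class:
a = make_model("RangeRequest", ["key", "range_end"])
b = make_model("RangeRequest", ["key", "range_end"])
print(a is b)  # False: two distinct classes for the same spec node
```

Pre-generating the classes into a module at build time, as proposed, removes the need for a runtime cache entirely, so there is nothing left to grow.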
- etcd3-py version: 0.1.6
- Python version: 3.6.9
- Operating system: Ubuntu 18.04
Description

My setup:
I am using an API with `aiohttp`. For every request received, an `AioClient` is created by an aiohttp middleware. The client is closed after the request has been handled.

My issue:
If too many requests are sent to the API, the memory footprint of the API process increases continuously, until my machine breaks and resets.
What I Did

Here is a minimal example. It connects to an etcd database running locally, with ~20 elements present at the prefix `"/"`. This script, when running, uses more than 1 GB of memory after only 5 minutes.
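The mechanism behind this growth can be illustrated with a self-contained toy (the cache and client below are simplified stand-ins, not the etcd3-py API): each per-request client parks generated model data in a process-wide cache that `close()` never evicts, so memory grows with the number of requests ever handled.

```python
import asyncio

SWAGGER_CACHE = {}  # toy stand-in for the SwaggerSpec/SwaggerNode caches


class FakeClient:
    """Hypothetical stand-in for etcd3.AioClient: creating an instance
    parks generated model data in the shared cache, and close() does
    not remove it."""

    _seq = 0

    def __init__(self):
        FakeClient._seq += 1
        self._key = FakeClient._seq
        SWAGGER_CACHE[self._key] = ["generated model"] * 10

    async def range(self, prefix):
        return []

    async def close(self):
        pass  # the cache entry is deliberately NOT evicted here


async def read_db():
    # One fresh client per request, as the aiohttp middleware did.
    client = FakeClient()
    try:
        await client.range("/")
    finally:
        await client.close()


async def all_run(requests=100):
    await asyncio.gather(*(read_db() for _ in range(requests)))


loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(all_run(100))
finally:
    loop.close()
# SWAGGER_CACHE now holds one entry per request ever handled,
# even though every client was properly closed.
```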
Workaround

I narrowed down the issue to the caches of `SwaggerNode` and `SwaggerSpec`. By changing the function `read_db` in the above example like the following, my memory footprint is kept at 120 MB even after 20 minutes.