Working on a MongoDB Test Example but hitting CPU threshold too quickly #2874
Unanswered
guel-codes asked this question in Q&A
Replies: 1 comment · 1 reply
-
Hi! 250k requests/s is a lot for a single process to handle :) It is likely that Locust's warning is correct and you need to use multiple processes. Explore the documentation linked in the warning for how to do that.
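For reference (this is not part of the reply itself, and the file name is a placeholder): on a single machine, distributing the load typically means starting one master process with `locust -f locustfile.py --master` and one worker per core with `locust -f locustfile.py --worker`; newer Locust releases can also fork the workers for you via the `--processes` flag.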
-
The test is producing around 250k requests a second. I am using the gevent monkey.patch_all() so that pymongo can use greenlets, but I don't think that is the problem, because I also tested with plain pymongo and the default threads. Here is some additional context on that: https://pymongo.readthedocs.io/en/stable/examples/gevent.html

Locust throws this warning during the test:

CPU usage above 90%! This may constrain your throughput and may even give inconsistent response time measurements! See https://docs.locust.io/en/stable/running-distributed.html for how to distribute the load over multiple CPU cores or machines

That link is mainly about distributing the test across multiple compute instances, but that doesn't seem to be the problem here: my laptop doesn't see a spike in CPU usage at all. I should also mention that I am running the test against a local MongoDB container, and the container itself only spikes at about 10% CPU utilization.

Locust is unable to properly read the response times because of the sheer number of requests, so I'm not sure where to go from here. I'm wondering if this is just odd behavior I am seeing when testing locally. I have not tried spinning up an Atlas cluster to test against yet, but that is my next step. Any thoughts/ideas are welcome. Thanks 🙌🏽😊

The test got into the millions of requests after less than a minute of running 👀.
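For context, the locustfile looks roughly like the sketch below. This is a minimal illustration rather than the actual file: the MongoUser class, the localhost URI, the database/collection names, and the insert task are all assumptions, and timings are reported through Locust's request event so the operations show up in the statistics.

```python
# Minimal sketch of a Locust user that drives MongoDB through pymongo.
# All names here (URI, database, collection, task) are illustrative.
from gevent import monkey

monkey.patch_all()  # patch before importing pymongo so it cooperates with greenlets

import time

from locust import User, between, task
from pymongo import MongoClient


class MongoUser(User):
    wait_time = between(0.001, 0.01)  # a zero wait time will max out the CPU very quickly

    def on_start(self):
        self.client = MongoClient("mongodb://localhost:27017")
        self.collection = self.client["loadtest"]["docs"]

    @task
    def insert_doc(self):
        start = time.perf_counter()
        exc = None
        try:
            self.collection.insert_one({"ts": time.time(), "payload": "x" * 64})
        except Exception as e:
            exc = e
        # Report the call to Locust so it is counted in the request statistics
        self.environment.events.request.fire(
            request_type="mongo",
            name="insert_one",
            response_time=(time.perf_counter() - start) * 1000,
            response_length=0,
            exception=exc,
        )
```

With a custom client like this, each insert shows up as its own "mongo" entry in Locust's stats table.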