[BUG] Inability to write when reached maxmemory in FLASH mode #645
It sounds like you are hitting the replication output-buffer hard limit, which triggers a fast full sync (check your log to confirm). If so, raising the hard limit in your conf will help:
client-output-buffer-limit replica 2gb 2gb 60
In the SSD case, you can also adjust the following parameter to a value larger than 1 to better handle large write loads:
maxmemory-eviction-tenacity 35
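As a concrete sketch, the two settings above would look like this in keydb.conf (the values are the ones suggested in this comment; tune them for your workload):

```
# Raise the replica output-buffer hard limit so a busy replica link
# does not get disconnected, which would force a full sync.
client-output-buffer-limit replica 2gb 2gb 60

# Spend more effort per eviction cycle to keep up with heavy writes
# when backed by SSD/FLASH storage.
maxmemory-eviction-tenacity 35
```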
@paulmchen Thank you very much for your suggestion. After modifying the configuration as you suggested, I still see the same inability to write and low write performance. Here are two screenshots of the test results.
@paulmchen
Hi @jianjun126, are the writes being rejected, or hanging? It could be a problem similar to #646, but with the expireset taking up all the memory instead of the slots_to_keys map.
@jianjun126 Since you appear to have a single-master configuration (without a replica), it won't be a replication backlog issue or a fast full sync issue. I suggest running FlameGraph to determine where the bottleneck is. Follow these instructions to set up and run FlameGraph and identify your system's performance issue:
@paulmchen
According to the FlameGraph, more than 73% of CPU cycles are spent performing evictions, and EvictionPoolPopulate alone consumes more than 55% (see evict.cpp). My suspicion is that your volatile-ttl setting for maxmemory-policy may not work well with your benchmark command with ex=66000:
memtier_benchmark -s 127.0.0.1 -p 6333 -t 4 -c 20 -n 2000000 --distinct-client-seed --command="set key data ex 66000" --key-prefix="testkey_v3_" --key-minimum=100000000 --key-maximum=999000000 -R -d 800
You may try the following:
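To illustrate why volatile-ttl can degrade under this benchmark, here is a toy model of eviction-pool sampling (loosely after EvictionPoolPopulate in evict.cpp; the pool and sample sizes are made-up illustrative values, not KeyDB's):

```python
import random

random.seed(42)

# Toy model: the server samples a few volatile keys at a time and keeps
# those with the nearest expiry in a small pool of eviction candidates.
POOL_SIZE = 16
SAMPLES = 5

def populate_pool(keys_ttl, pool):
    """Sample a few keys and keep only the lowest-TTL ones in the pool."""
    pool.extend(random.sample(list(keys_ttl.items()), SAMPLES))
    pool.sort(key=lambda kv: kv[1])   # nearest expiry first
    del pool[POOL_SIZE:]

# Diverse TTLs: sampling quickly surfaces keys that are close to expiry.
diverse = {f"k{i}": random.randint(1, 66000) for i in range(10_000)}
# Benchmark pattern "set ... ex 66000": every key has nearly the same
# TTL, so ranking candidates by TTL carries almost no signal.
uniform = {f"k{i}": 66000 + random.randint(0, 5) for i in range(10_000)}

spreads = {}
for name, keys in (("diverse", diverse), ("uniform", uniform)):
    pool = []
    for _ in range(100):
        populate_pool(keys, pool)
    spreads[name] = pool[-1][1] - pool[0][1]
    print(f"{name}: TTL spread across eviction pool = {spreads[name]}")
```

With near-uniform TTLs the pool ordering is essentially arbitrary, so the evictor keeps resampling without converging on clearly better victims, which would be consistent with the CPU time observed in EvictionPoolPopulate.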
@paulmchen My test environment is a dual-socket server with 8269CY CPUs and six memory channels, and almost no other tasks run during the test, so it is almost certainly not a hardware resource issue. If necessary, I will retest as soon as possible with your suggested parameters and environment.
Platinum 8269CY: 26 cores / 52 threads, 2.5 GHz base frequency;
@paulmchen @msotheeswaran I wrote a script to test KeyDB's performance at a low write rate, and found that even then writes can still stall completely. The test method: within each 300-second window, write randomly at 5,000-8,000 ops/s for 50-70 seconds, and at 250 ops/s for the rest of the window; each write is 8,000 bytes. During the test I varied maxmemory; whether it is 1 GB, 4 GB, or 24 GB, the problem occurs, appearing roughly 1.5 or 3.5 hours into the test.
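The schedule described above can be sketched as follows (a reconstruction of the stated pattern, not the author's actual script; all names are invented):

```python
import random

random.seed(1)

# Reconstructed load pattern: in each 300 s cycle, one burst window of
# 50-70 s at 5,000-8,000 writes/s; the rest of the cycle runs at
# 250 writes/s. Every write carries 8,000 bytes of data.
CYCLE_SECONDS = 300
VALUE_SIZE = 8_000
BASE_RATE = 250

def cycle_plan():
    """Return the target writes/s for each second of one cycle."""
    burst_len = random.randint(50, 70)
    burst_rate = random.randint(5_000, 8_000)
    burst_start = random.randint(0, CYCLE_SECONDS - burst_len)
    return [
        burst_rate if burst_start <= s < burst_start + burst_len else BASE_RATE
        for s in range(CYCLE_SECONDS)
    ]

plan = cycle_plan()
total_bytes = sum(plan) * VALUE_SIZE
print(f"writes in one cycle: {sum(plan):,}")
print(f"data in one cycle:   {total_bytes / 2**30:.2f} GiB")
```

Even though the average rate over a cycle is modest, a single burst pushes a few GiB of new values through eviction, which fits the report that the stall reproduces at every maxmemory setting tried.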
That is a bug; the following commit addresses the 0 and low QPS issue. The 0 QPS is caused by eviction kicking in right after maxmemory is reached. However, the change may have a side effect after maxmemory is reached: memory usage may continue to grow. @JohnSully, John, could the following commits be added to main as well, and are there any side effects, for example memory growing without being evicted in time? @jianjun126 you can try this commit and see if it helps.
@paulmchen
@JohnSully @msotheeswaran John and Malavan, this seems to be a pretty serious problem; I am able to reproduce it as well. Note: with the fixes in #439 the zero-QPS problem is gone, but after reaching maximum memory, memory continues to grow. Could it be something related to the GC not taking effect?
I believe it is actually from the expireset: currently, when a key is evicted to storage, its expire entry stays in memory in the expireset. There is no mechanism to expire keys in the storage provider, so without keeping the entry in the expireset the key would stay in RocksDB unexpired until it is accessed again. I am working on a bigger change to add support for expiring from RocksDB; in the meantime you can try this commit: 6eb595d, though I have not tested it. Edit: there was a mistake, so you will also need this commit: 6a32023
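As a rough illustration of that effect, here is a toy memory model (the per-entry sizes are assumptions for illustration, not KeyDB internals):

```python
# Evicting a key to the storage provider frees its value, but without the
# linked fix its entry in the in-memory expire set is kept so the key can
# still time out. If every key is written volatile, that set grows with
# the number of distinct keys rather than with maxmemory.
VALUE_SIZE = 800          # bytes per value, as in the -d 800 benchmark
EXPIRE_ENTRY_SIZE = 64    # assumed per-entry overhead (illustrative)

def resident_memory(distinct_keys, in_ram_keys, keep_expire_on_evict):
    """Approximate resident bytes: in-RAM values plus expire entries."""
    values = in_ram_keys * VALUE_SIZE
    tracked = distinct_keys if keep_expire_on_evict else in_ram_keys
    return values + tracked * EXPIRE_ENTRY_SIZE

IN_RAM = 1_000_000  # capped by maxmemory; the rest lives in RocksDB
key_counts = (2_000_000, 10_000_000, 50_000_000)

before_fix = [resident_memory(n, IN_RAM, True) for n in key_counts]
after_fix = [resident_memory(n, IN_RAM, False) for n in key_counts]

print("expire set retained:   ", [f"{b / 2**30:.2f} GiB" for b in before_fix])
print("expire moved to storage:", [f"{a / 2**30:.2f} GiB" for a in after_fix])
```

Under this model, resident memory keeps growing with the number of distinct volatile keys as long as expire entries are retained, matching the observed growth past maxmemory.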
Hi @jianjun126 Not sure if the following (suggested by @JohnSully) can help your case; I tried it and it greatly reduced occurrences of 0 QPS, at least in my testing environment.
@msotheeswaran Hi Malavan, I tried the code modified in 6a32023 with memtier_benchmark; it also addresses the 0 QPS issue and avoids the memory growth, but the low-throughput issue remains. When the test first started running, the write rate could reach 40,000 ops/s, but after two hours it was only 1,000-2,000 ops/s.
@hengku Hi hengku, thanks very much for your attention and suggestions. I tried the parameters you suggested with KeyDB v6.3.3; however, during the test there were still 0 QPS periods, one lasting 235 seconds. Here are the config file and test command. My results seem quite different from yours. Could you help me check the config file, or share your testing process?
Oh, I am using the 6.2.1 version with some in-house code changes. I also observed the 0 QPS issue recently, and it seems fixed by setting those 2 parameters. Below are my conf and memtier command:
port 6379
memtier_benchmark -s 192.168.0.2 -p 6379 -t 10 -c 100 -n 10000000000 -d 256 --key-minimum=1 --key-maximum=10000000000 --ratio 1:0 --key-pattern=P:P
@jianjun126 I tried another approach using the same testing environment and memtier command above, and did not observe 0 or very low QPS, nor continuously growing used memory. Not sure if you still want to give it a try and see whether that works for your case.
@hengku Hi hengku, thanks again for sharing your setup and suggestions. I tried the parameters and command you suggested; the test results are summarized below.
@jianjun126 what was the memtier command you used for this? Eventually memory will be full, and every new key will require evicting existing keys to FLASH first, which results in much lower QPS.
@msotheeswaran Hi Malavan, my application scenario is exactly what you describe: memory is always full, new data is continuously written to KeyDB at a high rate, the existing data is continuously evicted, and the older data is deleted from disk after some time. Could you give some suggestions for this scenario?
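A back-of-envelope model of that steady state (the eviction bandwidth figure below is purely an assumption for illustration, not a measurement of KeyDB):

```python
# Once memory is full, every new write must first push roughly the same
# volume out to FLASH, so sustained write throughput is capped by how
# fast the evictor can move data to the storage provider.
VALUE_SIZE = 800                  # bytes per write, as in the -d 800 benchmark
EVICTION_BANDWIDTH = 50 * 2**20   # assumed 50 MiB/s to RocksDB (illustrative)

max_sustained_ops = EVICTION_BANDWIDTH / VALUE_SIZE
print(f"eviction-bound write ceiling: {max_sustained_ops:,.0f} ops/s")
```

Under such a model, write throughput collapsing from ~40,000 ops/s to a few thousand once memory fills is expected whenever eviction bandwidth, rather than the write path itself, becomes the limit.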
@jianjun126 How come you disabled mmap? That results in large buffers getting created and freed, which can exacerbate this problem.
@JohnSully we want to use direct I/O, so mmap cannot be enabled. |
@msotheeswaran-sc From the release notes, this issue seems to have been fixed, so I retested with the same configuration as before (maxmemory changed to 8 GB). The new version's performance has indeed improved, but similar issues remain. Changing the test data length from "-d 800" to "-d 8" also makes the 0 QPS problem disappear. If maxstorage is configured, there are a large number of OOMs.
Describe the bug
Hi,
I have tested on CentOS 7 with memtier_benchmark (v1.4.0). When memory reaches its limit, there can be periods of inability to write, or very low write performance, such as:
After the memory limit is reached, if keys expire or a query-performance test is started, write performance also degrades to a very low value, such as:
To reproduce
keydb command:
./keydb-server ./keydb.conf --storage-provider flash /data1/6333/ --storage-provider-options "use_direct_reads=true;allow_mmap_reads=false;use_direct_writes=true;allow_mmap_writes=false"
keydb config:
keydb.conf.zip
memtier_benchmark command:
taskset -c 26-29,78-84 memtier_benchmark -s 127.0.0.1 -p 6333 -t 4 -c 20 -n 2000000 --distinct-client-seed --command="set __key__ __data__ ex 66000" --key-prefix="testkey_v3_" --key-minimum=100000000 --key-maximum=999000000 -R -d 800
Expected behavior
When reaching maximum memory in FLASH mode, keydb can write normally and maintain good performance
Additional information
I have tried to modify some parameters of keydb, but the above phenomenon still exists.