Thoughts about throughput benchmarks fairness #14
Background: the focus of phuslu/lru development is web/HTTP use cases, so I chose 8/16 cores for the benchmark because that is the common case (e.g. container specifications) and a "comfort zone" for Go applications; see #13, golang/go#65064, panjf2000/gnet#558.
As @maypok86 pointed out, this is unfair to cache-friendly implementations. But it is still similar to our production environment (k8s workers), and the performance results seem stable, so I kept/respected them. We should append a disclaimer about this "unfairness".
I still have no good ideas; it seems hard to do this in GitHub Actions or on my ARM VPS.
The draft: Disclaimer: This was tested on busy GitHub runners with 8 CPUs, and the results may be very different from your real environment.
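As a hedged illustration of the setup being discussed (not the project's actual benchmark code), a throughput benchmark can be pinned to a fixed core count to emulate the 8-CPU runner / k8s worker case; the `cache` interface and key set below are placeholder assumptions:

```go
package bench

import (
	"runtime"
	"strconv"
	"testing"
)

// cache is a placeholder for the implementation under test
// (phuslu/lru, otter, etc.).
type cache interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// benchmarkGet pins GOMAXPROCS to emulate a container CPU limit,
// pre-populates the cache, then measures parallel Get throughput.
func benchmarkGet(b *testing.B, c cache, cores int) {
	prev := runtime.GOMAXPROCS(cores)
	defer runtime.GOMAXPROCS(prev)

	const n = 1 << 16
	for i := 0; i < n; i++ {
		k := strconv.Itoa(i)
		c.Set(k, k)
	}

	b.ResetTimer()
	// RunParallel spawns GOMAXPROCS goroutines by default, so the
	// parallelism follows the core limit set above.
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			c.Get(strconv.Itoa(i & (n - 1)))
			i++
		}
	})
}
```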
Regarding the comment in scalalang2/go-cache-benchmark#4 (comment):
I have a similar idea. In this scenario I guess phuslu/lru will significantly increase the hit ratio and slightly decrease the throughput.
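To make that kind of trade-off observable, a minimal sketch (with an assumed `cache` interface and a simplified uniform-random workload, not the benchmark suite's actual code) can report throughput and hit ratio from the same run:

```go
package bench

import (
	"fmt"
	"math/rand"
	"time"
)

type cache interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// measure runs a uniform-random workload and reports throughput and
// hit ratio together, so a shift between the two shows up in one place.
func measure(c cache, keys []string, ops int) {
	hits := 0
	start := time.Now()
	for i := 0; i < ops; i++ {
		k := keys[rand.Intn(len(keys))]
		if _, ok := c.Get(k); ok {
			hits++
		} else {
			c.Set(k, k) // miss: pretend to load the value
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("throughput: %.0f ops/s, hit ratio: %.2f%%\n",
		float64(ops)/elapsed.Seconds(),
		100*float64(hits)/float64(ops))
}
```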
Oh, I never thought I'd come back to this.

Throughput
Memory consumption
The idea is really not a bad one, although I suspect the results of such benchmarks can easily be misinterpreted; still, it's worth a try. Unfortunately, I have a feeling that the author of go-freelru does not understand the difference between the memory cost of metadata and the cost of storing keys and values. For some reason, he applies principles from dedicated cache servers (Redis, memcached), where there really is only one limitation: the total amount of available RAM. Redis users do not care how many entries are stored; only the total RAM consumption matters. BUT when you don't support cost-based eviction, trying to rely on total RAM consumption is a terrible mistake.
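To make the metadata-vs-payload distinction concrete, here is a rough sketch of how one might isolate per-entry metadata overhead. The `mapCache` type is a stand-in I made up for illustration; the cache under test (phuslu/lru, go-freelru, etc.) would be swapped in:

```go
package main

import (
	"fmt"
	"runtime"
)

// mapCache is a stand-in for the cache under test.
type mapCache map[string]string

func (m mapCache) Set(k, v string) { m[k] = v }

func heapInUse() uint64 {
	runtime.GC() // settle the heap before measuring
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapInuse
}

func main() {
	const entries = 1_000_000
	keys := make([]string, entries)
	for i := range keys {
		keys[i] = fmt.Sprintf("key-%07d", i) // values reuse these strings
	}

	before := heapInUse()
	c := make(mapCache, entries)
	for _, k := range keys {
		c.Set(k, k)
	}
	after := heapInUse()

	// The key/value bytes were already allocated in keys before the first
	// measurement, so the delta is roughly the cache's own metadata
	// (buckets, headers, pointers) rather than the stored payload.
	fmt.Printf("metadata overhead: %.1f B/entry\n",
		float64(after-before)/entries)
}
```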
What is an onheap cache?

A small digression about onheap caches, offheap caches, and dedicated cache servers. Offheap caches and dedicated cache servers can, in principle, be considered together here. They have quite a few specific characteristics.
For example, the solutions from the bigcache article would likely raise a large number of questions in the system-design section of an interview. It looks like they didn't need an offheap cache at all; they simply invented a problem for themselves, which they then heroically solved. The offheap cache in VictoriaMetrics looks like a good solution, but there it serves more as the main storage. Onheap caches are usually used in a slightly different way.
So I usually think of onheap caches as the L1 cache of the processor: small and super fast :).

Hit ratio and GC

I noticed that you added a GC pause check. That's cool! But unfortunately there are several pitfalls.
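For reference, one simple way such a check can be written (a minimal sketch, not the check actually added to the repo) is to diff `runtime.MemStats` around a workload. It also illustrates the pitfall: pause totals depend on the whole live heap and allocation rate of the surrounding program, not just the cache, so a synthetic check like this can easily mislead:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	// Placeholder workload: churn enough garbage to trigger GC cycles.
	m := make(map[int][]byte)
	for i := 0; i < 1_000_000; i++ {
		m[i%10_000] = make([]byte, 1024)
	}

	runtime.ReadMemStats(&after)
	fmt.Printf("GC cycles: %d, total pause: %v\n",
		after.NumGC-before.NumGC,
		time.Duration(after.PauseTotalNs-before.PauseTotalNs))
}
```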
In reality, the results will be worse, but for credibility let's assume a scenario that favors lru.
In total we have:

lost = rps * 2 * 60 * (hit_diff / 100) * latency = 50 * 2 * 60 * (5 / 100) * 5ms = 1.5s, i.e. 1500ms

P.S. I'm sorry for these tons of bullshit.
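The arithmetic above, spelled out as a tiny program; the inputs (50 rps, a 2-minute window, a 5% hit-ratio difference, 5 ms miss latency) come from the comment's assumed scenario, not from measured data:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		rps         = 50
		seconds     = 2 * 60 // two-minute window
		hitDiff     = 5.0    // percentage points of hit ratio
		missLatency = 5 * time.Millisecond
	)
	// Extra misses over the window, each costing one miss latency.
	lost := time.Duration(float64(rps*seconds) * (hitDiff / 100) * float64(missLatency))
	fmt.Println("lost:", lost) // lost: 1.5s
}
```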
The discussion in maypok86/otter#72 can serve as supplementary material for this.
@maypok86 sorry for the late reply; what you said above is totally correct and I agree with almost all of it. The referenced issue describes a less common problem that I happened to encounter before, and it also affected the evolution of phuslu/lru, so I jumped in, discussed it a lot, and quoted it here.
After reading scalalang2/go-cache-benchmark#4, especially @maypok86's comment, I think the points below should be clarified for fairness.