Replies: 4 comments 4 replies
-
Noticing a related open PR already: cncf-tags/green-reviews-tooling#13. In addition, we need to discuss how to generate traffic against such demo microservices. The k6 load testing framework was brought up in prior discussions.
-
Update
The microservice demo PR was merged today. We need to validate that the deployment is good to go. Then we'll open another PR to comment it out (#GitOps) so that Flux removes it and it isn't running while we gather metrics for idle Falco.
I wrote a +/- list for a GitHub Actions workflow with self-hosted runners (using ARC) in the design doc; I would really appreciate your feedback.
We're not quite there yet - we'll focus on this right after the metrics-gathering effort for idle Falco :) I'm conscious that the GCP microservice demo project doesn't have a client, so we'll either have to use a rudimentary script with a loop of curl commands or use k6. A colleague created a handy demo of the latter (using the k6 disruptor - not exactly what we will do, but close enough): https://github.com/grafana/xk6-disruptor-demo/tree/main/demos/online-boutique#the-test-script

Now that all the infrastructure is set up and the necessary applications have been deployed, our immediate priority is to surface the right metrics from the environment, make sure contributors (such as yourself) have the right access to the metrics, and visualise these metrics in a Grafana dashboard.
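Coming back to the load-generation options above: here is a minimal sketch of the rudimentary-script option, assuming the online-boutique frontend is exposed locally (e.g. via a port-forward). The base URL, request paths, and pacing below are placeholders for illustration, not a confirmed setup.

```python
import random
import time

import requests

# Hypothetical frontend address, e.g. after `kubectl port-forward svc/frontend 8080:80`.
# This is an assumption for illustration, not the benchmark cluster's real endpoint.
BASE_URL = "http://localhost:8080"

# Illustrative paths on the online-boutique frontend; adjust to the real routes.
PATHS = ["/", "/cart", "/product/OLJCESPC7Z"]


def generate_traffic(duration_s: int = 300, pause_s: float = 0.5) -> None:
    """Hit the frontend in a loop for `duration_s` seconds - the curl-loop equivalent."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        path = random.choice(PATHS)
        try:
            resp = requests.get(BASE_URL + path, timeout=5)
            print(f"{path} -> {resp.status_code}")
        except requests.RequestException as exc:
            print(f"{path} -> error: {exc}")
        time.sleep(pause_s)


if __name__ == "__main__":
    generate_traffic()
```

k6 would give us virtual users, ramp-up stages, and pass/fail thresholds out of the box, which is the main argument for preferring it over a plain loop like this.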
-
Update
@incertum here is where we are at the moment with creating a (manual) process for the benchmark tests: cncf-tags/green-reviews-tooling#50 - your feedback would be appreciated! The plan is to automate this after KubeCon with self-hosted runners (mentioned previously). Thankfully, ARC has support for repos in different organisations, so we should be able to hook up the runners to this repo and/or
-
Thank you Niki for digging more into it 🙏 I am responding in this dedicated discussion (comment 1: #16 (comment)). I definitely want to highlight that all of your feedback is spot on (thank you very much), and perhaps we should get back to it. We can easily swap the stress-ng mode or even remove it altogether. For example, I like your suggestion of using
Yes, confirmed we had mentioned this. I owe everyone a more detailed explanation of what prompted the rudimentary v1 synthetic workload ideas:

At the beginning, I reached out to both the Falco team and individuals outside of it to gather insights on what might be the best approach for creating a realistic synthetic workload testbed. I believe it's a challenging task, perhaps even worthy of a full PhD study in itself. It's fair to say that no one has all the perfect answers today, because real-life production systems are incredibly diverse and unique. Over 8 individuals rated the microservices as their top choice. Then, some of our team members listed stress-ng and redis as popular general benchmarking systems for stressing a server. That explains why we have what we have right now.

In short, I don't believe the current combination is optimal, but we can accept it as version 0.1.0-alpha until either I or others have time to research and experiment further.
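For concreteness, here is a minimal local sketch of what the v0.1.0-alpha combination exercises - not the repo's actual manifests. It assumes stress-ng, redis-benchmark, and a running redis-server are available on the host (in the cluster these run as Kubernetes workloads instead), and the worker counts and durations are placeholders.

```python
import subprocess

# Rough local approximation of the v0.1.0-alpha synthetic workload mix.
# Assumes `stress-ng` and `redis-benchmark` are installed and a redis-server
# is reachable on the default port - assumptions for illustration only.


def run_stress_ng(seconds: int = 60) -> None:
    """Stress two CPU workers for the given duration and print a brief metrics summary."""
    subprocess.run(
        ["stress-ng", "--cpu", "2", "--timeout", f"{seconds}s", "--metrics-brief"],
        check=True,
    )


def run_redis_benchmark(requests_total: int = 100_000) -> None:
    """Exercise redis with the standard benchmark client using 50 parallel connections."""
    subprocess.run(
        ["redis-benchmark", "-n", str(requests_total), "-c", "50", "-q"],
        check=True,
    )


if __name__ == "__main__":
    run_stress_ng()
    run_redis_benchmark()
```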
-
Related to #2
It is still unclear how to deploy the synthetic workloads at https://github.com/falcosecurity/cncf-green-review-testing/tree/main/kustomize/synthetic-workloads (hosted in this repo) so that enough replicas run on each node for each driver. The current setup is just to get us going and not what will work in the end.
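As a rough way to sanity-check the replica spread, here is a hedged sketch using the official Kubernetes Python client to count running pods per node; the namespace name is an assumption, not the repo's actual layout.

```python
from collections import Counter

from kubernetes import client, config

# Assumes a local kubeconfig with access to the benchmark cluster and that the
# synthetic workloads run in a namespace named "falco" - both are assumptions.
NAMESPACE = "falco"


def pods_per_node(namespace: str = NAMESPACE) -> Counter:
    """Count running pods per node to see whether replicas cover every node (and thus every driver)."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    counts: Counter = Counter()
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase == "Running" and pod.spec.node_name:
            counts[pod.spec.node_name] += 1
    return counts


if __name__ == "__main__":
    for node, count in sorted(pods_per_node().items()):
        print(f"{node}: {count} pod(s)")
```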
Plus, we still don't know how to make them truly more realistic.
Please note that deploying the Google microservices demo we all already agreed upon is best done via the CNCF TAG repo, as we can reference a ready-to-use setup: https://github.com/GoogleCloudPlatform/microservices-demo/tree/main/kustomize.
CC @nikimanoledaki