
Bad VMs leading to inaccurate results #14

Open
exterkamp opened this issue Nov 16, 2019 · 4 comments
Labels: enhancement (New feature or request)

Comments

@exterkamp
Collaborator

Running this a few times with only 1 Lighthouse run each on my personal site's GitHub repo, I got inconsistent results. This seemed to be due to the underlying VMs.

  • 100 score & 1026 benchmark index
  • 98 score & 347 benchmark index
  • 88 score & 106 benchmark index

Ideas:

  • If the benchmark index is less than 500, the run should be killed. These results won't be accurate and can't be asserted against.

  • We should run a pre-flight check with the benchmarker Lighthouse uses (it's some pretty simple JS to run, seen here: https://benchmark.exterkamp.codes) and check if the VM we got for our action was DOA. If it was, exit with code 1 and print a message about re-running the action because of a bad VM.
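A pre-flight check along these lines could be sketched in Node.js roughly as follows. This is a minimal sketch: the benchmark loop only approximates Lighthouse's BenchmarkIndex, and the 500 cutoff is the value proposed above, not an official threshold.

```javascript
// Hypothetical pre-flight sketch of the idea above. The loop below only
// approximates Lighthouse's BenchmarkIndex; the 500 cutoff is the
// threshold suggested in this issue, not an official value.

function computeBenchmarkIndex() {
  // Count how many passes of a simple string-building loop fit in 100ms,
  // then scale to a rough per-second index.
  const start = Date.now();
  let iterations = 0;
  while (Date.now() - start < 100) {
    let str = '';
    for (let i = 0; i < 10000; i++) str += 'a';
    iterations++;
  }
  return iterations * 10;
}

function preflightCheck(minIndex = 500) {
  const index = computeBenchmarkIndex();
  if (index < minIndex) {
    console.error(
      `Benchmark index ${index} is below ${minIndex}: this VM looks DOA. ` +
      'Re-run the action to get a healthier VM.'
    );
    // In the real action, this is where we'd call process.exit(1).
    return false;
  }
  console.log(`Benchmark index ${index} looks OK.`);
  return true;
}

preflightCheck();
```

Running this as the first step of the action would let slow VMs fail fast, before any Lighthouse runs produce misleading scores.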

@alekseykulikov alekseykulikov added the enhancement New feature or request label Nov 20, 2019
@alekseykulikov
Member

alekseykulikov commented Nov 20, 2019

Thank you @exterkamp for the great research!
I hope this will go away as the GitHub Actions platform matures. Otherwise, we would have to request a new VM to provide consistent performance-testing results. Any ideas on how to force a new VM without failing the workflow?

@paulirish
Contributor

Reading https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops it doesn't set many expectations around CPU load, or offer any options for it.

But it does say "(Linux only) Run steps in a cgroup that offers 6 GB of physical memory and 13 GB of total memory" so that's something.

cc @connorjclark

@paulirish
Contributor

Oh, and much like Azure Pipelines, GitHub Actions apparently also allows self-hosted runners: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-self-hosted-runners
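If consistency matters more than convenience, a workflow could pin the job to a self-hosted runner with known, stable hardware. A minimal sketch, assuming a runner has already been registered for the repository (the job name and URLs here are illustrative):

```yaml
jobs:
  lighthouse:
    # Runs on your own registered machine instead of a GitHub-hosted VM,
    # so CPU performance is known and stable between runs.
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Audit URLs using Lighthouse
        uses: treosh/lighthouse-ci-action@v2
        with:
          urls: |
            https://example.com/
```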

@alekseykulikov
Member

alekseykulikov commented Dec 11, 2019

As a solution, we may collect Lighthouse reports using PageSpeed Insights. It's more consistent (https://treo.sh/demo/6) and will allow running audits in parallel to speed up a build.

- name: Audit URLs using Lighthouse
  uses: treosh/lighthouse-ci-action@v2
  with:
    urls: |
      https://example.com/
      https://example.com/demo
      https://example.com/dashboard
    psiToken: page-speed-insights-token
    runs: 3 # use median for performance runs
