
Proposal: use PageSpeed Insights API to collect results #251

Closed
alekseykulikov opened this issue Mar 20, 2020 · 9 comments
Labels
enhancement New feature or request P1

Comments

@alekseykulikov

Problem: It's hard to ensure consistent performance results.
Burstable or otherwise uncontrollable CI VMs, like GitHub Actions (see treosh/lighthouse-ci-action#14), may lead to inaccurate results.
This significantly reduces the value of CI testing, since the results can't be trusted.

Proposal: using a dedicated Lighthouse-as-a-Service API, like PSI, may help provide consistent results and improve collection time:

yarn lhci collect --psiToken=secret-token --numberOfParallelRuns=5

Trade-off: it doesn't support custom features or configs, and local testing is not possible. So it's only suitable for public-facing URLs.
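For context, a collector along these lines would wrap the public PageSpeed Insights v5 endpoint. A minimal sketch of the request/response shape (the helper names here are hypothetical; only the endpoint URL and the `lighthouseResult` response field come from the PSI API itself):

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"


def build_psi_request(page_url: str, api_key: str, strategy: str = "mobile") -> str:
    """Build a PageSpeed Insights v5 request URL for a single page."""
    params = urlencode({"url": page_url, "key": api_key, "strategy": strategy})
    return f"{PSI_ENDPOINT}?{params}"


def performance_score(psi_response: dict) -> float:
    """Pull the Lighthouse performance score (0..1) out of a parsed PSI JSON response."""
    # PSI embeds the full Lighthouse report under the "lighthouseResult" key.
    return psi_response["lighthouseResult"]["categories"]["performance"]["score"]
```

Each call returns a full Lighthouse report, so a runner could fan out one request per URL and aggregate the scores; the caching behavior discussed below limits how much parallelism actually helps per URL.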

@patrickhulce
Collaborator

Thanks @alekseykulikov! A few reasons I'm hesitant to proceed with PSI as a runner:

  • I'm not sure this is going to be all that generally applicable. The vast majority of lhci users I've seen thus far do not deploy their code publicly over the internet.
  • PSI caches results that would force us to wait a full minute between requests, so we could not actually make them in parallel.
  • Unable to support major features of Lighthouse CI as you point out.
  • I'm not all that convinced the variance will be significantly better. It will absolutely be better than certain underpowered CI environments, but it's not exactly rock solid either, not in a way that would solve all the variance problems.

@patrickhulce patrickhulce added enhancement New feature or request P3 labels Mar 20, 2020
@alekseykulikov
Author

You are right, I totally forgot about the cache. In that case, parallelization will only help with a large number of URLs.

It will absolutely be better than certain underpowered CI environments, but it's not exactly rock solid either in a way that would solve all the variance problems.

It could become significantly better as the PSI environment improves.
Having the same Lighthouse environment for testing on the web, in the extension, and in CI is a big win.

@alekseykulikov
Author

@patrickhulce made a good argument for PSI+LHCI: GoogleChrome/lighthouse#10511 (comment)

@ggennrich

ggennrich commented Apr 9, 2020

@patrickhulce I'm a little bit confused by your statement. I could very well be misunderstanding:

I'm not sure this is going to be all that generally applicable. The vast majority of everyone I've seen using lhci thusfar does not deploy their code publicly over the internet.

Isn't the whole point of Lighthouse to ensure your web applications are performing well for users? We definitely use this project for our websites because we want to detect regressions with any new deployments. I guess I'm just surprised that we would be in the minority.

@patrickhulce
Collaborator

Isn't the whole point of Lighthouse to ensure your web applications are performing well for users?

Of course! But Lighthouse CI is specifically designed to catch Lighthouse issues before they're deployed to your users. From the conversations and issues I've seen to date, the larger use case is running lhci in a CI environment that does not deploy assets to production, but instead to an internal staging server or just localhost. Indeed, all the documentation we currently have here is focused on how to prevent deploying your code to be universally web accessible until it has passed these checks. In those scenarios, the pre-production version of the site would not be accessible to PSI, and thus the PSI API wouldn't be useful.

As @alekseykulikov points out though, if we shift the focus of Lighthouse CI messaging to also cover the production monitoring use case (it sounds like you're already using LHCI for this; see #5 for more on our thoughts there), then PSI suddenly makes a lot more sense, since it avoids setting up a separate testing environment.

@ggennrich

ggennrich commented Apr 27, 2020

@patrickhulce That makes sense. For us, we use a beta subdomain for our user-facing products that we merge into before we merge into production. This subdomain is entirely noindexed (via meta tags) to avoid duplicate-content issues. When we merge into production, we run the lhci checks on the beta subdomain. By having a public beta subdomain, we can also hook into other tools to validate things (structured data, mobile-friendly test, etc.) that otherwise don't play nice with login walls.

@patrickhulce
Collaborator

By having the public beta subdomain, we can also hook into other tools to validate things (structured data, mobile-friendly test, etc.) that otherwise don't play nice with login walls.

Nice! That sounds like a great setup :) I've been thinking it might be nice to collect some of these examples into a "Usage Patterns" doc showcasing how folks have used LHCI in different ways. Would you be interested in sharing some of the details in such a doc?

@ggennrich

@patrickhulce I'd be more than happy to share our details for that doc!

@patrickhulce
Collaborator

This was fixed by #340 🎉
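For readers landing here: the fix added a PSI-backed collection mode configured through the lighthouserc file. A sketch of what that configuration looks like (treat the exact key names as an assumption and check the current Lighthouse CI configuration docs):

```json
{
  "ci": {
    "collect": {
      "method": "psi",
      "psiApiKey": "YOUR_PSI_API_KEY",
      "url": ["https://example.com/"]
    }
  }
}
```

As discussed above, this mode only applies to publicly reachable URLs, so it pairs with the production-monitoring use case rather than pre-deploy staging checks.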
