Let's Encrypt currently has two kinds of rate limits: very smart "new orders per day" or "duplicate certificates per week" style limits, backed by the database (deprecated) or Redis (not yet ready); and very dumb "total requests per second" limits implemented by our load balancers in front of the WFEs.
The latter do not work well, and we'd like to replace them with more flexible code inside Boulder itself.
Design decisions to be made include but are not limited to:

- Boulder Go code, or a third-party system
- inside the WFE (and exactly where), or as a sidecar in front of the WFE
- limiting request arrival rate, or capping the number of concurrent in-flight requests
Upon discussion, the first approach here is going to be even simpler:

- give the WFE the ability to configure something like `{".*useragent.*": {"from": "14:00", "to": "14:20"}}`
- block all new-order requests whose User-Agent matches a configured regex during the corresponding time period
The config format is still up for discussion, but we're going to move forward with this simple useragent-and-timeslice blocking mechanism and see if it sufficiently resolves our current load-spike issues.
These decisions should be made with an eye towards optimizing for SRE deployability and quality of life.