if revision is inactive, scale to zero instead of waiting for last pod retention #15161
Conversation
…etention Signed-off-by: eddy-oum <eddy.oum@kakaocorp.com>
Welcome @eddy-oum! It looks like this is your first PR to knative/serving 🎉
Hi @eddy-oum. Thanks for your PR. I'm waiting for a knative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Note: with v1.14 coming out this week, the next release is in July, so there's no need to rush.
If I understand the discussion in #13812 correctly, I think this is fine. Dave has done the recent larger rework in that area and is now on PTO, so let's also wait for his opinion.
Does this change respect the scale-down delay, or are we already past that period?
edit: it seems the current code doesn't respect the scale-down delay anyway, thanks.
edit: never mind, the above approach doesn't work. I think another approach (instead of returning early in handleScaleZero) would be to return 0 in lastPodRetention:

    func lastPodRetention(pa *autoscalingv1alpha1.PodAutoscaler, cfg *autoscalerconfig.Config) time.Duration {
        if pa.Spec.Reachability == autoscalingv1alpha1.ReachabilityUnreachable {
            return 0
        }
        d, ok := pa.ScaleToZeroPodRetention()
        if ok {
            return d
        }
        return cfg.ScaleToZeroPodRetentionPeriod
    }

This approach lets the PA go through the other checks, especially the scale-to-zero grace period.

    }, {
        label:         "scale to zero, if revision is unreachable do not wait for last pod retention",
        startReplicas: 1,
        scaleTo:       0,
        wantReplicas:  0,
        wantScaling:   true,
        paMutation: func(k *autoscalingv1alpha1.PodAutoscaler) {
            paMarkInactive(k, time.Now().Add(-gracePeriod))
            WithReachabilityUnreachable(k)
        },
        configMutator: func(c *config.Config) {
            c.Autoscaler.ScaleToZeroPodRetentionPeriod = 10 * gracePeriod
        },
    }, {
        label:         "revision is unreachable, but before deadline",
        startReplicas: 1,
        scaleTo:       0,
        wantReplicas:  0,
        wantScaling:   false,
        paMutation: func(k *autoscalingv1alpha1.PodAutoscaler) {
            paMarkInactive(k, time.Now().Add(-gracePeriod+time.Second))
            WithReachabilityUnreachable(k)
        },
        configMutator: func(c *config.Config) {
            c.Autoscaler.ScaleToZeroPodRetentionPeriod = 10 * gracePeriod
        },
        wantCBCount: 1,

The current implementation will fail the second test, but returning 0 in lastPodRetention for unreachable revisions would pass both tests.
/retest
/hold cancel
Signed-off-by: eddy-oum <eddy.oum@kakaocorp.com>
/retest |
@eddy-oum: The following test failed, say /retest to rerun all failed tests.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: dprotaso, eddy-oum. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

    @@           Coverage Diff           @@
    ##             main   #15161   +/-   ##
    =======================================
      Coverage   84.76%   84.77%
    =======================================
      Files         218      218
      Lines       13478    13480    +2
    =======================================
    + Hits        11425    11428    +3
    + Misses       1686     1685    -1
      Partials      367      367

☔ View full report in Codecov by Sentry.
Fixes #13812
Proposed Changes
Release Note