
Commit 3e68d94

dwelsch-esinate-double-u authored and committed
- Added Reference section to table of contents (issue kedacore#1366)
- Moved some content from KEDA concept topics to Reference
- Added a glossary (issue kedacore#1367)
- Removed non-inclusive language (issue kedacore#1373)

Umbrella issue for CNCF tech docs recommendations: kedacore#1361

Signed-off-by: David Welsch <dwelsch@expertsupport.com>
1 parent 6fdd89b commit 3e68d94

18 files changed (+705, -555 lines)

content/docs/2.15/_index.md

+15-3
@@ -1,8 +1,20 @@
 +++
-title = "The KEDA Documentation"
+title = "Getting Started"
 weight = 1
 +++

-Welcome to the documentation for **KEDA**, the Kubernetes Event-driven Autoscaler. Use the navigation to the left to learn more about how to use KEDA and its components.
+Welcome to the documentation for **KEDA**, the Kubernetes Event-driven Autoscaler.

-Additions and contributions to these docs are managed on [the keda-docs GitHub repo](https://github.com/kedacore/keda-docs).
+Use the navigation bar on the left to learn more about KEDA's architecture and how to deploy and use KEDA.
+
+Where to go
+===========
+
+What is your involvement with KEDA?
+
+| Role | Documentation |
+| --- | --- |
+| User | This documentation is for users who want to deploy KEDA to scale Kubernetes. |
+| Core Contributor | To contribute to the core KEDA project see the [KEDA GitHub repo](https://github.com/kedacore/keda). |
+| Documentation Contributor | To add or contribute to these docs, or to build and serve the documentation locally, see the [keda-docs GitHub repo](https://github.com/kedacore/keda-docs). |
+| Other Contributor | See the [KEDA project on GitHub](https://github.com/kedacore/) for other KEDA repos, including project governance, testing, and external scalers. |

content/docs/2.15/authentication-providers/aws.md

+2-2
@@ -35,15 +35,15 @@ If you would like to use the same IAM credentials as your workload is currently

 ## AssumeRole or AssumeRoleWithWebIdentity?

-This authentication uses automatically both, doing a fallback from [AssumeRoleWithWebIdentity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) to [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) if the first one fails. This extends the capabilities because KEDA doesn't need `sts:AssumeRole` permission if you are already working with [WebIdentities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html), you just need to add KEDA service account to the trusted relations of the role.
+This authentication automatically uses both, falling back from [AssumeRoleWithWebIdentity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) to [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) if the first one fails. This extends the capabilities because KEDA doesn't need `sts:AssumeRole` permission if you are already working with [WebIdentities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html); in this case, you can add a KEDA service account to the trusted relations of the role.

 ## Setting up KEDA role and policy

 The [official AWS docs](https://aws.amazon.com/es/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) explain how to set up a a basic configuration for an IRSA role. The policy changes depend if you are using the KEDA role (`podIdentity.roleArn` is not set) or workload role (`podIdentity.roleArn` sets a RoleArn or `podIdentity.identityOwner` sets to `workload`).

 ### Using KEDA role to access infrastructure

-This is the easiest case and you just need to attach to KEDA's role the desired policy/policies, granting the access permissions that you want to provide. For example, this could be a policy to use with SQS:
+Attach the desired policies to KEDA's role, granting the access permissions that you want to provide. For example, this could be a policy to use with SQS:

 ```json
 {
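The SQS policy JSON in this hunk is cut off by the diff viewer. Purely for illustration (this is not the policy from the commit; the account ID, queue name, and action list are placeholders based on the SQS scaler needing to read queue attributes), a minimal policy of this shape could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:*:123456789012:my-queue"
    }
  ]
}
```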

content/docs/2.15/concepts/scaling-deployments.md

+24-263
Large diffs are not rendered by default.

content/docs/2.15/concepts/scaling-jobs.md

+15-230
@@ -3,8 +3,10 @@ title = "Scaling Jobs"
 weight = 300
 +++

+This page describes the job scaling behavior of KEDA. See the [Scaled Job specification](../reference/scaledjob-spec.md) for details on how to set the behaviors described below.

-## Overview
+
+# Overview

 As an alternate to [scaling event-driven code as deployments](../scaling-deployments) you can also run and scale your code as Kubernetes Jobs. The primary reason to consider this option is to handle processing long-running executions. Rather than processing multiple events within a deployment, for each detected event a single Kubernetes Job is scheduled. That job will initialize, pull a single event from the message source, and process to completion and terminate.

@@ -16,250 +18,33 @@ For example, if you wanted to use KEDA to run a job for each message that lands
 1. As additional messages arrive, additional jobs are created. Each job processes a single message to completion.
 1. Periodically remove completed/failed job by the `SuccessfulJobsHistoryLimit` and `FailedJobsHistoryLimit.`

-## ScaledJob spec
-
-This specification describes the `ScaledJob` custom resource definition which is used to define how KEDA should scale your application and what the triggers are.
-
-[`scaledjob_types.go`](https://github.com/kedacore/keda/blob/main/apis/keda/v1alpha1/scaledjob_types.go)
-
-```yaml
-apiVersion: keda.sh/v1alpha1
-kind: ScaledJob
-metadata:
-  name: {scaled-job-name}
-  labels:
-    my-label: {my-label-value}  # Optional. ScaledJob labels are applied to child Jobs
-  annotations:
-    autoscaling.keda.sh/paused: true  # Optional. Use to pause autoscaling of Jobs
-    my-annotation: {my-annotation-value}  # Optional. ScaledJob annotations are applied to child Jobs
-spec:
-  jobTargetRef:
-    parallelism: 1  # [max number of desired pods](https://kubernetes.io/docs/concepts/workloads/controllers/job/#controlling-parallelism)
-    completions: 1  # [desired number of successfully finished pods](https://kubernetes.io/docs/concepts/workloads/controllers/job/#controlling-parallelism)
-    activeDeadlineSeconds: 600  # Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be positive integer
-    backoffLimit: 6  # Specifies the number of retries before marking this job failed. Defaults to 6
-    template:
-      # describes the [job template](https://kubernetes.io/docs/concepts/workloads/controllers/job)
-  pollingInterval: 30  # Optional. Default: 30 seconds
-  successfulJobsHistoryLimit: 5  # Optional. Default: 100. How many completed jobs should be kept.
-  failedJobsHistoryLimit: 5  # Optional. Default: 100. How many failed jobs should be kept.
-  envSourceContainerName: {container-name}  # Optional. Default: .spec.JobTargetRef.template.spec.containers[0]
-  minReplicaCount: 10  # Optional. Default: 0
-  maxReplicaCount: 100  # Optional. Default: 100
-  rolloutStrategy: gradual  # Deprecated: Use rollout.strategy instead (see below).
-  rollout:
-    strategy: gradual  # Optional. Default: default. Which Rollout Strategy KEDA will use.
-    propagationPolicy: foreground  # Optional. Default: background. Kubernetes propagation policy for cleaning up existing jobs during rollout.
-  scalingStrategy:
-    strategy: "custom"  # Optional. Default: default. Which Scaling Strategy to use.
-    customScalingQueueLengthDeduction: 1  # Optional. A parameter to optimize custom ScalingStrategy.
-    customScalingRunningJobPercentage: "0.5"  # Optional. A parameter to optimize custom ScalingStrategy.
-    pendingPodConditions:  # Optional. A parameter to calculate pending job count per the specified pod conditions
-      - "Ready"
-      - "PodScheduled"
-      - "AnyOtherCustomPodCondition"
-    multipleScalersCalculation : "max"  # Optional. Default: max. Specifies how to calculate the target metrics when multiple scalers are defined.
-  triggers:
-  # {list of triggers to create jobs}
-```
-
-You can find all supported triggers [here](../scalers).
-
-## Details
-
-```yaml
-jobTargetRef:
-  parallelism: 1  # Optional. Max number of desired instances ([docs](https://kubernetes.io/docs/concepts/workloads/controllers/job/#controlling-parallelism))
-  completions: 1  # Optional. Desired number of successfully finished instances ([docs](https://kubernetes.io/docs/concepts/workloads/controllers/job/#controlling-parallelism))
-  activeDeadlineSeconds: 600  # Optional. Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be positive integer
-  backoffLimit: 6  # Optional. Specifies the number of retries before marking this job failed. Defaults to 6
-```
-
-The `jobTargetRef` is a batch/v1 `JobSpec` object; refer to the Kubernetes API for [more details](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/job-v1/#JobSpec) about the fields. The `template` field is required.
-
----
-
-```yaml
-pollingInterval: 30  # Optional. Default: 30 seconds
-```
-
-This is the interval to check each trigger on. By default, KEDA will check each trigger source on every ScaledJob every 30 seconds.
-
----
-
-```yaml
-successfulJobsHistoryLimit: 5  # Optional. Default: 100. How many completed jobs should be kept.
-failedJobsHistoryLimit: 5  # Optional. Default: 100. How many failed jobs should be kept.
-```
-
-The `successfulJobsHistoryLimit` and `failedJobsHistoryLimit` fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 100.
-
-This concept is similar to [Jobs History Limits](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits) allowing you to learn what the outcomes of your jobs are.
-
-The actual number of jobs could exceed the limit in a short time. However, it is going to resolve in the cleanup period. Currently, the cleanup period is the same as the Polling interval.
-
----
-
-
-```yaml
-envSourceContainerName: {container-name}  # Optional. Default: .spec.JobTargetRef.template.spec.containers[0]
-```
-
-This optional property specifies the name of container in the Job, from which KEDA should try to get environment properties holding secrets etc. If it is not defined it, KEDA will try to get environment properties from the first Container, ie. from `.spec.JobTargetRef.template.spec.containers[0]`.
-
-___
-```yaml
-minReplicaCount: 10  # Optional. Default: 0
-```
-
-The min number of jobs that is created by default. This can be useful to avoid bootstrapping time of new jobs. If minReplicaCount is greater than maxReplicaCount, minReplicaCount will be set to maxReplicaCount.
-
-New messages may create new jobs - within the limits imposed by maxReplicaCount - in order to reach the state where minReplicaCount jobs are always running. For example, if one sets minReplicaCount to 2 then there will be 2 jobs running permanently. Using a targetValue of 1, if 3 new messages are sent, 2 of those messages will be processed on the already running jobs but another 3 jobs will be created in order to fulfill the desired state dictated by the minReplicaCount parameter that is set to 2.
-___
-
----
-
-```yaml
-maxReplicaCount: 100  # Optional. Default: 100
-```
-
-The max number of pods that is created within a single polling period. If there are running jobs, the number of running jobs will be deducted. This table is an example of the scaling logic.
-
-| Queue Length | Max Replica Count | Target Average Value | Running Job Count | Number of the Scale |
-| ------- | ------ | ------- | ------ | ----- |
-| 10 | 3 | 1 | 0 | 3 |
-| 10 | 3 | 2 | 0 | 3 |
-| 10 | 3 | 1 | 1 | 2 |
-| 10 | 100 | 1 | 0 | 10 |
-| 4 | 3 | 5 | 0 | 1 |
-
-* **Queue Length:** The number of items in the queue.
-* **Target Average Value:** The number of messages that will be consumed on a job. It is defined on the scaler side. e.g. `queueLength` on `Azure Storage Queue` scaler.
-* **Running Job Count:** How many jobs are running.
-* **Number of the Scale:** The number of the job that is created.
-
----
-
-```yaml
-rollout:
-  strategy: gradual  # Optional. Default: default. Which Rollout Strategy KEDA will use.
-  propagationPolicy: foreground  # Optional. Default: background. Kubernetes propagation policy for cleaning up existing jobs during
-```
-
-The optional property rollout.strategy specifies the rollout strategy KEDA will use while updating an existing ScaledJob.
-Possible values are `default` or `gradual`. \
-When using the `default` rolloutStrategy, KEDA will terminate existing Jobs whenever a ScaledJob is being updated. Then, it will recreate those Jobs with the latest specs. The order in which this termination happens can be configured via the rollout.propagationPolicy property. By default, the kubernetes background propagation is used. To change this behavior specify set propagationPolicy to `foreground`. For further information see [Kubernetes Documentation](https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion).
-On the `gradual` rolloutStartegy, whenever a ScaledJob is being updated, KEDA will not delete existing Jobs. Only new Jobs will be created with the latest specs.
-
-
----
-
-```yaml
-scalingStrategy:
-  strategy: "default"  # Optional. Default: default. Which Scaling Strategy to use.
-```

-Select a Scaling Strategy. Possible values are `default`, `custom`, or `accurate`. The default value is `default`.
+# Pausing autoscaling

-> 💡 **NOTE:**
->
->`maxScale` is not the running Job count. It is measured as follows:
->```go
->maxScale = min(scaledJob.MaxReplicaCount(), divideWithCeil(queueLength, targetAverageValue))
->```
->That means it will use the value of `queueLength` divided by `targetAvarageValue` unless it is exceeding the `MaxReplicaCount`.
->
->`RunningJobCount` represents the number of jobs that are currently running or have not finished yet.
->
->It is measured as follows:
->```go
->if !e.isJobFinished(&job) {
->  runningJobs++
->}
->```
->`PendingJobCount` provides an indication of the amount of jobs that are in pending state. Pending jobs can be calculated in two ways:
-> - Default behavior - Job that have not finished yet **and** the underlying pod is either not running or has not been completed yet
-> - Setting `pendingPodConditions` - Job that has not finished yet **and** all specified pod conditions of the underlying pod mark as `true` by kubernetes.
->
->It is measured as follows:
->```go
->if !e.isJobFinished(&job) {
->  if len(scaledJob.Spec.ScalingStrategy.PendingPodConditions) > 0 {
->    if !e.areAllPendingPodConditionsFulfilled(&job, scaledJob.Spec.ScalingStrategy.PendingPodConditions) {
->      pendingJobs++
->    }
->  } else {
->    if !e.isAnyPodRunningOrCompleted(&job) {
->      pendingJobs++
->    }
->  }
->}
->```
+It can be useful to instruct KEDA to pause the autoscaling of objects, to do cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.

-**default**
-This logic is the same as Job for V1. The number of the scale will be calculated as follows.
+This is preferable to deleting the resource because it removes the instances it is running from operation without touching the applications themselves. When ready, you can then reenable scaling.

-_The number of the scale_
-
-```go
-maxScale - runningJobCount
-```
-
-**custom**
-You can customize the default scale logic. You need to configure the following parameters. If you don't configure it, then the strategy will be `default.`
+You can pause autoscaling by adding this annotation to your `ScaledJob` definition:

 ```yaml
-customScalingQueueLengthDeduction: 1  # Optional. A parameter to optimize custom ScalingStrategy.
-customScalingRunningJobPercentage: "0.5"  # Optional. A parameter to optimize custom ScalingStrategy.
-```
-
-_The number of the scale_
-
-```go
-min(maxScale-int64(*s.CustomScalingQueueLengthDeduction)-int64(float64(runningJobCount)*(*s.CustomScalingRunningJobPercentage)), maxReplicaCount)
-```
-
-**accurate**
-If the scaler returns `queueLength` (number of items in the queue) that does not include the number of locked messages, this strategy is recommended. `Azure Storage Queue` is one example. You can use this strategy if you delete a message once your app consumes it.
-
-```go
-if (maxScale + runningJobCount) > maxReplicaCount {
-  return maxReplicaCount - runningJobCount
-}
-return maxScale - pendingJobCount
-```
-For more details, you can refer to [this PR](https://github.com/kedacore/keda/pull/1227).
-
----
-
-```yaml
-scalingStrategy:
-  multipleScalersCalculation : "max"  # Optional. Default: max. Specifies how to calculate the target metrics (`queueLength` and `maxScale`) when multiple scalers are defined.
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused: true
 ```
-Select a behavior if you have multiple triggers. Possible values are `max`, `min`, `avg`, or `sum`. The default value is `max`.
-
-* **max:** - Use metrics from the scaler that has the max number of `queueLength`. (default)
-* **min:** - Use metrics from the scaler that has the min number of `queueLength`.
-* **avg:** - Sum up all the active scalers metrics and divide by the number of active scalers.
-* **sum:** - Sum up all the active scalers metrics.

-### Pause autoscaling
-
-It can be useful to instruct KEDA to pause the autoscaling of objects, if you want to do to cluster maintenance or you want to avoid resource starvation by removing non-mission-critical workloads.
-
-This is a great alternative to deleting the resource, because we do not want to touch the applications themselves but simply remove the instances it is running from an operational perspective. Once everything is good to go, we can enable it to scale again.
-
-You can enable this by adding the below annotation to your `ScaledJob` definition:
+To reenable autoscaling, remove the annotation from the `ScaledJob` definition or set the value to `false`.

 ```yaml
 metadata:
   annotations:
-    autoscaling.keda.sh/paused: true
+    autoscaling.keda.sh/paused: false
 ```

-The above annotation will pause autoscaling. To enable autoscaling again, simply remove the annotation from the `ScaledJob` definition or set the value to `false`.

-# Sample
+## Example
+
+An example configuration for autoscaling jobs using a RabbitMQ scaler is given below.

 ```yaml
 apiVersion: v1
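The scaling-strategy formulas removed from this page (they now live in the Reference section) are small enough to check against the scaling-logic table quoted in the removed text. A sketch in Python, assuming only what the removed snippets state; the function names are illustrative, not KEDA's API:

```python
import math

def max_scale(queue_length: int, target_average_value: int, max_replica_count: int) -> int:
    # maxScale = min(scaledJob.MaxReplicaCount(), divideWithCeil(queueLength, targetAverageValue))
    return min(max_replica_count, math.ceil(queue_length / target_average_value))

def default_scale(queue_length: int, target_average_value: int,
                  max_replica_count: int, running_job_count: int) -> int:
    # "default" strategy: number of jobs to create = maxScale - runningJobCount
    return max_scale(queue_length, target_average_value, max_replica_count) - running_job_count

def accurate_scale(queue_length: int, target_average_value: int, max_replica_count: int,
                   running_job_count: int, pending_job_count: int) -> int:
    # "accurate" strategy, per the quoted Go snippet
    ms = max_scale(queue_length, target_average_value, max_replica_count)
    if ms + running_job_count > max_replica_count:
        return max_replica_count - running_job_count
    return ms - pending_job_count

# Rows of the removed scaling-logic table:
# (queue length, max replicas, target average value, running jobs) -> number of the scale
table = [((10, 3, 1, 0), 3), ((10, 3, 2, 0), 3), ((10, 3, 1, 1), 2),
         ((10, 100, 1, 0), 10), ((4, 3, 5, 0), 1)]
for (q, m, t, r), expected in table:
    assert default_scale(q, t, m, r) == expected
```

Every row of the table agrees with the `default` formula, which is a useful sanity check when tuning `maxReplicaCount` against a scaler's target value.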

content/docs/2.15/deploy.md

+3-3
@@ -16,7 +16,7 @@ Don't see what you need? Feel free to [create an issue](https://github.com/kedac

 ### Install

-Deploying KEDA with Helm is very simple:
+To deploy KEDA with Helm:

 1. Add Helm repo

@@ -147,15 +147,15 @@ VERSION=2.15.0 make undeploy

 ### Install

-If you want to try KEDA v2 on [MicroK8s](https://microk8s.io/) from `1.20` channel, KEDA is included into MicroK8s addons.
+If you want to try KEDA v2 on [MicroK8s](https://microk8s.io/) from `1.20` channel, KEDA is included into MicroK8s add-ons.

 ```sh
 microk8s enable keda
 ```

 ### Uninstall

-To uninstall KEDA in MicroK8s, simply disable the addon as shown below.
+To uninstall KEDA in MicroK8s, disable the add-on as shown below.

 ```sh
 microk8s disable keda

content/docs/2.15/operate/_index.md

+2-2
@@ -1,10 +1,10 @@
 +++
 title = "Operate"
-description = "Guidance & requirements for operating KEDA"
+description = "Guidance and requirements for operating KEDA"
 weight = 1
 +++

-We provide guidance & requirements around various areas to operate KEDA:
+We provide guidance and requirements around various areas to operate KEDA:

 - Admission Webhooks ([link](./admission-webhooks))
 - Cluster ([link](./cluster))