feat(eks-v2-alpha): support eks with k8s 1.32 (aws#33344)
### Issue # (if applicable)

Closes #<issue number here>.

### Reason for this change

Amazon EKS now supports Kubernetes 1.32, but the `aws-eks-v2-alpha` module only offered versions up to `KubernetesVersion.V1_31`.

### Description of changes

- Add `KubernetesVersion.V1_32` to the `KubernetesVersion` class in `cluster.ts`.
- Update the README examples from `V1_31`/`KubectlV31Layer` to `V1_32`/`KubectlV32Layer`.
- Add `@aws-cdk/lambda-layer-kubectl-v32` to the package's `devDependencies` and jsii Rosetta `exampleDependencies`.

### Describe any new or updated permissions being added

None.


### Description of how you validated changes

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
import * as eks from '../lib';
import { Construct } from 'constructs';

export class EksClusterLatestVersion extends Stack {
  constructor(scope: Construct, id: string, props: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { natGateways: 1 });
    const mastersRole = new iam.Role(this, 'Role', {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    new eks.Cluster(this, 'hello-eks', {
      vpc,
      mastersRole,
      version: eks.KubernetesVersion.V1_32,
      kubectlProviderOptions: {
        kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
      },
    });
  }
}

const app = new App();

new EksClusterLatestVersion(app, 'v32-stack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});

app.synth();
```

Verify that the nodes and system pods are running Kubernetes 1.32:

```
% kubectl get no
NAME                          STATUS   ROLES    AGE   VERSION
ip-172-31-1-32.ec2.internal   Ready    <none>   10m   v1.32.0-eks-aeac579
ip-172-31-2-70.ec2.internal   Ready    <none>   10m   v1.32.0-eks-aeac579
% kubectl get po -n kube-system
NAME                       READY   STATUS    RESTARTS        AGE
aws-node-6lp5b             2/2     Running   2 (8m33s ago)   11m
aws-node-tckj8             2/2     Running   2 (8m47s ago)   11m
coredns-6b9575c64c-pntcb   1/1     Running   1 (8m33s ago)   15m
coredns-6b9575c64c-zsqw8   1/1     Running   1 (8m33s ago)   15m
kube-proxy-q7744           1/1     Running   1 (8m32s ago)   11m
kube-proxy-tfrmc           1/1     Running   1 (8m47s ago)   11m
```
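
The core of the diff in this commit is a new `KubernetesVersion.V1_32` member built with the class's private `of()` factory in `cluster.ts`. That pattern can be sketched standalone as a sanity check (a hypothetical simplification — the real class has many more members):

```typescript
// Hypothetical, simplified sketch of the KubernetesVersion pattern used in
// packages/@aws-cdk/aws-eks-v2-alpha/lib/cluster.ts; not the actual class.
class KubernetesVersion {
  /** Kubernetes version 1.31 */
  public static readonly V1_31 = KubernetesVersion.of('1.31');

  /** Kubernetes version 1.32 -- pairs with the KubectlV32Layer kubectl layer. */
  public static readonly V1_32 = KubernetesVersion.of('1.32');

  /** Custom cluster version */
  public static of(version: string): KubernetesVersion {
    return new KubernetesVersion(version);
  }

  private constructor(public readonly version: string) {}
}

console.log(KubernetesVersion.V1_32.version); // prints "1.32"
```

With this pattern, supporting a new Kubernetes minor version is a one-line static member plus a matching `lambda-layer-kubectl` package, which is what this PR adds.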

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
pahud authored Feb 10, 2025
1 parent 4e71675 commit 7175a04
Showing 121 changed files with 6,946 additions and 389 deletions.
68 changes: 34 additions & 34 deletions packages/@aws-cdk/aws-eks-v2-alpha/README.md
@@ -33,7 +33,7 @@ Here is the minimal example of defining an AWS EKS cluster

```ts
const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

@@ -73,15 +73,15 @@ Creating a new cluster is done using the `Cluster` constructs. The only required

```ts
new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

You can also use `FargateCluster` to provision a cluster that uses only fargate workers.

```ts
new eks.FargateCluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

@@ -90,20 +90,20 @@ be created by default. It will only be deployed when `kubectlProviderOptions`
property is used.**

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  }
});
```

### Managed node groups

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
-With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
+With Amazon EKS managed node groups, you don't need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.

> For more details visit [Amazon EKS Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).
@@ -115,7 +115,7 @@ At cluster instantiation time, you can customize the number of instances and the

```ts
new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  defaultCapacity: 5,
  defaultCapacityInstance: ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL),
});
@@ -127,7 +127,7 @@ Additional customizations are available post instantiation. To apply them, set t

```ts
const cluster = new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  defaultCapacity: 0,
});

@@ -177,7 +177,7 @@ The following code defines an Amazon EKS cluster with a default Fargate Profile

```ts
const cluster = new eks.FargateCluster(this, 'MyCluster', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

@@ -196,7 +196,7 @@ You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/

```ts
const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  endpointAccess: eks.EndpointAccess.PRIVATE, // No access outside of your VPC.
});
```
@@ -218,7 +218,7 @@ To deploy the controller on your EKS cluster, configure the `albController` prop

```ts
new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  albController: {
    version: eks.AlbControllerVersion.V2_8_2,
  },
@@ -259,7 +259,7 @@ You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properti
declare const vpc: ec2.Vpc;

new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  vpc,
  vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }],
});
@@ -302,12 +302,12 @@ To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cl
`kubectlLayer` is the only required property in `kubectlProviderOptions`.

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  }
});
```
@@ -317,7 +317,7 @@ new eks.Cluster(this, 'hello-eks', {
If you want to use an existing kubectl provider function, for example with tight trusted entities on your IAM Roles - you can import the existing provider and then use the imported provider when importing the cluster:

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

const handlerRole = iam.Role.fromRoleArn(this, 'HandlerRole', 'arn:aws:iam::123456789012:role/lambda-role');
// get the serviceToken from the custom resource provider
@@ -338,12 +338,12 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
    environment: {
      'http_proxy': 'http://proxy.myproxy.com',
    },
@@ -364,12 +364,12 @@ Depending on which version of kubernetes you're targeting, you will need to use
the `@aws-cdk/lambda-layer-kubectl-vXY` packages.

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  },
});
```
@@ -379,14 +379,14 @@ const cluster = new eks.Cluster(this, 'hello-eks', {
By default, the kubectl provider is configured with 1024MiB of memory. You can use the `memory` option to specify the memory size for the AWS Lambda function:

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

new eks.Cluster(this, 'MyCluster', {
  kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
    memory: Size.gibibytes(4),
  },
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

@@ -417,7 +417,7 @@ When you create a cluster, you can specify a `mastersRole`. The `Cluster` constr
```ts
declare const role: iam.Role;
new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  mastersRole: role,
});
```
@@ -438,7 +438,7 @@ You can use the `secretsEncryptionKey` to configure which key the cluster will u
const secretsKey = new kms.Key(this, 'SecretsKey');
const cluster = new eks.Cluster(this, 'MyCluster', {
  secretsEncryptionKey: secretsKey,
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

@@ -448,7 +448,7 @@ You can also use a similar configuration for running a cluster built using the F
const secretsKey = new kms.Key(this, 'SecretsKey');
const cluster = new eks.FargateCluster(this, 'MyFargateCluster', {
  secretsEncryptionKey: secretsKey,
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
});
```

@@ -489,7 +489,7 @@ eks.AccessPolicy.fromAccessPolicyName('AmazonEKSAdminPolicy', {
Use `grantAccess()` to grant the AccessPolicy to an IAM principal:

```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
declare const vpc: ec2.Vpc;

const clusterAdminRole = new iam.Role(this, 'ClusterAdminRole', {
@@ -503,9 +503,9 @@ const eksAdminRole = new iam.Role(this, 'EKSAdminRole', {
const cluster = new eks.Cluster(this, 'Cluster', {
  vpc,
  mastersRole: clusterAdminRole,
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
    memory: Size.gibibytes(4),
  },
});
@@ -690,7 +690,7 @@ when a cluster is defined:

```ts
new eks.Cluster(this, 'MyCluster', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  prune: false,
});
```
@@ -1003,7 +1003,7 @@ property. For example:
```ts
const cluster = new eks.Cluster(this, 'Cluster', {
  // ...
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
  clusterLogging: [
    eks.ClusterLoggingTypes.API,
    eks.ClusterLoggingTypes.AUTHENTICATOR,
9 changes: 9 additions & 0 deletions packages/@aws-cdk/aws-eks-v2-alpha/lib/cluster.ts
@@ -630,6 +630,15 @@ export class KubernetesVersion {
   */
  public static readonly V1_31 = KubernetesVersion.of('1.31');

+  /**
+   * Kubernetes version 1.32
+   *
+   * When creating a `Cluster` with this version, you need to also specify the
+   * `kubectlLayer` property with a `KubectlV32Layer` from
+   * `@aws-cdk/lambda-layer-kubectl-v32`.
+   */
+  public static readonly V1_32 = KubernetesVersion.of('1.32');
+
  /**
   * Custom cluster version
   * @param version custom version number
2 changes: 2 additions & 0 deletions packages/@aws-cdk/aws-eks-v2-alpha/package.json
@@ -91,6 +91,7 @@
    "@aws-cdk/lambda-layer-kubectl-v29": "^2.1.0",
    "@aws-cdk/lambda-layer-kubectl-v30": "^2.0.1",
    "@aws-cdk/lambda-layer-kubectl-v31": "^2.0.0",
+    "@aws-cdk/lambda-layer-kubectl-v32": "^2.0.0",
    "@types/jest": "^29.5.1",
    "aws-sdk": "^2.1379.0",
    "aws-cdk-lib": "0.0.0",
@@ -134,6 +135,7 @@
  "jsiiRosetta": {
    "exampleDependencies": {
      "@aws-cdk/lambda-layer-kubectl-v31": "^2.0.0",
+      "@aws-cdk/lambda-layer-kubectl-v32": "^2.0.0",
      "cdk8s-plus-25": "^2.7.0"
    }
  }
