
Use EKS Addon to manage kube-proxy and coredns #1261

Closed
flostadler opened this issue Jul 17, 2024 · 2 comments · Fixed by #1357
Assignees
Labels
area/fargate kind/enhancement Improvements or new features resolution/fixed This issue was fixed size/S Estimated effort to complete (1-2 days).
Milestone

Comments


flostadler commented Jul 17, 2024

Hello!

  • Vote on this issue by adding a 👍 reaction
  • If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)

Issue details

Right now kube-proxy and coredns are self-managed. This has the drawback that you cannot keep those cluster components updated via Pulumi.

Additionally, this requires a complex process of shelling out to kubectl to patch the coredns deployment so that coredns can run on Fargate:

pulumi.all([result.id, selectors, kubeconfig]).apply(([_, sels, kconfig]) => {
    // Only patch CoreDNS if there is a selector in the FargateProfile which causes
    // `kube-system` pods to launch in Fargate.
    if (sels.findIndex((s) => s.namespace === "kube-system") !== -1) {
        // Only do the imperative patching during deployments, not previews.
        if (!pulumi.runtime.isDryRun()) {
            // Write the kubeconfig to a tmp file and use it to patch the `coredns`
            // deployment that AWS deployed already as part of cluster creation.
            const tmpKubeconfig = tmp.fileSync();
            fs.writeFileSync(tmpKubeconfig.fd, JSON.stringify(kconfig));
            // Determine if the CoreDNS deployment has a computeType annotation.
            const cmdGetAnnos = `kubectl get deployment coredns -n kube-system -o jsonpath='{.spec.template.metadata.annotations}'`;
            const getAnnosOutput = childProcess.execSync(cmdGetAnnos, {
                env: {
                    ...process.env,
                    KUBECONFIG: tmpKubeconfig.name,
                },
            });
            const getAnnosOutputStr = getAnnosOutput.toString();
            // See if getAnnosOutputStr contains the annotation we're looking for.
            if (!getAnnosOutputStr.includes("eks.amazonaws.com/compute-type")) {
                // No need to patch the deployment object since the annotation is not
                // present. However, we need to re-create the CoreDNS pods since the
                // existing pods were created before the FargateProfile was created,
                // and therefore will not have been scheduled by fargate-scheduler.
                // See: https://github.com/pulumi/pulumi-eks/issues/1030.
                const cmd = `kubectl rollout restart deployment coredns -n kube-system`;
                childProcess.execSync(cmd, {
                    env: {
                        ...process.env,
                        KUBECONFIG: tmpKubeconfig.name,
                    },
                });
                return;
            }
            const patch = [
                {
                    op: "remove",
                    path: "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type",
                },
            ];
            const cmd = `kubectl patch deployment coredns -n kube-system --type json -p='${JSON.stringify(
                patch,
            )}'`;
            childProcess.execSync(cmd, {
                env: {
                    ...process.env,
                    KUBECONFIG: tmpKubeconfig.name,
                },
            });
        }
    }
});

Instead of doing that, we can manage those components as EKS addons and configure them with the Pulumi resource aws.eks.Addon.

E.g. to enable coredns to run on Fargate, we should set:

const example = new aws.eks.Addon("example", {
    clusterName: "mycluster",
    addonName: "coredns",
    ...
    configurationValues: JSON.stringify({
        computeType: "Fargate",
    }),
});
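kube-proxy could be managed the same way. A sketch, assuming the standard aws.eks.Addon arguments; the "OVERWRITE" conflict-resolution setting is an illustrative assumption, not something prescribed by this issue:

```typescript
import * as aws from "@pulumi/aws";

// Sketch: manage kube-proxy as an EKS addon as well. The arguments mirror
// the coredns example above; "OVERWRITE" conflict handling is an assumption.
const kubeProxy = new aws.eks.Addon("kube-proxy", {
    clusterName: "mycluster",
    addonName: "kube-proxy",
    // Omitting addonVersion lets EKS pick the default version for the
    // cluster; pinning it explicitly lets Pulumi drive upgrades.
    resolveConflictsOnUpdate: "OVERWRITE",
});
```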

Affected area/feature

@flostadler flostadler added size/S Estimated effort to complete (1-2 days). kind/enhancement Improvements or new features area/fargate labels Jul 17, 2024
@flostadler flostadler changed the title Use EKS Addon to manage kube-proxy and coredns Use EKS Addon to manage aws-node (vpc-cni) and coredns Aug 9, 2024
@flostadler
Contributor Author

This will also solve #1294

@flostadler flostadler changed the title Use EKS Addon to manage aws-node (vpc-cni) and coredns Use EKS Addon to manage kube-proxy and coredns Aug 9, 2024
@flostadler flostadler assigned t0yv0 and unassigned flostadler Sep 9, 2024
@t0yv0 t0yv0 assigned corymhall and unassigned t0yv0 Sep 10, 2024
@mjeffryes mjeffryes added this to the 0.110 milestone Sep 12, 2024
corymhall added a commit that referenced this issue Sep 12, 2024
corymhall added a commit that referenced this issue Sep 17, 2024
corymhall added a commit that referenced this issue Sep 18, 2024

### Proposed changes


This PR switches the `coredns` and `kube-proxy` addons from self-managed
to managed. By default the latest compatible version will be used.

This also introduces two new top level arguments to `ClusterOptions` for
configuring these new addons.

- `corednsAddonOptions`
- `kubeProxyAddonOptions`

BREAKING CHANGE: creating an `eks.Cluster` will now also create the
`coredns` and `kube-proxy` addons. If you already manage these yourself,
you will need to disable their creation through the new arguments
`ClusterOptions.corednsAddonOptions.enabled = false` and
`ClusterOptions.kubeProxyAddonOptions.enabled = false`.
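Based on the new arguments described above, opting out would look roughly like this (a sketch; the option shapes are taken from this PR's description):

```typescript
import * as eks from "@pulumi/eks";

// Sketch: keep self-managing coredns and kube-proxy by disabling the
// new managed addons (argument names from this PR's description).
const cluster = new eks.Cluster("my-cluster", {
    corednsAddonOptions: { enabled: false },
    kubeProxyAddonOptions: { enabled: false },
});
```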

### Related issues (optional)

closes #1261, closes #1254
@corymhall corymhall added the resolution/fixed This issue was fixed label Sep 19, 2024
@corymhall
Contributor

merged into release branch

flostadler pushed a commit that referenced this issue Oct 17, 2024
flostadler pushed a commit that referenced this issue Oct 17, 2024