Add gp3 storageclass #2941

Closed · wants to merge 1 commit
18 changes: 18 additions & 0 deletions helm-charts/support/templates/storageclass/gp3.yaml
@@ -0,0 +1,18 @@
apiVersion: storage.k8s.io/v1
Contributor:

Suggested change:
- apiVersion: storage.k8s.io/v1
+ # By default, EKS clusters don't provide a StorageClass for gp3-based storage, which is
+ # reported to be more cost efficient than gp2-based storage. Because of that, we install a
+ # gp3 StorageClass that can be explicitly referenced in AWS clusters.
+ apiVersion: storage.k8s.io/v1

kind: StorageClass
metadata:
  name: gp3
# Use the AWS EBS CSI provisioner
Contributor:

Suggested change:
- # Use the AWS EBS CSI provisioner

provisioner: ebs.csi.aws.com
parameters:
  # gp3 is cheaper than gp2
  type: gp3
  fsType: ext4
# Do not delete the underlying EBS volume whenever this PV & PVC
# are deleted! This lets us recover in case of accidental deletion.
reclaimPolicy: Retain
# Create the EBS volume only when the pod consuming it tries to use it.
# This ensures the EBS volume is created in the correct *zone*, so we
# don't end up with issues where the volume is in a different zone (but
# same region) as our pods.
volumeBindingMode: WaitForFirstConsumer
Comment on lines +1 to +18
Contributor:

Provisioner value?

This is from a gp2 StorageClass resource in a recent AWS cluster (catalystproject-africa), and it looked the same in the older 2i2c-aws-us cluster.

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp2
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Note that provisioner is set to something other than what this PR proposes. Should we go with what's observed in our clusters?

Default storage class?

Reading up on StorageClass defaults, it seems that the storageclass.kubernetes.io/is-default-class annotation controls it, and having two StorageClasses with that annotation set to "true" is not good. It would be nice if we could make this the default storage class, but that seems out of the Helm chart's control.

I guess we can't make it the default without also having a pre-setup step for new clusters to make the gp2 StorageClass no longer the default storage class. So, let's not think about this now.
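
For reference, a hedged sketch (not part of this PR) of what that pre-setup step could look like if we ever revisit this. Since the default is controlled purely by the annotation, demoting the EKS-provided gp2 class would be a one-off manual step rather than anything the Helm chart can do for us:

# Not part of this PR: one-off pre-setup run manually against an existing cluster,
# e.g. kubectl annotate storageclass gp2 \
#        storageclass.kubernetes.io/is-default-class=false --overwrite
# Only then could a chart-managed class safely carry the opposite annotation:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    # Only one StorageClass should be marked "true" at a time; the chart can't
    # flip the annotation on the pre-existing gp2 resource it doesn't own.
    storageclass.kubernetes.io/is-default-class: "true"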

Installable in GKE/AKS?

Is it okay to create a StorageClass that references a provisioner that isn't available? Maybe, but it's also a bit confusing to create a resource that has no relevance there.

Can we make this install only in AWS/EKS-based clusters?

Member Author:

provisioner: kubernetes.io/aws-ebs is the in-tree provisioner, deprecated since Kubernetes 1.17 - see https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons/ and search for the word 'deprecated'. I'm annoyed that EKS still ships with that.

I agree that we should not make this the default; I don't want to fiddle with that.

Let me think about the optionality question. Apparently we have been shipping a GKE SSD StorageClass (with a deprecated provisioner, even!) to all our clusters.
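
As a side note on the provisioner question (a sketch, not something verified against our clusters): ebs.csi.aws.com is served by the EBS CSI driver add-on, which registers a CSIDriver object under that name. A StorageClass pointing at a provisioner that isn't installed still applies cleanly, but PVCs using it will sit in Pending because nothing answers to that name:

# Roughly what `kubectl get csidriver ebs.csi.aws.com -o yaml` should show on an
# EKS cluster with the aws-ebs-csi-driver add-on installed (fields trimmed; the
# exact spec may differ by driver version).
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: ebs.csi.aws.com
spec:
  attachRequired: true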

Member Author:

Yeah, making this opt-in rather than present by default is the right thing to do! I'll do that.

Contributor:

> Yeah, making this opt-in rather than present by default is the right thing to do! I'll do that.

I think making the choice to install a StorageClass resource explicit is great!
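
To make the "explicitly set" idea concrete, here is a rough sketch of how the template could be gated behind a values flag; the gp3StorageClass.enabled key is hypothetical and not something this PR defines (its default, presumably false, would live in the chart's values.yaml):

{{- if .Values.gp3StorageClass.enabled }}
# Rendered only for clusters that explicitly opt in via their support values,
# so GKE/AKS clusters never get an EBS-specific StorageClass they can't use.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: ext4
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
{{- end }}

An AWS cluster that opts in would set gp3StorageClass.enabled to true in its support values and reference the class explicitly from its PVCs via storageClassName: gp3; other clusters would simply never render the resource.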