---

copyright:
  years: 2014, 2018
lastupdated: "2018-11-13"

---

{:new_window: target="_blank"} {:shortdesc: .shortdesc} {:screen: .screen} {:pre: .pre} {:table: .aria-labeledby="caption"} {:codeblock: .codeblock} {:tip: .tip} {:note: .note} {:important: .important} {:deprecated: .deprecated} {:download: .download}

# Storing data on IBM File Storage for IBM Cloud

{: #file_storage}

## Deciding on the file storage configuration

{: #predefined_storageclass}

{{site.data.keyword.containerlong}} provides pre-defined storage classes for file storage that you can use to provision file storage with a specific configuration. {: shortdesc}

Every storage class specifies the type of file storage that you provision, including available size, IOPS, file system, and the retention policy.

Make sure to choose your storage configuration carefully to have enough capacity to store your data. After you provision a specific type of storage by using a storage class, you cannot change the size, type, IOPS, or retention policy for the storage device. If you need more storage or storage with a different configuration, you must create a new storage instance and copy the data from the old storage instance to your new one. {: important}

Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

To decide on a storage configuration:

  1. List available storage classes in {{site.data.keyword.containerlong}}.

    kubectl get storageclasses | grep file
    

    {: pre}

    Example output:

    $ kubectl get storageclasses
    NAME                         TYPE
    ibmc-file-bronze (default)   ibm.io/ibmc-file
    ibmc-file-custom             ibm.io/ibmc-file
    ibmc-file-gold               ibm.io/ibmc-file
    ibmc-file-retain-bronze      ibm.io/ibmc-file
    ibmc-file-retain-custom      ibm.io/ibmc-file
    ibmc-file-retain-gold        ibm.io/ibmc-file
    ibmc-file-retain-silver      ibm.io/ibmc-file
    ibmc-file-silver             ibm.io/ibmc-file
    

    {: screen}

  2. Review the configuration of a storage class.

    kubectl describe storageclass <storageclass_name>
    

    {: pre}

    For more information about each storage class, see the storage class reference. If you do not find what you are looking for, consider creating your own customized storage class. To get started, check out the customized storage class samples. {: tip}

  3. Choose the type of file storage that you want to provision.

    • Bronze, silver, and gold storage classes: These storage classes provision Endurance storage. Endurance storage lets you choose the size of the storage in gigabytes at predefined IOPS tiers.
    • Custom storage class: This storage class provisions Performance storage. With performance storage, you have more control over the size of the storage and the IOPS.
  4. Choose the size and IOPS for your file storage. The size and the IOPS determine the total number of IOPS (input/output operations per second), which indicates how fast your storage is. The more total IOPS your storage has, the faster it processes read and write operations.

    • Bronze, silver, and gold storage classes: These storage classes come with a fixed number of IOPS per gigabyte and are provisioned on SSD hard disks. The total number of IOPS depends on the size of the storage that you choose. You can select any whole number of gigabytes within the allowed size range, such as 20 Gi, 256 Gi, or 11854 Gi. To determine the total number of IOPS, multiply the IOPS per gigabyte by the selected size. For example, if you select a 1000Gi file storage size in the silver storage class, which comes with 4 IOPS per GB, your storage has a total of 4000 IOPS.

      Table of storage class size ranges and IOPS per gigabyte:

      | Storage class | IOPS per gigabyte | Size range in gigabytes |
      |---------------|-------------------|-------------------------|
      | Bronze | 2 IOPS/GB | 20-12000 Gi |
      | Silver | 4 IOPS/GB | 20-12000 Gi |
      | Gold | 10 IOPS/GB | 20-4000 Gi |

    • Custom storage class: When you choose this storage class, you have more control over the size and IOPS. For the size, you can select any whole number of gigabytes within the allowed size range. The size that you choose determines the IOPS range that is available to you. You can choose an IOPS value that is a multiple of 100 within the specified range. The IOPS that you choose is static and does not scale with the size of the storage. For example, if you choose 40Gi with 100 IOPS, your total IOPS remains 100.

      The IOPS to gigabyte ratio also determines the type of hard disk that is provisioned for you. For example, if you have 500Gi at 100 IOPS, your IOPS to gigabyte ratio is 0.2. Storage with a ratio of less than or equal to 0.3 is provisioned on SATA hard disks. If your ratio is greater than 0.3, your storage is provisioned on SSD hard disks.

      Table of custom storage class size ranges and IOPS:

      | Size range in gigabytes | IOPS range in multiples of 100 |
      |-------------------------|--------------------------------|
      | 20-39 Gi | 100-1000 IOPS |
      | 40-79 Gi | 100-2000 IOPS |
      | 80-99 Gi | 100-4000 IOPS |
      | 100-499 Gi | 100-6000 IOPS |
      | 500-999 Gi | 100-10000 IOPS |
      | 1000-1999 Gi | 100-20000 IOPS |
      | 2000-2999 Gi | 200-40000 IOPS |
      | 3000-3999 Gi | 200-48000 IOPS |
      | 4000-7999 Gi | 300-48000 IOPS |
      | 8000-9999 Gi | 500-48000 IOPS |
      | 10000-12000 Gi | 1000-48000 IOPS |
  5. Choose if you want to keep your data after the cluster or the persistent volume claim (PVC) is deleted.

    • If you want to keep your data, then choose a retain storage class. When you delete the PVC, only the PVC is deleted. The PV, the physical storage device in your IBM Cloud infrastructure (SoftLayer) account, and your data still exist. To reclaim the storage and use it in your cluster again, you must remove the PV and follow the steps for using existing file storage.
    • If you want the PV, the data, and your physical file storage device to be deleted when you delete the PVC, choose a storage class without retain. Note: If you have a Dedicated account, choose a storage class without retain to prevent orphaned volumes in IBM Cloud infrastructure (SoftLayer).
  6. Choose if you want to be billed hourly or monthly. Check the [pricing](https://www.ibm.com/cloud/file-storage/pricing) for more information. By default, all file storage devices are provisioned with an hourly billing type. If you choose a monthly billing type, when you remove the persistent storage, you still pay the monthly charge for it, even if you used it only for a short amount of time. {: note}


## Adding file storage to apps

{: #add_file}

Create a persistent volume claim (PVC) to dynamically provision file storage for your cluster. Dynamic provisioning automatically creates the matching persistent volume (PV) and orders the physical storage device in your IBM Cloud infrastructure (SoftLayer) account. {:shortdesc}

Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

Looking to deploy file storage in a stateful set? See Using file storage in a stateful set for more information. {: tip}

To add file storage:

  1. Create a configuration file to define your persistent volume claim (PVC) and save the configuration as a .yaml file.

    • Example for bronze, silver, and gold storage classes: The following .yaml file creates a claim that is named mypvc, uses the ibmc-file-silver storage class, is billed monthly, and has a size of 24Gi.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ibmc-file-silver"
        labels:
          billingType: "monthly"
          region: us-south
          zone: dal13
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 24Gi
      

      {: codeblock}

    • Example for the custom storage class: The following .yaml file creates a claim that is named mypvc, uses the ibmc-file-retain-custom storage class, is billed hourly, and has a size of 45Gi and 300 IOPS.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ibmc-file-retain-custom"
        labels:
          billingType: "hourly"
          region: us-south
          zone: dal13
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 45Gi
            iops: "300"
      

      {: codeblock}

      Understanding the YAML file components:

      • `metadata.name`: Enter the name of the PVC.
      • `metadata.annotations.volume.beta.kubernetes.io/storage-class`: The name of the storage class that you want to use to provision file storage. If you do not specify a storage class, the PV is created with the default storage class ibmc-file-bronze. Tip: If you want to change the default storage class, run `kubectl patch storageclass <storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'` and replace `<storageclass>` with the name of the storage class. An example of unsetting the previous default class is shown after this list.
      • `metadata.labels.billingType`: Specify the frequency for which your storage bill is calculated, "monthly" or "hourly". If you do not specify a billing type, the storage is provisioned with an hourly billing type.
      • `metadata.labels.region`: Optional: Specify the region where you want to provision your file storage. To connect to your storage, create the storage in the same region that your cluster is in. If you specify the region, you must also specify a zone. If you do not specify a region, or the specified region is not found, the storage is created in the same region as your cluster. Tip: Instead of specifying the region and zone in the PVC, you can also specify these values in a [customized storage class](#multizone_yaml). Then, use your storage class in the metadata.annotations.volume.beta.kubernetes.io/storage-class section of your PVC. If the region and zone are specified in both the storage class and the PVC, the values in the PVC take precedence.
      • `metadata.labels.zone`: Optional: Specify the zone where you want to provision your file storage. To use your storage in an app, create the storage in the same zone that your worker node is in. To view the zone of your worker node, run `ibmcloud ks workers --cluster <cluster_name_or_ID>` and review the Zone column of your CLI output. If you specify the zone, you must also specify a region. If you do not specify a zone, or the specified zone is not found in a multizone cluster, the zone is selected on a round-robin basis.
      • `spec.accessModes`: Specify one of the following options:
        • ReadWriteMany: The PVC can be mounted by multiple pods. All pods can read from and write to the volume.
        • ReadOnlyMany: The PVC can be mounted by multiple pods. All pods have read-only access.
        • ReadWriteOnce: The PVC can be mounted by one pod only. This pod can read from and write to the volume.
      • `spec.resources.requests.storage`: Enter the size of the file storage, in gigabytes (Gi). After your storage is provisioned, you cannot change the size of your file storage. Make sure to specify a size that matches the amount of data that you want to store.
      • `spec.resources.requests.iops`: This option is available for the custom storage classes only (`ibmc-file-custom` and `ibmc-file-retain-custom`). Specify the total IOPS for the storage, selecting a multiple of 100 within the allowable range. If you choose an IOPS other than one that is listed, the IOPS is rounded up.

      If you want to use a customized storage class, create your PVC with the corresponding storage class name, a valid IOPS, and size.
      {: tip}
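
      If you change the default storage class, you might also want to remove the default annotation from the previous default class so that only one storage class is marked as the default. A minimal sketch, assuming ibmc-file-bronze was the previous default:

      kubectl patch storageclass ibmc-file-bronze -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
      

      {: pre}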
      
      2. Create the PVC.

        kubectl apply -f mypvc.yaml
        

        {: pre}

      3. Verify that your PVC is created and bound to the PV. This process can take a few minutes.

        kubectl describe pvc mypvc
        

        {: pre}

        Example output:

        Name:		mypvc
        Namespace:	default
        StorageClass:	""
        Status:		Bound
        Volume:		pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
        Labels:		<none>
        Capacity:	20Gi
        Access Modes:	RWX
        Events:
          FirstSeen	LastSeen	Count	From								SubObjectPath	Type		Reason			Message
          ---------	--------	-----	----								-------------	--------	------			-------
          3m		3m		1	{ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 }			Normal		Provisioning		External provisioner is provisioning volume for claim "default/my-persistent-volume-claim"
          3m		1m		10	{persistentvolume-controller }							Normal		ExternalProvisioning	cannot find provisioner "ibm.io/ibmc-file", expecting that a volume for the claim is provisioned either manually or via external software
          1m		1m		1	{ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 }			Normal		ProvisioningSucceeded	Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
        
        

        {: screen}
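
        Optionally, list the PVs in your cluster to see the PV that was dynamically created for your claim. The PV name matches the Volume field in the previous output.

        kubectl get pv
        

        {: pre}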

      4. {: #app_volume_mount}To mount the storage to your deployment, create a configuration .yaml file and specify the PVC that binds the PV.

        If you have an app that requires a non-root user to write to the persistent storage, or an app that requires that the mount path is owned by the root user, see Adding non-root user access to NFS file storage or Enabling root permission for NFS file storage. {: tip}

        apiVersion: apps/v1beta1
        kind: Deployment
        metadata:
          name: <deployment_name>
          labels:
            app: <deployment_label>
        spec:
          selector:
            matchLabels:
              app: <app_name>
          template:
            metadata:
              labels:
                app: <app_name>
            spec:
              containers:
              - image: <image_name>
                name: <container_name>
                volumeMounts:
                - name: <volume_name>
                  mountPath: /<file_path>
              volumes:
              - name: <volume_name>
                persistentVolumeClaim:
                  claimName: <pvc_name>
        

        {: codeblock}

        Understanding the YAML file components:

        • `metadata.labels.app`: A label for the deployment.
        • `spec.selector.matchLabels.app` and `spec.template.metadata.labels.app`: A label for your app.
        • `spec.containers.image`: The name of the image that you want to use. To list available images in your {{site.data.keyword.registryshort_notm}} account, run `ibmcloud cr image-list`.
        • `spec.containers.name`: The name of the container that you want to deploy to your cluster.
        • `spec.containers.volumeMounts.mountPath`: The absolute path of the directory where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical file storage instance. To create directories in your physical file storage instance, create subdirectories in your mount path.
        • `spec.containers.volumeMounts.name`: The name of the volume to mount to your pod.
        • `volumes.name`: The name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
        • `volumes.persistentVolumeClaim.claimName`: The name of the PVC that binds the PV that you want to use.
      5. Create the deployment.

        kubectl apply -f <local_yaml_path>
        

        {: pre}

      6. Verify that the PV is successfully mounted.

        kubectl describe deployment <deployment_name>
        

        {: pre}

        The mount point is in the Volume Mounts field and the volume is in the Volumes field.

         Volume Mounts:
              /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
              /volumemount from myvol (rw)
        ...
        Volumes:
          myvol:
            Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:	mypvc
            ReadOnly:	false
        

        {: screen}
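
        Optionally, you can check from inside a pod of your deployment that the NFS share is mounted at the mount path. A quick sketch, assuming a pod name from your deployment and the /volumemount path from the previous example; the df utility must be available in your container image.

        kubectl exec <pod_name> -- df -h /volumemount
        

        {: pre}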


## Using existing file storage in your cluster

{: #existing_file}

      If you have an existing physical storage device that you want to use in your cluster, you can manually create the PV and PVC to statically provision the storage.

      Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

### Step 1: Preparing your existing storage

      Before you can start to mount your existing storage to an app, you must retrieve all necessary information for your PV and prepare the storage to be accessible in your cluster.

      For storage that was provisioned with a retain storage class:
      If you provisioned storage with a retain storage class and you remove the PVC, the PV and the physical storage device are not automatically removed. To reuse the storage in your cluster, you must remove the remaining PV first.

      To use existing storage in a different cluster than the one where you provisioned it, follow the steps for storage that was created outside of the cluster to add the storage to the subnet of your worker node. {: tip}

      1. List existing PVs.

        kubectl get pv
        

        {: pre}

        Look for the PV that belongs to your persistent storage. The PV is in a released state.

      2. Get the details of the PV.

        kubectl describe pv <pv_name>
        

        {: pre}

      3. Note the CapacityGb, storageClass, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, server, and path.

      4. Remove the PV.

        kubectl delete pv <pv_name>
        

        {: pre}

      5. Verify that the PV is removed.

        kubectl get pv
        

        {: pre}


      For persistent storage that was provisioned outside the cluster:
      If you want to use existing storage that you provisioned earlier, but never used in your cluster before, you must make the storage available in the same subnet as your worker nodes.

      If you have a Dedicated account, you must open a support case. {: note}

      1. {: #external_storage}From the IBM Cloud infrastructure (SoftLayer) portal, click Storage.
      2. Click File Storage and from the Actions menu, select Authorize Host.
      3. Select Subnets.
      4. From the drop-down list, select the private VLAN subnet that your worker node is connected to. To find the subnet of your worker node, run ibmcloud ks workers <cluster_name> and compare the Private IP of your worker node with the subnet that you found in the drop-down list.
      5. Click Submit.
      6. Click the name of the file storage.
      7. Note the Mount Point, the size, and the Location field. The Mount Point field is displayed as <nfs_server>:<file_storage_path>.

### Step 2: Creating a persistent volume (PV) and a matching persistent volume claim (PVC)

      1. Create a storage configuration file for your PV. Include the values that you retrieved earlier.

        apiVersion: v1
        kind: PersistentVolume
        metadata:
         name: mypv
         labels:
            failure-domain.beta.kubernetes.io/region: <region>
            failure-domain.beta.kubernetes.io/zone: <zone>
        spec:
         capacity:
           storage: "<size>"
         accessModes:
           - ReadWriteMany
         nfs:
           server: "<nfs_server>"
           path: "<file_storage_path>"
        

        {: codeblock}

        Understanding the YAML file components:

        • `metadata.name`: Enter the name of the PV object to create.
        • `metadata.labels`: Enter the region and the zone that you retrieved earlier. You must have at least one worker node in the same region and zone as your persistent storage to mount the storage in your cluster. If a PV for your storage already exists, [add the zone and region label](cs_storage_basics.html#multizone) to your PV.
        • `spec.capacity.storage`: Enter the storage size of the existing NFS file share that you retrieved earlier. The storage size must be written in gigabytes, for example, 20Gi (20 GB) or 1000Gi (1 TB), and the size must match the size of the existing file share.
        • `spec.accessModes`: Specify one of the following options:
          • ReadWriteMany: The PVC can be mounted by multiple pods. All pods can read from and write to the volume.
          • ReadOnlyMany: The PVC can be mounted by multiple pods. All pods have read-only access.
          • ReadWriteOnce: The PVC can be mounted by one pod only. This pod can read from and write to the volume.
        • `spec.nfs.server`: Enter the NFS file share server ID that you retrieved earlier.
        • `spec.nfs.path`: Enter the path to the NFS file share that you retrieved earlier.
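
        For reference, here is a filled-in sketch of the PV configuration. The region, zone, size, NFS server, and path are hypothetical placeholders; use the values that you retrieved in Step 1.

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: mypv
          labels:
            failure-domain.beta.kubernetes.io/region: us-south
            failure-domain.beta.kubernetes.io/zone: dal13
        spec:
          capacity:
            storage: "20Gi"
          accessModes:
            - ReadWriteMany
          nfs:
            # Hypothetical server and path; copy these from the Mount Point of your file storage instance.
            server: "fsf-dal1301a-fz.adn.networklayer.com"
            path: "/IBM01SEV1234567_100/data01"
        

        {: codeblock}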
      2. Create the PV in your cluster.

        kubectl apply -f mypv.yaml
        

        {: pre}

      3. Verify that the PV is created.

        kubectl get pv
        

        {: pre}

      4. Create another configuration file to create your PVC. In order for the PVC to match the PV that you created earlier, you must choose the same value for storage and accessMode. The storage-class field must be empty. If any of these fields do not match the PV, a new PV and a new physical storage instance are dynamically provisioned instead.

        kind: PersistentVolumeClaim
        apiVersion: v1
        metadata:
         name: mypvc
         annotations:
           volume.beta.kubernetes.io/storage-class: ""
        spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: "<size>"
        

        {: codeblock}

      5. Create your PVC.

        kubectl apply -f mypvc.yaml
        

        {: pre}

      6. Verify that your PVC is created and bound to the PV. This process can take a few minutes.

        kubectl describe pvc mypvc
        

        {: pre}

        Example output:

        Name: mypvc
        Namespace: default
        StorageClass:	""
        Status: Bound
        Volume: pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
        Labels: <none>
        Capacity: 20Gi
        Access Modes: RWX
        Events:
          FirstSeen LastSeen Count From        SubObjectPath Type Reason Message
          --------- -------- ----- ----        ------------- -------- ------ -------
          3m 3m 1 {ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 } Normal Provisioning External provisioner is provisioning volume for claim "default/my-persistent-volume-claim"
          3m 1m	 10 {persistentvolume-controller } Normal ExternalProvisioning cannot find provisioner "ibm.io/ibmc-file", expecting that a volume for the claim is provisioned either manually or via external software
          1m 1m 1 {ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 } Normal ProvisioningSucceeded	Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
        

        {: screen}

      You successfully created a PV and bound it to a PVC. Cluster users can now mount the PVC to their deployments and start reading from and writing to the PV object.
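
      To mount the PVC in an app, reference it in the volumes section of your deployment, as shown in the earlier mount example. A minimal sketch that assumes a hypothetical nginx deployment and the mypvc claim from this section:

        apiVersion: apps/v1beta1
        kind: Deployment
        metadata:
          name: mydeployment
          labels:
            app: myapp
        spec:
          selector:
            matchLabels:
              app: myapp
          template:
            metadata:
              labels:
                app: myapp
            spec:
              containers:
              - image: nginx
                name: mycontainer
                volumeMounts:
                - name: myvol
                  mountPath: /mnt/data
              volumes:
              - name: myvol
                persistentVolumeClaim:
                  claimName: mypvc
        

        {: codeblock}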


## Using file storage in a stateful set

{: #file_statefulset}

      If you have a stateful app such as a database, you can create stateful sets that use file storage to store your app's data. Alternatively, you can use an {{site.data.keyword.Bluemix_notm}} database-as-a-service and store your data in the cloud. {: shortdesc}

      What do I need to be aware of when adding file storage to a stateful set?
      To add storage to a stateful set, you specify your storage configuration in the volumeClaimTemplates section of your stateful set YAML. The volumeClaimTemplates section is the basis for your PVC and can include the storage class and the size or IOPS of the file storage that you want to provision. However, if you include labels in your volumeClaimTemplates, Kubernetes does not include these labels when it creates the PVC. Instead, you must add the labels directly to your stateful set.

      You cannot deploy two stateful sets at the same time. If you try to create a stateful set before a different one is fully deployed, then the deployment of your stateful set might lead to unexpected results. {: important}

      How can I create my stateful set in a specific zone?
      In a multizone cluster, you can specify the zone and region where you want to create your stateful set in the spec.selector.matchLabels and spec.template.metadata.labels section of your stateful set YAML. Alternatively, you can add those labels to a customized storage class and use this storage class in the volumeClaimTemplates section of your stateful set.

      What options do I have to add file storage to a stateful set?
      If you want to automatically create your PVC when you create the stateful set, use dynamic provisioning. You can also choose to pre-provision your PVCs or use existing PVCs with your stateful set.

### Dynamically provision the PVC when you create a stateful set

{: #dynamic_statefulset}

      Use this option if you want to automatically create the PVC when you create the stateful set. {: shortdesc}

      Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

      1. Verify that all existing stateful sets in your cluster are fully deployed. If a stateful set is still being deployed, you cannot start creating your stateful set. You must wait until all stateful sets in your cluster are fully deployed to avoid unexpected results.

        1. List existing stateful sets in your cluster.

          kubectl get statefulset --all-namespaces
          

          {: pre}

          Example output:

          NAME              DESIRED   CURRENT   AGE
          mystatefulset     3         3         6s
          

          {: screen}

        2. View the Pods Status of each stateful set to ensure that the deployment of the stateful set is finished.

          kubectl describe statefulset <statefulset_name>
          

          {: pre}

          Example output:

          Name:               nginx
          Namespace:          default
          CreationTimestamp:  Fri, 05 Oct 2018 13:22:41 -0400
          Selector:           app=nginx,billingType=hourly,region=us-south,zone=dal10
          Labels:             app=nginx
                              billingType=hourly
                              region=us-south
                              zone=dal10
          Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta1","kind":"StatefulSet","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"podManagementPolicy":"Par...
          Replicas:           3 desired | 3 total
          Pods Status:        0 Running / 3 Waiting / 0 Succeeded / 0 Failed
          Pod Template:
            Labels:  app=nginx
                     billingType=hourly
                     region=us-south
                     zone=dal10
          ...
          

          {: screen}

          A stateful set is fully deployed when the number of replicas that you find in the Replicas section of your CLI output equals the number of Running pods in the Pods Status section. If a stateful set is not fully deployed yet, wait until the deployment is finished before you proceed.

      2. Create a configuration file for your stateful set and the service that you use to expose the stateful set. The following example shows how to deploy nginx as a stateful set with 3 replicas. For each replica, a 20 gigabyte file storage device is provisioned based on the specifications defined in the ibmc-file-retain-bronze storage class. All storage devices are provisioned in the dal10 zone. Because file storage cannot be accessed from other zones, all replicas of the stateful set are also deployed onto a worker node that is located in dal10.

        apiVersion: v1
        kind: Service
        metadata:
         name: nginx
         labels:
           app: nginx
        spec:
         ports:
         - port: 80
           name: web
         clusterIP: None
         selector:
           app: nginx
        ---
        apiVersion: apps/v1beta1
        kind: StatefulSet
        metadata:
         name: nginx
        spec:
         serviceName: "nginx"
         replicas: 3
         podManagementPolicy: Parallel
         selector:
           matchLabels:
             app: nginx
             billingType: "hourly"
             region: "us-south"
             zone: "dal10"
         template:
           metadata:
             labels:
               app: nginx
               billingType: "hourly"
               region: "us-south"
               zone: "dal10"
           spec:
             containers:
             - name: nginx
               image: k8s.gcr.io/nginx-slim:0.8
               ports:
               - containerPort: 80
                 name: web
               volumeMounts:
               - name: myvol
                 mountPath: /usr/share/nginx/html
         volumeClaimTemplates:
         - metadata:
             annotations:
               volume.beta.kubernetes.io/storage-class: ibmc-file-retain-bronze
             name: myvol
           spec:
             accessModes:
             - ReadWriteOnce
             resources:
               requests:
                 storage: 20Gi
                 iops: "300" #required only for performance storage
        

        {: codeblock}

        Understanding the stateful set YAML file components:

        • `metadata.name`: Enter a name for your stateful set. The name that you enter is used to create the name for your PVC in the format: <volume_name>-<statefulset_name>-<replica_number>.
        • `spec.serviceName`: Enter the name of the service that you want to use to expose your stateful set.
        • `spec.replicas`: Enter the number of replicas for your stateful set.
        • `spec.podManagementPolicy`: Enter the pod management policy that you want to use for your stateful set. Choose between the following options:
          • OrderedReady: With this option, stateful set replicas are deployed one after another. For example, if you specified 3 replicas, then Kubernetes creates the PVC for your first replica, waits until the PVC is bound, deploys the stateful set replica, and mounts the PVC to the replica. After the deployment is finished, the second replica is deployed. For more information about this option, see [OrderedReady Pod Management ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management).
          • Parallel: With this option, the deployment of all stateful set replicas is started at the same time. If your app supports parallel deployment of replicas, then use this option to save deployment time for your PVCs and stateful set replicas.
        • `spec.selector.matchLabels`: Enter all labels that you want to include in your stateful set and your PVC. Labels that you include in the volumeClaimTemplates of your stateful set are not recognized by Kubernetes. Sample labels that you might want to include are:
          • region and zone: If you want all your stateful set replicas and PVCs to be created in one specific zone, add both labels. You can also specify the zone and region in the storage class that you use. If you do not specify a zone and region and you have a multizone cluster, the zone in which your storage is provisioned is selected on a round-robin basis to balance volume requests evenly across all zones.
          • billingType: Enter the billing type that you want to use for your PVCs. Choose between hourly or monthly. If you do not specify this label, all PVCs are created with an hourly billing type.
        • `spec.template.metadata.labels`: Enter the same labels that you added to the spec.selector.matchLabels section.
        • `spec.volumeClaimTemplates.metadata.annotations.volume.beta.kubernetes.io/storage-class`: Enter the storage class that you want to use. To list existing storage classes, run `kubectl get storageclasses | grep file`. If you do not specify a storage class, the PVC is created with the default storage class that is set in your cluster. Make sure that the default storage class uses the ibm.io/ibmc-file provisioner so that your stateful set is provisioned with file storage.
        • `spec.volumeClaimTemplates.metadata.name`: Enter a name for your volume. Use the same name that you defined in the spec.containers.volumeMounts.name section. The name that you enter here is used to create the name for your PVC in the format: <volume_name>-<statefulset_name>-<replica_number>.
        • `spec.volumeClaimTemplates.spec.resources.requests.storage`: Enter the size of the file storage in gigabytes (Gi).
        • `spec.volumeClaimTemplates.spec.resources.requests.iops`: If you want to provision [performance storage](#predefined_storageclass), enter the number of IOPS. If you use an endurance storage class and specify a number of IOPS, the number of IOPS is ignored. Instead, the IOPS that is specified in your storage class is used.
      3. Create your stateful set.

        kubectl apply -f statefulset.yaml
        

        {: pre}

      4. Wait for your stateful set to be deployed.

        kubectl describe statefulset <statefulset_name>
        

        {: pre}

        To see the current status of your PVCs, run kubectl get pvc. The name of your PVC is formatted as <volume_name>-<statefulset_name>-<replica_number>. {: tip}
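
        For the example stateful set in this section, the kubectl get pvc output might look similar to the following sketch; the exact columns and volume names vary by cluster and Kubernetes version.

          NAME            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
          myvol-nginx-0   Bound     pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2   20Gi       RWO            ibmc-file-retain-bronze   5m
          myvol-nginx-1   Bound     pvc-1e898182-3a67-11e7-aafc-eef80dd2dea2   20Gi       RWO            ibmc-file-retain-bronze   5m
          myvol-nginx-2   Bound     pvc-2f9a9293-3a67-11e7-aafc-eef80dd2dea2   20Gi       RWO            ibmc-file-retain-bronze   5m
          

          {: screen}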

### Pre-provisioning the PVC before creating the stateful set

{: #static_statefulset}

      You can pre-provision your PVCs before creating your stateful set or use existing PVCs with your stateful set. {: shortdesc}

      When you dynamically provision your PVCs when creating the stateful set, the name of the PVC is assigned based on the values that you used in the stateful set YAML file. In order for the stateful set to use existing PVCs, the name of your PVCs must match the name that would automatically be created when using dynamic provisioning.

      Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

      1. Follow steps 1-3 in Adding file storage to apps to create a PVC for each stateful set replica. Make sure that you create your PVC with a name that follows this format: <volume_name>-<statefulset_name>-<replica_number>.

        • <volume_name>: Use the name that you want to specify in the spec.volumeClaimTemplates.metadata.name section of your stateful set, such as nginxvol.
        • <statefulset_name>: Use the name that you want to specify in the metadata.name section of your stateful set, such as nginx_statefulset.
        • <replica_number>: Enter the number of your replica starting with 0.

        For example, if you must create 3 stateful set replicas, create 3 PVCs with the following names: nginxvol-nginx_statefulset-0, nginxvol-nginx_statefulset-1, and nginxvol-nginx_statefulset-2.

      2. Follow the steps in Dynamically provision the PVC when you create a stateful set to create your stateful set. Make sure to use the values from your PVC names in the stateful set specification:

        • spec.volumeClaimTemplates.metadata.name: Enter the <volume_name> that you used in the previous step.
        • metadata.name: Enter the <statefulset_name> that you used in the previous step.
        • spec.replicas: Enter the number of replicas that you want to create for your stateful set. The number of replicas must equal the number of PVCs that you created earlier.

        If you created your PVCs in different zones, do not include a region or zone label in your stateful set. {: note}

      3. Verify that the PVCs are used in your stateful set replica pods.

        1. List the pods in your cluster. Identify the pods that belong to your stateful set.

          kubectl get pods
          

          {: pre}

        2. Verify that your existing PVC is mounted to your stateful set replica. Review the ClaimName in the Volumes section of your CLI output.

          kubectl describe pod <pod_name>
          

          {: pre}

          Example output:

          Name:           nginx-0
          Namespace:      default
          Node:           10.xxx.xx.xxx/10.xxx.xx.xxx
          Start Time:     Fri, 05 Oct 2018 13:24:59 -0400
          ...
          Volumes:
            myvol:
              Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
              ClaimName:  myvol-nginx-0
          

        ...

        {: screen}
        
        

## Changing the default NFS version

{: #nfs_version}

      The NFS version of your file storage determines the protocol that is used to communicate with the {{site.data.keyword.Bluemix_notm}} file storage server. By default, all file storage instances are set up with NFS version 4. You can change your existing PV to an older NFS version if your app requires a specific version to function properly. {: shortdesc}

      To change the default NFS version, you can either create a new storage class to dynamically provision file storage in your cluster, or choose to change an existing PV that is mounted to your pod.

      To apply the latest security updates and to get better performance, use the default NFS version and do not change to an older NFS version. {: important}

      To create a customized storage class with the desired NFS version:

      1. Create a customized storage class with the NFS version that you want to provision.

      2. Create the storage class in your cluster.

        kubectl apply -f nfsversion_storageclass.yaml
        

        {: pre}

      3. Verify that the customized storage class was created.

        kubectl get storageclasses
        

        {: pre}

      4. Provision file storage with your customized storage class.
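
        For example, a PVC that references the sample customized storage class with a specific NFS version (named ibmc-file-mount in the [sample customized storage classes](#nfs_version_class) section) might look like the following sketch. Adjust the claim name, size, and storage class name to your setup.

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: mypvc
          annotations:
            volume.beta.kubernetes.io/storage-class: "ibmc-file-mount"
        spec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 20Gi
        

        {: codeblock}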

      To change your existing PV to use a different NFS version:

      1. Get the PV of the file storage where you want to change the NFS version and note the name of the PV.

        kubectl get pv
        

        {: pre}

      2. Add an annotation to your PV. Replace <version_number> with the NFS version that you want to use. For example, to change to NFS version 3.0, enter 3.

        kubectl patch pv <pv_name> -p '{"metadata": {"annotations":{"volume.beta.kubernetes.io/mount-options":"vers=<version_number>"}}}'
        

        {: pre}

      3. Delete the pod that uses the file storage and re-create the pod.

        1. Save the pod yaml to your local machine.

          kubectl get pod <pod_name> -o yaml > <filepath/pod.yaml>
          

          {: pre}

        2. Delete the pod.

          kubectl delete pod <pod_name>
          

          {: pre}

        3. Re-create the pod.

          kubectl apply -f pod.yaml
          

          {: pre}

      4. Wait for the pod to deploy.

        kubectl get pods
        

        {: pre}

        The pod is fully deployed when the status changes to Running.

      5. Log in to your pod.

        kubectl exec -it <pod_name> sh
        

        {: pre}

      6. Verify that the file storage was mounted with the NFS version that you specified earlier.

        mount | grep "nfs" | awk -F" |," '{ print $5, $8 }'
        

        {: pre}

        Example output:

        nfs vers=3.0
        

        {: screen}


## Backing up and restoring data

{: #backup_restore}

      File storage is provisioned into the same location as the worker nodes in your cluster. The storage is hosted on clustered servers by IBM to provide availability in case a server goes down. However, file storage is not backed up automatically and might be inaccessible if the entire location fails. To protect your data from being lost or damaged, you can set up periodic backups that you can use to restore your data when needed. {: shortdesc}

      Review the following backup and restore options for your file storage:

### Set up periodic snapshots

      You can [set up periodic snapshots for your file storage](/docs/infrastructure/FileStorage/snapshots.html), which is a read-only image that captures the state of the instance at a point in time. To store the snapshot, you must request snapshot space on your file storage. Snapshots are stored on the existing storage instance within the same zone. You can restore data from a snapshot if a user accidentally removes important data from the volume.

      Note: If you have a Dedicated account, you must [open a support case](/docs/get-support/howtogetsupport.html#getting-customer-support).


      To create a snapshot for your volume:
      1. [Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster](cs_cli_install.html#cs_cli_configure).
      2. Log in to the `ibmcloud sl` CLI.
        ibmcloud sl init
      3. List existing PVs in your cluster.
        kubectl get pv
      4. Get the details for the PV for which you want to create snapshot space and note the volume ID, the size and the IOPS.
        kubectl describe pv <pv_name>
        The volume ID, the size and the IOPS can be found in the Labels section of your CLI output.
      5. Order snapshot space for your existing volume with the parameters that you retrieved in the previous step.
        ibmcloud sl file snapshot-order <volume_ID> --size <size> --tier <iops>
      6. Wait for the snapshot space to be provisioned.
        ibmcloud sl file volume-detail <volume_ID>
        The snapshot space is successfully provisioned when the Snapshot Size (GB) value in your CLI output changes from 0 to the size that you ordered.
      7. Create the snapshot for your volume and note the ID of the snapshot that is created for you.
        ibmcloud sl file snapshot-create <volume_ID>
      8. Verify that the snapshot is created successfully.
        ibmcloud sl file snapshot-list <volume_ID>
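
      For example, assuming a hypothetical volume ID of 12345678, 20 GB of snapshot space, and a volume that was provisioned at the 4 IOPS/GB tier, the sequence might look like this:

        ibmcloud sl file snapshot-order 12345678 --size 20 --tier 4
        ibmcloud sl file snapshot-create 12345678
        ibmcloud sl file snapshot-list 12345678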

      To restore data from a snapshot to an existing volume:
      ibmcloud sl file snapshot-restore <volume_ID> <snapshot_ID>

### Replicate snapshots to another zone

      To protect your data from a zone failure, you can [replicate snapshots](/docs/infrastructure/FileStorage/replication.html#replicating-data) to a file storage instance that is set up in another zone. Data can be replicated from the primary storage to the backup storage only. You cannot mount a replicated file storage instance to a cluster. When your primary storage fails, you can manually set your replicated backup storage to be the primary one. Then, you can mount it to your cluster. After your primary storage is restored, you can restore the data from the backup storage. Note: If you have a Dedicated account, you cannot replicate snapshots to another zone.

### Duplicate storage

      You can [duplicate your file storage instance](/docs/infrastructure/FileStorage/how-to-create-duplicate-volume.html#creating-a-duplicate-file-storage) in the same zone as the original storage instance. A duplicate has the same data as the original storage instance at the point in time when you create the duplicate. Unlike a replica, the duplicate is an independent storage instance that you can use separately from the original. To duplicate, first [set up snapshots for the volume](/docs/infrastructure/FileStorage/snapshots.html). Note: If you have a Dedicated account, you must open a support case.

### Back up data to {{site.data.keyword.cos_full}}

      You can use the [**ibm-backup-restore image**](/docs/services/RegistryImages/ibm-backup-restore/index.html#ibmbackup_restore_starter) to spin up a backup and restore pod in your cluster. This pod contains a script to run a one-time or periodic backup for any persistent volume claim (PVC) in your cluster. Data is stored in your {{site.data.keyword.cos_full}} instance that you set up in a zone.

      To make your data even more highly available and protect your app from a zone failure, set up a second {{site.data.keyword.cos_full}} instance and replicate data across zones. If you need to restore data from your {{site.data.keyword.cos_full}} instance, use the restore script that is provided with the image.

### Copy data to and from pods and containers

      You can use the `kubectl cp` [command![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/kubectl/overview/#cp) to copy files and directories to and from pods or specific containers in your cluster.

      Before you begin: [Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster](cs_cli_install.html#cs_cli_configure). If you do not specify a container with `-c`, the command uses the first available container in the pod.

      You can use the command in various ways:

      • Copy data from your local machine to a pod in your cluster:
        kubectl cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath>
      • Copy data from a pod in your cluster to your local machine:
        kubectl cp <namespace>/<pod>:<pod_filepath>/<filename> <local_filepath>/<filename>
      • Copy data from your local machine to a specific container that runs in a pod in your cluster:
        kubectl cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath> -c <container>
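
      For example, to copy a local file named backup.txt into a hypothetical pod named mypod in the default namespace, and then copy it back out to your local machine:

        kubectl cp ./backup.txt default/mypod:/tmp/backup.txt
        kubectl cp default/mypod:/tmp/backup.txt ./backup-restored.txt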

## Storage class reference

{: #storageclass_reference}

### Bronze

{: #bronze}

File storage class: bronze

| Characteristics | Setting |
|-----------------|---------|
| Name | `ibmc-file-bronze`, `ibmc-file-retain-bronze` |
| Type | [Endurance storage](/docs/infrastructure/FileStorage/index.html#provisioning-with-endurance-tiers) |
| File system | NFS |
| IOPS per gigabyte | 2 |
| Size range in gigabytes | 20-12000 Gi |
| Hard disk | SSD |
| Billing | Hourly |
| Pricing | [Pricing info ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.ibm.com/cloud/file-storage/pricing) |

### Silver

{: #silver}

File storage class: silver

| Characteristics | Setting |
|-----------------|---------|
| Name | `ibmc-file-silver`, `ibmc-file-retain-silver` |
| Type | [Endurance storage](/docs/infrastructure/FileStorage/index.html#provisioning-with-endurance-tiers) |
| File system | NFS |
| IOPS per gigabyte | 4 |
| Size range in gigabytes | 20-12000 Gi |
| Hard disk | SSD |
| Billing | Hourly |
| Pricing | [Pricing info ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.ibm.com/cloud/file-storage/pricing) |

### Gold

{: #gold}

File storage class: gold

| Characteristics | Setting |
|-----------------|---------|
| Name | `ibmc-file-gold`, `ibmc-file-retain-gold` |
| Type | [Endurance storage](/docs/infrastructure/FileStorage/index.html#provisioning-with-endurance-tiers) |
| File system | NFS |
| IOPS per gigabyte | 10 |
| Size range in gigabytes | 20-4000 Gi |
| Hard disk | SSD |
| Billing | Hourly |
| Pricing | [Pricing info ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.ibm.com/cloud/file-storage/pricing) |

### Custom

{: #custom}

File storage class: custom

| Characteristics | Setting |
|-----------------|---------|
| Name | `ibmc-file-custom`, `ibmc-file-retain-custom` |
| Type | [Performance](/docs/infrastructure/FileStorage/index.html#provisioning-with-performance) |
| File system | NFS |
| IOPS and size | See the size and IOPS ranges in the list after this table. |
| Hard disk | The IOPS to gigabyte ratio determines the type of hard disk that is provisioned. To determine your IOPS to gigabyte ratio, divide the IOPS by the size of your storage. Example: You chose 500Gi of storage with 100 IOPS. Your ratio is 0.2 (100 IOPS/500Gi). Ratios of less than or equal to 0.3 are provisioned on SATA hard disks; ratios greater than 0.3 are provisioned on SSD hard disks. |
| Billing | Hourly |
| Pricing | [Pricing info ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.ibm.com/cloud/file-storage/pricing) |

Size range in gigabytes / IOPS range in multiples of 100:

• 20-39 Gi / 100-1000 IOPS
• 40-79 Gi / 100-2000 IOPS
• 80-99 Gi / 100-4000 IOPS
• 100-499 Gi / 100-6000 IOPS
• 500-999 Gi / 100-10000 IOPS
• 1000-1999 Gi / 100-20000 IOPS
• 2000-2999 Gi / 200-40000 IOPS
• 3000-3999 Gi / 200-48000 IOPS
• 4000-7999 Gi / 300-48000 IOPS
• 8000-9999 Gi / 500-48000 IOPS
• 10000-12000 Gi / 1000-48000 IOPS

## Sample customized storage classes

{: #custom_storageclass}

You can create a customized storage class and use the storage class in your PVC. {: shortdesc}

{{site.data.keyword.containerlong_notm}} provides pre-defined storage classes to provision file storage with a particular tier and configuration. In some cases, you might want to provision storage with a different configuration that is not covered in the pre-defined storage classes. You can use the examples in this topic to find sample customized storage classes.

To create your customized storage class, see Customizing a storage class. Then, use your customized storage class in your PVC.

### Specifying the zone for multizone clusters

{: #multizone_yaml}

The following .yaml file customizes a storage class that is based on the ibmc-file-silver non-retaining storage class: the type is "Endurance", the iopsPerGB is 4, the sizeRange is "[20-12000]Gi", and the reclaimPolicy is set to "Delete". The zone is specified as dal12. You can review the previous information on the ibmc storage classes to help you choose acceptable values for these parameters.

      apiVersion: storage.k8s.io/v1beta1
      kind: StorageClass
      metadata:
        name: ibmc-file-silver-mycustom-storageclass
        labels:
          kubernetes.io/cluster-service: "true"
      provisioner: ibm.io/ibmc-file
      parameters:
        zone: "dal12"
        region: "us-south"
        type: "Endurance"
        iopsPerGB: "4"
        sizeRange: "[20-12000]Gi"
        reclaimPolicy: "Delete"
        classVersion: "2"
      

      {: codeblock}
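
      After you create the customized storage class in your cluster, you can reference it from a PVC. A minimal sketch that assumes the class name from the previous example and a 24Gi claim:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ibmc-file-silver-mycustom-storageclass"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 24Gi
      

      {: codeblock}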

### Changing the default NFS version

{: #nfs_version_class}

The following customized storage class is based on the ibmc-file-bronze storage class and lets you define the NFS version that you want to provision. For example, to provision NFS version 3.0, replace <nfs_version> with 3.0.

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: ibmc-file-mount
        #annotations:
        #  storageclass.beta.kubernetes.io/is-default-class: "true"
        labels:
          kubernetes.io/cluster-service: "true"
      provisioner: ibm.io/ibmc-file
      parameters:
        type: "Endurance"
        iopsPerGB: "2"
        sizeRange: "[1-12000]Gi"
        reclaimPolicy: "Delete"
        classVersion: "2"
        mountOptions: nfsvers=<nfs_version>
      

      {: codeblock}