---

copyright:
  years: 2014, 2018
lastupdated: "2018-11-13"

---
{:new_window: target="_blank"} {:shortdesc: .shortdesc} {:screen: .screen} {:pre: .pre} {:table: .aria-labeledby="caption"} {:codeblock: .codeblock} {:tip: .tip} {:note: .note} {:important: .important} {:deprecated: .deprecated} {:download: .download}

# Backing up and restoring data in persistent volumes
{: #backup_restore}

File shares and block storage are provisioned into the same zone as your cluster. The storage is hosted on clustered servers by {{site.data.keyword.IBM_notm}} to provide availability in case a server goes down. However, file shares and block storage are not backed up automatically and might be inaccessible if the entire zone fails. To protect your data from being lost or damaged, you can set up periodic backups that you can use to restore your data when needed.
{: shortdesc}

Review the following backup and restore options for your NFS file shares and block storage:

## Set up periodic snapshots

You can set up periodic snapshots for your NFS file share or block storage, which is a read-only image that captures the state of the instance at a point in time. To store the snapshot, you must request snapshot space on your NFS file share or block storage. Snapshots are stored on the existing storage instance within the same zone. You can restore data from a snapshot if a user accidentally removes important data from the volume.

To create a snapshot for your volume:

1.  List existing PVs in your cluster.

    ```
    kubectl get pv
    ```
    {: pre}

2.  Get the details of the PV for which you want to create snapshot space, and note the volume ID, the size, and the IOPS.

    ```
    kubectl describe pv <pv_name>
    ```
    {: pre}

    For file storage, you can find the volume ID, the size, and the IOPS in the **Labels** section of your CLI output. For block storage, the size and the IOPS are shown in the **Labels** section of your CLI output. To find the volume ID, review the `ibm.io/network-storage-id` annotation of your CLI output.

3.  Create snapshot space for your existing volume with the parameters that you retrieved in the previous step.

    For file storage:

    ```
    slcli file snapshot-order --capacity <size> --tier <iops> <volume_id>
    ```
    {: pre}

    For block storage:

    ```
    slcli block snapshot-order --capacity <size> --tier <iops> <volume_id>
    ```
    {: pre}

4.  Wait for the snapshot space to be provisioned.

    For file storage:

    ```
    slcli file volume-detail <volume_id>
    ```
    {: pre}

    For block storage:

    ```
    slcli block volume-detail <volume_id>
    ```
    {: pre}

    The snapshot space is successfully provisioned when the **Snapshot Capacity (GB)** value in your CLI output changes from 0 to the size that you ordered.

5.  Create the snapshot for your volume and note the ID of the snapshot that is created for you.

    For file storage:

    ```
    slcli file snapshot-create <volume_id>
    ```
    {: pre}

    For block storage:

    ```
    slcli block snapshot-create <volume_id>
    ```
    {: pre}

6.  Verify that the snapshot is created successfully.

    For file storage:

    ```
    slcli file volume-detail <snapshot_id>
    ```
    {: pre}

    For block storage:

    ```
    slcli block volume-detail <snapshot_id>
    ```
    {: pre}

To restore data from a snapshot to an existing volume:

For file storage:

```
slcli file snapshot-restore -s <snapshot_id> <volume_id>
```
{: pre}

For block storage:

```
slcli block snapshot-restore -s <snapshot_id> <volume_id>
```
{: pre}

For more information, see:
- [NFS periodic snapshots](/docs/infrastructure/FileStorage/snapshots.html)
- [Block periodic snapshots](/docs/infrastructure/BlockStorage/snapshots.html#snapshots)
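The wait in step 4 can be scripted. The following is a minimal sketch, not part of the official tooling: the `snapshot_capacity` helper is hypothetical and assumes the `slcli` table output uses `|` separators and contains a `Snapshot Capacity (GB)` row, so adjust the parsing for your CLI version.

```shell
#!/bin/sh
# Hypothetical helper: read `slcli file volume-detail` output on stdin and
# print the value of the "Snapshot Capacity (GB)" row.
snapshot_capacity() {
  awk -F'|' '/Snapshot Capacity \(GB\)/ { gsub(/ /, "", $3); print $3 }'
}

# In a real run you would poll the CLI, for example:
#   until [ "$(slcli file volume-detail <volume_id> | snapshot_capacity)" != "0" ]; do
#     sleep 30
#   done
# Here we parse a captured sample line instead of calling slcli.
sample='| Snapshot Capacity (GB) | 20 |'
capacity=$(printf '%s\n' "$sample" | snapshot_capacity)
echo "$capacity"
```

A nonzero capacity indicates that the ordered snapshot space is ready, matching the check described in step 4.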

## Replicate snapshots to another zone

To protect your data from a zone failure, you can [replicate snapshots](/docs/infrastructure/FileStorage/replication.html#replicating-data) to an NFS file share or block storage instance that is set up in another zone. Data can be replicated from the primary storage to the backup storage only. You cannot mount a replicated NFS file share or block storage instance to a cluster. When your primary storage fails, you can manually set your replicated backup storage to be the primary one. Then, you can mount it to your cluster. After your primary storage is restored, you can restore the data from the backup storage.

For more information, see:

- [Replicate snapshots for NFS](/docs/infrastructure/FileStorage/replication.html)
- [Replicate snapshots for Block](/docs/infrastructure/BlockStorage/replication.html#replicating-data)

## Duplicate storage

You can duplicate your NFS file share or block storage instance in the same zone as the original storage instance. A duplicate has the same data as the original storage instance at the point in time that you create the duplicate. Unlike a replica, the duplicate is an independent storage instance that you can use separately from the original. To create a duplicate, you must first set up snapshot space for the volume.

For more information, see:

- [NFS duplicate snapshots](/docs/infrastructure/FileStorage/how-to-create-duplicate-volume.html#creating-a-duplicate-file-storage)
- [Block duplicate snapshots](/docs/infrastructure/BlockStorage/how-to-create-duplicate-volume.html#creating-a-duplicate-block-volume)

## Back up data to Object Storage

You can use the [**ibm-backup-restore image**](/docs/services/RegistryImages/ibm-backup-restore/index.html#ibmbackup_restore_starter) to spin up a backup and restore pod in your cluster. This pod contains a script to run a one-time or periodic backup for any persistent volume claim (PVC) in your cluster. Data is stored in your {{site.data.keyword.objectstoragefull}} instance that you set up in a zone.

To make your data even more highly available and protect your app from a zone failure, set up a second {{site.data.keyword.objectstoragefull}} instance and replicate data across zones. If you need to restore data from your {{site.data.keyword.objectstoragefull}} instance, use the restore script that is provided with the image.

## Copy data to and from pods and containers

You can use the `kubectl cp` [command![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/kubectl/overview/#cp) to copy files and directories to and from pods or specific containers in your cluster.

Before you begin: [Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster](cs_cli_install.html#cs_cli_configure). If you do not specify a container with `-c`, the command defaults to the first available container in the pod.

You can use the command in various ways:

- Copy data from your local machine to a pod in your cluster:

  ```
  kubectl cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath>
  ```
  {: pre}

- Copy data from a pod in your cluster to your local machine:

  ```
  kubectl cp <namespace>/<pod>:<pod_filepath>/<filename> <local_filepath>/<filename>
  ```
  {: pre}

- Copy data from a pod in your cluster to a specific container in another pod:

  ```
  kubectl cp <namespace>/<pod>:<pod_filepath> <namespace>/<other_pod>:<pod_filepath> -c <container>
  ```
  {: pre}
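As a concrete illustration of the first form, the sketch below fills in hypothetical values (namespace `default`, pod `my-pod`, local file `./backup/data.txt`) and only assembles and prints the command so that you can review it before running it against a real cluster.

```shell
#!/bin/sh
# Hypothetical example values; substitute your own namespace, pod, and paths.
NAMESPACE=default
POD=my-pod
LOCAL_FILE=./backup/data.txt
POD_PATH=/tmp/data.txt

# Assemble the copy command (printed rather than executed in this sketch).
CMD="kubectl cp ${LOCAL_FILE} ${NAMESPACE}/${POD}:${POD_PATH}"
echo "$CMD"
```

Running the printed command copies the local file into the pod's file system at the given path, provided that the pod is running and your cluster context is set.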