Releases: vmware-tanzu/velero
v1.16.0
v1.16
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0
Container Image
velero/velero:v1.16.0
Documentation
Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
Highlights
Windows cluster support
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, whether stateful or stateless:
- Hybrid build and all-in-one image: the build process is enhanced to build an all-in-one image covering hybrid CPU architectures and platforms. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/multiple-arch-build-with-windows.md
- Deployment in Windows clusters: the Velero node-agent, data mover pods, and maintenance jobs now support running on both Linux and Windows nodes
- Data mover backup/restore of Windows workloads: Velero's built-in data mover supports Windows workloads throughout the full cycle, i.e., discovery, backup, restore, pre/post hooks, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue #8289 for more information.
Parallel Item Block backup
v1.16 supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks, and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves backup throughput, especially when there is a large number of resources.
Pre/post hooks also belong to item blocks, so they run in parallel along with the item blocks.
Users can configure the parallelism through the --item-block-worker-count Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue #8334.
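A minimal sketch of setting the worker count on an existing install; the flag name comes from these notes, while the deployment name and namespace assume a default Velero install:

```bash
# Append the worker-count flag to the Velero server args (sketch; assumes
# the default "velero" deployment in the "velero" namespace).
kubectl -n velero patch deployment velero --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--item-block-worker-count=4"}]'
```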
Data mover restore enhancement in scalability
In previous releases, for each volume in WaitForFirstConsumer mode, data mover restore was only allowed to happen on the node the volume is attached to. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restore (#8044).
In v1.16, users can configure data mover restores to run and spread evenly across all nodes in the cluster. The configuration is done through a new ignoreDelayBinding flag in the node-agent configuration (#8242).
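A sketch of a node-agent configuration carrying this flag; the restorePVC/ignoreDelayBinding layout follows this note and #8242, and the ConfigMap name and data key are illustrative assumptions to verify against the v1.16 docs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config   # illustrative name; passed to node-agent via its configMap parameter
  namespace: velero
data:
  node-agent-config.json: |
    {
      "restorePVC": {
        "ignoreDelayBinding": true
      }
    }
```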
Data mover enhancements in observability
In 1.16, some observability enhancements are added:
- The statuses of intermediate objects are output on failures of data mover backup/restore (#8267)
- Errors are output when Velero fails to delete intermediate objects during cleanup (#8125)
The output goes to the node-agent log and is enabled automatically.
CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location; during restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue #8725.
Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, some enhancements are added to backup repository maintenance to improve observability and resiliency:
- A new backup repository maintenance history section, called RecentMaintenance, is added to the BackupRepository CR. For each BackupRepository, it records the start/completion time, completion status, and error message of recent maintenance runs. (#7810)
- Running maintenance jobs are now recaptured after the Velero server restarts. (#7753)
- The maintenance job is not launched for a readOnly BackupStorageLocation. (#8238)
- The backup repository does not try to initialize a new repository for a readOnly BackupStorageLocation. (#8091)
- Users can now configure the intervals of effective maintenance as normalGC, fastGC, or eagerGC, through the fullMaintenanceInterval parameter in the backupRepository configuration, as sketched below. (#8364)
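A hedged sketch of that configuration; the per-repository-type data key ("kopia") and the field name follow these notes and #8364, but verify the exact schema against the v1.16 backup repository configuration docs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config   # illustrative name
  namespace: velero
data:
  kopia: |
    {
      "fullMaintenanceInterval": "fastGC"
    }
```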
Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (#8256).
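A sketch of a volume policy using the new PVC label condition; the pvcLabels key follows #8256/#8713, though the exact condition name should be verified against the v1.16 resource filtering docs:

```yaml
version: v1
volumePolicies:
  # Skip volumes whose PVC carries the label environment=staging (sketch).
  - conditions:
      pvcLabels:
        environment: staging
    action:
      type: skip
```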
Resource Status restore per object
In v1.16, users can define whether to restore resource status per object, through a velero.io/restore-status annotation set on the object. (#8204).
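For example (a sketch; the annotation name comes from these notes, and per #8204 it is assumed to accept true/false values):

```bash
# Ask Velero to restore this object's status on restore (assumption: the
# annotation takes "true"/"false").
kubectl -n my-app annotate deployment my-deploy velero.io/restore-status=true
```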
Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper, and velero-restore-helper, are all included in the single Velero image. (#8484).
Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
Limitations/Known issues
Limitations of Windows support
- fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
- Backup/restore of Security Descriptors and NTFS extended attributes is not supported. As a result, backup/restore of workloads running with non-administrative privileges is not supported.
All Changes
- Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
- Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
- Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
- ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
- host_pods should not be mandatory to node-agent (#8774, @mpryc)
- Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
- Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
- Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
- Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
- Add doc for maintenance history (#8747, @Lyndon-Li)
- Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
- Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
- Fix issue #8475, refactor build-from-source doc for hybrid image build (#8729, @Lyndon-Li)
- Return directly if no pod volume backups are tracked (#8728, @ywk253100)
- Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
- Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
- Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
- Support pushing images to an insecure registry (#8703, @ywk253100)
- Modify golangci configuration to make it work. (#8695, @blackpiglet)
- Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
- Add docs for object level status restore (#8693, @shubham-pampattiwar)
- Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
- Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
- Fix #8657: WaitGroup panic issue (#8679, @ywk253100)
- Fixes issue #8214, validate --from-schedule flag in create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
- Implement parallel ItemBlock processing via backup_controller goroutines (#8659, @sseago)
- Clean up leaked CSI snapshot for incomplete backup (#8637, @reasonerjt)
- Handle update conflict when restoring the status (#8630, @ywk253100)
- Fix issue #8419, support repo maintenance job to run on Windows nodes (#8626, @Lyndon-Li)
- Always create DataUpload configmap in restore namespace (#8621, @sseago)
- Fix issue #8091, avoid to create new repo when BSL is readonly (#8615, @Lyndon-Li)
- Fix issue #8242, distribute dd evenly across nodes (#8611, @Lyndon-Li)
- Fix issue #8497, update du/dd progress on completion (#8608, @Lyndon-Li)
- Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
- Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
- Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
- Data mover restore for Windows (#8594, @Lyndon-Li)
- Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
- Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
- Configurable Kopia maintenance interval: backup-repository-configmap adds an option for a configurable fullMaintenanceInterval, where fastGC (12 hours) and eagerGC (6 hours) allow faster removal of deleted Velero backups from the Kopia repo. (#8581, @kaovilai)
- Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
- Clear validation errors when schedule is valid (#8575, @ywk253100)
- Merge restore helper image into Velero serv...
v1.16.0-rc.2
v1.16
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0-rc.2
Container Image
velero/velero:v1.16.0-rc.2
Documentation
Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
Highlights
Windows cluster support
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, whether stateful or stateless:
- Hybrid build and all-in-one image: the build process is enhanced to build an all-in-one image covering hybrid CPU architectures and platforms. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/multiple-arch-build-with-windows.md
- Deployment in Windows clusters: the Velero node-agent, data mover pods, and maintenance jobs now support running on both Linux and Windows nodes
- Data mover backup/restore of Windows workloads: Velero's built-in data mover supports Windows workloads throughout the full cycle, i.e., discovery, backup, restore, pre/post hooks, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue #8289 for more information.
Parallel Item Block backup
v1.16 supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks, and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves backup throughput, especially when there is a large number of resources.
Pre/post hooks also belong to item blocks, so they run in parallel along with the item blocks.
Users can configure the parallelism through the --item-block-worker-count Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue #8334.
Data mover restore enhancement in scalability
In previous releases, for each volume in WaitForFirstConsumer mode, data mover restore was only allowed to happen on the node the volume is attached to. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restore (#8044).
In v1.16, users can configure data mover restores to run and spread evenly across all nodes in the cluster. The configuration is done through a new ignoreDelayBinding flag in the node-agent configuration (#8242).
Data mover enhancements in observability
In 1.16, some observability enhancements are added:
- The statuses of intermediate objects are output on failures of data mover backup/restore (#8267)
- Errors are output when Velero fails to delete intermediate objects during cleanup (#8125)
The output goes to the node-agent log and is enabled automatically.
CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location; during restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue #8725.
Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, some enhancements are added to backup repository maintenance to improve observability and resiliency:
- A new backup repository maintenance history section, called RecentMaintenance, is added to the BackupRepository CR. For each BackupRepository, it records the start/completion time, completion status, and error message of recent maintenance runs. (#7810)
- Running maintenance jobs are now recaptured after the Velero server restarts. (#7753)
- The maintenance job is not launched for a readOnly BackupStorageLocation. (#8238)
- The backup repository does not try to initialize a new repository for a readOnly BackupStorageLocation. (#8091)
- Users can now configure the intervals of effective maintenance as normalGC, fastGC, or eagerGC, through the fullMaintenanceInterval parameter in the backupRepository configuration. (#8364)
Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (#8256).
Resource Status restore per object
In v1.16, users can define whether to restore resource status per object, through a velero.io/restore-status annotation set on the object. (#8204).
Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper, and velero-restore-helper, are all included in the single Velero image. (#8484).
Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
Limitations/Known issues
Limitations of Windows support
- fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
- Backup/restore of NTFS extended attributes/advanced features is not supported, e.g., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc.
All Changes
- Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
- Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
- Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
- ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
- host_pods should not be mandatory to node-agent (#8774, @mpryc)
- Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
- Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
- Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
- Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
- Add doc for maintenance history (#8747, @Lyndon-Li)
- Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
- Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
- Fix issue #8475, refactor build-from-source doc for hybrid image build (#8729, @Lyndon-Li)
- Return directly if no pod volume backups are tracked (#8728, @ywk253100)
- Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
- Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
- Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
- Support pushing images to an insecure registry (#8703, @ywk253100)
- Modify golangci configuration to make it work. (#8695, @blackpiglet)
- Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
- Add docs for object level status restore (#8693, @shubham-pampattiwar)
- Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
- Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
- Fix #8657: WaitGroup panic issue (#8679, @ywk253100)
- Fixes issue #8214, validate --from-schedule flag in create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
- Implement parallel ItemBlock processing via backup_controller goroutines (#8659, @sseago)
- Clean up leaked CSI snapshot for incomplete backup (#8637, @reasonerjt)
- Handle update conflict when restoring the status (#8630, @ywk253100)
- Fix issue #8419, support repo maintenance job to run on Windows nodes (#8626, @Lyndon-Li)
- Always create DataUpload configmap in restore namespace (#8621, @sseago)
- Fix issue #8091, avoid to create new repo when BSL is readonly (#8615, @Lyndon-Li)
- Fix issue #8242, distribute dd evenly across nodes (#8611, @Lyndon-Li)
- Fix issue #8497, update du/dd progress on completion (#8608, @Lyndon-Li)
- Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
- Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
- Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
- Data mover restore for Windows (#8594, @Lyndon-Li)
- Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
- Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
- Configurable Kopia maintenance interval: backup-repository-configmap adds an option for a configurable fullMaintenanceInterval, where fastGC (12 hours) and eagerGC (6 hours) allow faster removal of deleted Velero backups from the Kopia repo. (#8581, @kaovilai)
- Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
- Clear validation errors when schedule is valid (#8575, @ywk253100)
- Merge restore helper image into Velero server image (#8574...
v1.16.0-rc.1
v1.16
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0-rc.1
Container Image
velero/velero:v1.16.0-rc.1
Documentation
Upgrading
https://velero.io/docs/v1.16/upgrade-to-1.16/
Highlights
Windows cluster support
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, whether stateful or stateless:
- Hybrid build and all-in-one image: the build process is enhanced to build an all-in-one image covering hybrid CPU architectures and platforms. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/multiple-arch-build-with-windows.md
- Deployment in Windows clusters: the Velero node-agent, data mover pods, and maintenance jobs now support running on both Linux and Windows nodes
- Data mover backup/restore of Windows workloads: Velero's built-in data mover supports Windows workloads throughout the full cycle, i.e., discovery, backup, restore, pre/post hooks, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue #8289 for more information.
Parallel Item Block backup
v1.16 supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks, and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves backup throughput, especially when there is a large number of resources.
Pre/post hooks also belong to item blocks, so they run in parallel along with the item blocks.
Users can configure the parallelism through the --item-block-worker-count Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue #8334.
Data mover restore enhancement in scalability
In previous releases, for each volume in WaitForFirstConsumer mode, data mover restore was only allowed to happen on the node the volume is attached to. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restore (#8044).
In v1.16, users can configure data mover restores to run and spread evenly across all nodes in the cluster. The configuration is done through a new ignoreDelayBinding flag in the node-agent configuration (#8242).
Data mover enhancements in observability
In 1.16, some observability enhancements are added:
- The statuses of intermediate objects are output on failures of data mover backup/restore (#8267)
- Errors are output when Velero fails to delete intermediate objects during cleanup (#8125)
The output goes to the node-agent log and is enabled automatically.
CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location; during restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue #8725.
Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, some enhancements are added to backup repository maintenance to improve observability and resiliency:
- A new backup repository maintenance history section, called RecentMaintenance, is added to the BackupRepository CR. For each BackupRepository, it records the start/completion time, completion status, and error message of recent maintenance runs. (#7810)
- Running maintenance jobs are now recaptured after the Velero server restarts. (#7753)
- The maintenance job is not launched for a readOnly BackupStorageLocation. (#8238)
- The backup repository does not try to initialize a new repository for a readOnly BackupStorageLocation. (#8091)
- Users can now configure the intervals of effective maintenance as normalGC, fastGC, or eagerGC, through the fullMaintenanceInterval parameter in the backupRepository configuration. (#8364)
Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (#8256).
Resource Status restore per object
In v1.16, users can define whether to restore resource status per object, through a velero.io/restore-status annotation set on the object. (#8204).
Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper, and velero-restore-helper, are all included in the single Velero image. (#8484).
Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
Limitations/Known issues
Limitations of Windows support
- fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
- Backup/restore of NTFS extended attributes/advanced features is not supported, e.g., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc.
All Changes
- Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
- Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
- Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
- ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
- host_pods should not be mandatory to node-agent (#8774, @mpryc)
- Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
- Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
- Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
- Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
- Add doc for maintenance history (#8747, @Lyndon-Li)
- Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
- Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
- Fix issue #8475, refactor build-from-source doc for hybrid image build (#8729, @Lyndon-Li)
- Return directly if no pod volume backups are tracked (#8728, @ywk253100)
- Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
- Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
- Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
- Support pushing images to an insecure registry (#8703, @ywk253100)
- Modify golangci configuration to make it work. (#8695, @blackpiglet)
- Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
- Add docs for object level status restore (#8693, @shubham-pampattiwar)
- Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
- Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
- Fix #8657: WaitGroup panic issue (#8679, @ywk253100)
- Fixes issue #8214, validate --from-schedule flag in create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
- Implement parallel ItemBlock processing via backup_controller goroutines (#8659, @sseago)
- Clean up leaked CSI snapshot for incomplete backup (#8637, @reasonerjt)
- Handle update conflict when restoring the status (#8630, @ywk253100)
- Fix issue #8419, support repo maintenance job to run on Windows nodes (#8626, @Lyndon-Li)
- Always create DataUpload configmap in restore namespace (#8621, @sseago)
- Fix issue #8091, avoid to create new repo when BSL is readonly (#8615, @Lyndon-Li)
- Fix issue #8242, distribute dd evenly across nodes (#8611, @Lyndon-Li)
- Fix issue #8497, update du/dd progress on completion (#8608, @Lyndon-Li)
- Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
- Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
- Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
- Data mover restore for Windows (#8594, @Lyndon-Li)
- Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
- Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
- Configurable Kopia maintenance interval: backup-repository-configmap adds an option for a configurable fullMaintenanceInterval, where fastGC (12 hours) and eagerGC (6 hours) allow faster removal of deleted Velero backups from the Kopia repo. (#8581, @kaovilai)
- Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
- Clear validation errors when schedule is valid (#8575, @ywk253100)
- Merge restore helper image into Velero server image (#8574...
v1.15.2
v1.15.2
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.2
Container Image
velero/velero:v1.15.2
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
All Changes
v1.15.2-rc.1
v1.15.2
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.2-rc.1
Container Image
velero/velero:v1.15.2-rc.1
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
All Changes
v1.15.1
v1.15.1
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.1
Container Image
velero/velero:v1.15.1
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
All Changes
- Fix backup post hook issue #8159 (caused by #7571): always execute backup post hooks after PVBs are handled (#8517, @ywk253100)
- Fix issue #8125, log diagnostic info for data mover exposers when expose timeout (#8511, @Lyndon-Li)
- Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go (#8505, @kaovilai)
- Fix issue #8433, add third party labels to data mover pods when the same labels exist in node-agent pods (#8501, @Lyndon-Li)
- Fix issue #8485, add an accepted time so as to count the prepare timeout (#8496, @Lyndon-Li)
- Add SecurityContext to restore-helper (#8495, @reasonerjt)
- Add nil check for updating DataUpload VolumeInfo in finalizing phase. (#8465, @blackpiglet)
- Fix issue #8391, check ErrCancelled from suffix of data mover pod's termination message (#8404, @Lyndon-Li)
- Fix issue #8394, don't call closeDataPath in VGDP callbacks, otherwise, the VGDP cleanup will hang (#8402, @Lyndon-Li)
- Reduce minimum required go toolchain in release-1.15 go.mod (#8399, @kaovilai)
- Fix issue #8539, validate uploader types when o.CRDsOnly is set to false only since CRD installation doesn't rely on uploader types (#8540, @Lyndon-Li)
v1.15.1-rc.1
v1.15.1
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.1-rc.1
Container Image
velero/velero:v1.15.1-rc.1
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
Known Issues
There is a known issue that when repository maintenance jobs continuously fail, the repository may suffer significant performance degradation or even denial of service. Therefore, monitor the status of maintenance jobs and make sure failures are fixed, to avoid impact on your backups/restores.
All Changes
- Fix backup post hook issue #8159 (caused by #7571): always execute backup post hooks after PVBs are handled (#8517, @ywk253100)
- Fix issue #8125, log diagnostic info for data mover exposers when expose timeout (#8511, @Lyndon-Li)
- Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go (#8505, @kaovilai)
- Fix issue #8433, add third party labels to data mover pods when the same labels exist in node-agent pods (#8501, @Lyndon-Li)
- Fix issue #8485, add an accepted time so as to count the prepare timeout (#8496, @Lyndon-Li)
- Add SecurityContext to restore-helper (#8495, @reasonerjt)
- Add nil check for updating DataUpload VolumeInfo in finalizing phase. (#8465, @blackpiglet)
- Fix issue #8391, check ErrCancelled from suffix of data mover pod's termination message (#8404, @Lyndon-Li)
- Fix issue #8394, don't call closeDataPath in VGDP callbacks, otherwise, the VGDP cleanup will hang (#8402, @Lyndon-Li)
- Reduce minimum required go toolchain in release-1.15 go.mod (#8399, @kaovilai)
- Fix issue #8539, validate uploader types when o.CRDsOnly is set to false only since CRD installation doesn't rely on uploader types (#8540, @Lyndon-Li)
v1.15.0
v1.15
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.0
Container Image
velero/velero:v1.15.0
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
Highlights
Data mover micro service
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits:
- It avoids accessing volume data through the host path; host path access is privileged and may involve security escalations, which concern users.
- It enables users to control resource (i.e., CPU, memory) allocations in a granular manner, e.g., per backup/restore of a volume.
- It enhances resilience: a crash of one data movement activity won't affect others.
- It prevents unnecessary full backups caused by host path changes after workload pods restart.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to help achieve multi-threaded backups. Specifically, correlated resources are categorized into the same item block, and item blocks can be processed concurrently in multiple threads.
The ItemBlockAction plugin is introduced to help Velero categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs, and it also supports customized IBAs for any resource.
In v1.15, Velero doesn't support multi-threaded processing of item blocks, though the item block concepts and IBA plugins are fully supported. Multi-thread support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
Node selection for repository maintenance job
Repository maintenance jobs are resource-consuming tasks. Velero now allows you to configure the nodes that run repository maintenance jobs, so that you can run them on idle nodes or keep them away from nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
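A sketch of that configMap restricting maintenance jobs to particular nodes; the global/loadAffinity layout follows our reading of the v1.15 repository-maintenance docs, and the node label is a hypothetical example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: repo-maintenance-job-config   # illustrative name
  namespace: velero
data:
  repo-maintenance-job-config.json: |
    {
      "global": {
        "loadAffinity": [
          {
            "nodeSelector": {
              "matchLabels": {
                "workload": "idle"
              }
            }
          }
        ]
      }
    }
```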
Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to mount the backupPVCs read-only. This can significantly accelerate the data mover expose process for some storage providers (e.g., Ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storage class used by the data mover backupPods. This way, provisioning of backupPVCs doesn't need to adhere to the same pattern as workload PVCs; e.g., a backupPVC only needs one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
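A combined sketch of the backup PVC configuration (part of the node-agent configMap per the v1.15 docs); the entries are keyed by the source PVC's storage class, the readOnly/storageClass fields follow #8109/#8284, and the storage class names are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config   # illustrative name
  namespace: velero
data:
  node-agent-config.json: |
    {
      "backupPVC": {
        "workload-sc": {
          "storageClass": "backup-sc",
          "readOnly": true
        }
      }
    }
```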
Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, i.e., read, write, maintenance, etc. The cache consumes root file system space of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. This way, if your pod doesn't have enough space in its root file system, it won't be evicted for running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
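A sketch of the backup repository configuration configMap; the per-repository-type data key ("kopia") and the cacheLimitMB field (size in MB) follow our reading of the v1.15 docs and should be verified:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config   # illustrative name
  namespace: velero
data:
  kopia: |
    {
      "cacheLimitMB": 2048
    }
```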
Performance improvements
In 1.15, several performance-related issues/enhancements are included, which bring significant performance improvements in specific scenarios:
- There was a memory leak in the Velero server after plugin calls; it is now fixed, see issue #7925
- The client-burst/client-qps parameters are automatically inherited by plugins, so that you can use the same Velero server parameters to accelerate plugin executions when a large number of API server calls happen, see issue #7806
- Maintenance of a Kopia repository took huge amounts of memory in scenarios where a huge number of files had been backed up; Velero 1.15 includes the Kopia upstream enhancement fixing the problem, see issue #7510
Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
Limitations/Known issues
Read-only backup PVC may not work on SELinux environments
Due to an upstream Kubernetes issue, if a volume is mounted read-only in SELinux environments, read privilege is not granted to any user; as a result, the data mover backup fails. On the other hand, the backupPVC must be mounted read-only in order to accelerate the data mover expose process.
Therefore, a user option is added to the same backup PVC configuration configMap; once the option is enabled, the backupPod container runs as a super-privileged container with SELinux access control disabled. If you have concerns about this super-privileged container, or you have configured pod security admission and don't allow super-privileged containers, you will not be able to use this read-only backupPVC feature and will lose its acceleration of the data mover expose process.
Breaking changes
Deprecation of Restic
The Restic path for fs-backup enters the deprecation process starting from 1.15. According to the Velero deprecation policy, in 1.15 backups/restores of fs-backup using the Restic path are still created and still succeed, but you will see warnings in the scenarios below (an install sketch avoiding the Restic path follows this list):
- When --uploader-type=restic is used in the Velero installation
- When the Restic path is used to create a backup/restore with fs-backup
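To stay off the deprecated Restic path, install with the Kopia uploader instead; a minimal sketch (other install flags omitted):

```bash
# Install with the non-deprecated Kopia uploader (sketch; add your provider,
# bucket, and credential flags as usual).
velero install --use-node-agent --uploader-type=kopia
```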
node-agent configuration name is configurable
Previously, a fixed name was searched for the node-agent configuration configMap. Now in 1.15, Velero allows you to customize the name of the configMap; the name must then be specified via the node-agent server parameter node-agent-configmap.
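A sketch of wiring a custom name into the node-agent daemonset (the daemonset name and file name assume a default install and are illustrative):

```bash
# Create the configMap from a JSON file, then point node-agent at it via the
# server parameter named above.
kubectl -n velero create configmap my-node-agent-config --from-file=node-agent-config.json
kubectl -n velero patch daemonset node-agent --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--node-agent-configmap=my-node-agent-config"}]'
```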
Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the Velero server parameters below for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward compatibility, the same Velero server parameters are preserved as-is, but the configMap is recommended, and the values in the configMap take precedence if they exist in both places (a configMap sketch follows the list):
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
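A sketch of the configMap equivalents of the flags above; the keepLatestMaintenanceJobs/podResources field names follow our reading of the v1.15 repository-maintenance docs and should be verified before use:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: repo-maintenance-job-config   # illustrative name
  namespace: velero
data:
  repo-maintenance-job-config.json: |
    {
      "global": {
        "keepLatestMaintenanceJobs": 3,
        "podResources": {
          "cpuRequest": "100m",
          "cpuLimit": "200m",
          "memRequest": "100Mi",
          "memLimit": "200Mi"
        }
      }
    }
```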
Changing PVC selected-node feature is deprecated
In 1.15, the Changing PVC selected-node feature enters the deprecation process and will be removed in future releases according to the Velero deprecation policy. Use of this feature for any purpose is not recommended.
All Changes
- add no-relabeling option to backupPVC configmap (#8288, @sseago)
- only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
- Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
- Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
- Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
- Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
- Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
- Partially fix issue #8138, add doc for node-agent memory preserve (#8167, @Lyndon-Li)
- Pass Velero server command args to the plugins (#8166, @ywk253100)
- Fix issue #8155, Merge Kopia upstream commits for critical issue fixes and performance improvements (#8158, @Lyndon-Li)
- Implement the Repo maintenance Job configuration. (#8145, @blackpiglet)
- Add document for data mover micro service (#8144, @Lyndon-Li)
- Fix issue #8134, allow to config resource request/limit for data mover micro service pods (#8143, @Lyndon-Li)
- Apply backupPVCConfig to backupPod volume spec (#8141, @shubham-pampattiwar)
- Add resource modifier for velero restore describe CLI (#8139, @blackpiglet)
- Fix issue #7620, add doc for backup repo config (#8131, @Lyndon-Li)
- Modify E2E and perf test report generated directory (#8129, @blackpiglet)
- Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
- Delete generated k8s client and informer. (#8114, @blackpiglet)
- Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
- ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
- Fix issue #8032, make node-agent configMap name configurable (#8097, @Lyndon-Li)
...
v1.15.0-rc.2
v1.15
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.0-rc.2
Container Image
velero/velero:v1.15.0-rc.2
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
Highlights
Data mover micro service
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits:
- It avoids accessing volume data through the host path; host path access is privileged and may involve security escalations, which concern users.
- It enables users to control resource (i.e., CPU, memory) allocations in a granular manner, e.g., per backup/restore of a volume.
- It enhances resilience: a crash of one data movement activity won't affect others.
- It prevents unnecessary full backups caused by host path changes after workload pods restart.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to help achieve multi-threaded backups. Specifically, correlated resources are categorized into the same item block, and item blocks can be processed concurrently in multiple threads.
The ItemBlockAction plugin is introduced to help Velero categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs, and it also supports customized IBAs for any resource.
In v1.15, Velero doesn't support multi-threaded processing of item blocks, though the item block concepts and IBA plugins are fully supported. Multi-thread support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
Node selection for repository maintenance job
Repository maintenance jobs are resource-consuming tasks. Velero now allows you to configure the nodes that run repository maintenance jobs, so that you can run them on idle nodes or keep them away from nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to mount the backupPVCs read-only. This can significantly accelerate the data mover expose process for some storage providers (e.g., Ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storage class used by the data mover backupPods. This way, provisioning of backupPVCs doesn't need to adhere to the same pattern as workload PVCs; e.g., a backupPVC only needs one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, i.e., read, write, maintenance, etc. The cache consumes root file system space of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. This way, if your pod doesn't have enough space in its root file system, it won't be evicted for running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
Performance improvements
In 1.15, several performance-related issues/enhancements are included, which bring significant performance improvements in specific scenarios:
- There was a memory leak in the Velero server after plugin calls; it is now fixed, see issue #7925
- The client-burst/client-qps parameters are automatically inherited by plugins, so that you can use the same Velero server parameters to accelerate plugin executions when a large number of API server calls happen, see issue #7806
- Maintenance of a Kopia repository took huge amounts of memory in scenarios where a huge number of files had been backed up; Velero 1.15 includes the Kopia upstream enhancement fixing the problem, see issue #7510
Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
Limitations/Known issues
Read-only backup PVC may not work on SELinux environments
Due to an upstream Kubernetes issue, if a volume is mounted read-only in SELinux environments, read privilege is not granted to any user; as a result, the data mover backup fails. On the other hand, the backupPVC must be mounted read-only in order to accelerate the data mover expose process.
Therefore, a user option is added to the same backup PVC configuration configMap; once the option is enabled, the backupPod container runs as a super-privileged container with SELinux access control disabled. If you have concerns about this super-privileged container, or you have configured pod security admission and don't allow super-privileged containers, you will not be able to use this read-only backupPVC feature and will lose its acceleration of the data mover expose process.
Breaking changes
Deprecation of Restic
The Restic path for fs-backup enters the deprecation process starting from 1.15. According to the Velero deprecation policy, in 1.15 backups/restores of fs-backup using the Restic path are still created and still succeed, but you will see warnings in the scenarios below:
- When --uploader-type=restic is used in the Velero installation
- When the Restic path is used to create a backup/restore with fs-backup
node-agent configuration name is configurable
Previously, a fixed name was searched for the node-agent configuration configMap. Now in 1.15, Velero allows you to customize the name of the configMap; the name must then be specified via the node-agent server parameter node-agent-configmap.
Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the Velero server parameters below for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward compatibility, the same Velero server parameters are preserved as-is, but the configMap is recommended, and the values in the configMap take precedence if they exist in both places:
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
Changing PVC selected-node feature is deprecated
In 1.15, the Changing PVC selected-node feature enters the deprecation process and will be removed in future releases according to the Velero deprecation policy. Use of this feature for any purpose is not recommended.
All Changes
- add no-relabeling option to backupPVC configmap (#8288, @sseago)
- only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
- Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
- Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
- Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
- Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
- Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
- Partially fix issue #8138, add doc for node-agent memory preserve (#8167, @Lyndon-Li)
- Pass Velero server command args to the plugins (#8166, @ywk253100)
- Fix issue #8155, Merge Kopia upstream commits for critical issue fixes and performance improvements (#8158, @Lyndon-Li)
- Implement the Repo maintenance Job configuration. (#8145, @blackpiglet)
- Add document for data mover micro service (#8144, @Lyndon-Li)
- Fix issue #8134, allow to config resource request/limit for data mover micro service pods (#8143, @Lyndon-Li)
- Apply backupPVCConfig to backupPod volume spec (#8141, @shubham-pampattiwar)
- Add resource modifier for velero restore describe CLI (#8139, @blackpiglet)
- Fix issue #7620, add doc for backup repo config (#8131, @Lyndon-Li)
- Modify E2E and perf test report generated directory (#8129, @blackpiglet)
- Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
- Delete generated k8s client and informer. (#8114, @blackpiglet)
- Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
- ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
- Fix issue #8032, make node-agent configMap name configurable (#8097, @ly...
v1.15.0-rc.1
v1.15
Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.15.0-rc.1
Container Image
velero/velero:v1.15.0-rc.1
Documentation
Upgrading
https://velero.io/docs/v1.15/upgrade-to-1.15/
Highlights
Data mover micro service
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits:
- It avoids accessing volume data through the host path; host path access is privileged and may involve security escalations, which concern users.
- It enables users to control resource (i.e., CPU, memory) allocations in a granular manner, e.g., per backup/restore of a volume.
- It enhances resilience: a crash of one data movement activity won't affect others.
- It prevents unnecessary full backups caused by host path changes after workload pods restart.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to help achieve multi-threaded backups. Specifically, correlated resources are categorized into the same item block, and item blocks can be processed concurrently in multiple threads.
The ItemBlockAction plugin is introduced to help Velero categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs, and it also supports customized IBAs for any resource.
In v1.15, Velero doesn't support multi-threaded processing of item blocks, though the item block concepts and IBA plugins are fully supported. Multi-thread support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
Node selection for repository maintenance job
Repository maintenance jobs are resource-consuming tasks. Velero now allows you to configure the nodes that run repository maintenance jobs, so that you can run them on idle nodes or keep them away from nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to mount the backupPVCs read-only. This can significantly accelerate the data mover expose process for some storage providers (e.g., Ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storage class used by the data mover backupPods. This way, provisioning of backupPVCs doesn't need to adhere to the same pattern as workload PVCs; e.g., a backupPVC only needs one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, i.e., read, write, maintenance, etc. The cache consumes root file system space of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. This way, if your pod doesn't have enough space in its root file system, it won't be evicted for running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
Performance improvements
In 1.15, several performance-related issues/enhancements are included, which bring significant performance improvements in specific scenarios:
- There was a memory leak in the Velero server after plugin calls; it is now fixed, see issue #7925
- The client-burst/client-qps parameters are automatically inherited by plugins, so that you can use the same Velero server parameters to accelerate plugin executions when a large number of API server calls happen, see issue #7806
- Maintenance of a Kopia repository took huge amounts of memory in scenarios where a huge number of files had been backed up; Velero 1.15 includes the Kopia upstream enhancement fixing the problem, see issue #7510
Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
Limitations/Known issues
Read-only backup PVC may not work on SELinux environments
Due to an upstream Kubernetes issue, if a volume is mounted read-only in SELinux environments, read privilege is not granted to any user; as a result, the data mover backup fails. On the other hand, the backupPVC must be mounted read-only in order to accelerate the data mover expose process.
Therefore, a user option is added to the same backup PVC configuration configMap; once the option is enabled, the backupPod container runs as a super-privileged container with SELinux access control disabled. If you have concerns about this super-privileged container, or you have configured pod security admission and don't allow super-privileged containers, you will not be able to use this read-only backupPVC feature and will lose its acceleration of the data mover expose process.
Breaking changes
Deprecation of Restic
The Restic path for fs-backup enters the deprecation process starting from 1.15. According to the Velero deprecation policy, in 1.15 backups/restores of fs-backup using the Restic path are still created and still succeed, but you will see warnings in the scenarios below:
- When --uploader-type=restic is used in the Velero installation
- When the Restic path is used to create a backup/restore with fs-backup
node-agent configuration name is configurable
Previously, a fixed name was searched for the node-agent configuration configMap. Now in 1.15, Velero allows you to customize the name of the configMap; the name must then be specified via the node-agent server parameter node-agent-configmap.
Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the Velero server parameters below for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward compatibility, the same Velero server parameters are preserved as-is, but the configMap is recommended, and the values in the configMap take precedence if they exist in both places:
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
Changing PVC selected-node feature is deprecated
In 1.15, the Changing PVC selected-node feature enters the deprecation process and will be removed in future releases according to the Velero deprecation policy. Use of this feature for any purpose is not recommended.
All Changes
- add no-relabeling option to backupPVC configmap (#8288, @sseago)
- only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
- Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
- Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
- Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
- Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
- Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
- Partially fix issue #8138, add doc for node-agent memory preserve (#8167, @Lyndon-Li)
- Pass Velero server command args to the plugins (#8166, @ywk253100)
- Fix issue #8155, Merge Kopia upstream commits for critical issue fixes and performance improvements (#8158, @Lyndon-Li)
- Implement the Repo maintenance Job configuration. (#8145, @blackpiglet)
- Add document for data mover micro service (#8144, @Lyndon-Li)
- Fix issue #8134, allow to config resource request/limit for data mover micro service pods (#8143, @Lyndon-Li)
- Apply backupPVCConfig to backupPod volume spec (#8141, @shubham-pampattiwar)
- Add resource modifier for velero restore describe CLI (#8139, @blackpiglet)
- Fix issue #7620, add doc for backup repo config (#8131, @Lyndon-Li)
- Modify E2E and perf test report generated directory (#8129, @blackpiglet)
- Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
- Delete generated k8s client and informer. (#8114, @blackpiglet)
- Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
- ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
- Fix issue #8032, make node-agent configMap name configurable (#8097, @ly...