
Commit

add doc for node-agent memory preserve
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
Lyndon-Li committed Aug 30, 2024
1 parent 3408ffe commit c540104
Showing 2 changed files with 11 additions and 0 deletions.
1 change: 1 addition & 0 deletions changelogs/unreleased/8167-Lyndon-Li
@@ -0,0 +1 @@
Partially fix issue #8138, add doc for node-agent memory preserve
10 changes: 10 additions & 0 deletions site/content/docs/main/file-system-backup.md
@@ -641,6 +641,16 @@ Both the uploader and repository consume remarkable CPU/memory during the backup
Velero node-agent uses [BestEffort as the QoS][14] for node-agent pods (so no CPU/memory request/limit is set), so that backups/restores won't fail due to resource throttling in any case.
If you want to constrain the CPU/memory usage, you need to [customize the resource limits][15]. The CPU/memory consumption is always related to the scale of data to be backed up/restored; refer to [Performance Guidance][16] for more details. It is highly recommended that you perform your own testing to find the best resource limits for your data.
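As a minimal sketch, resource limits could be applied to the node-agent DaemonSet with `kubectl patch`, assuming the default `velero` namespace and the DaemonSet name `node-agent`; the request/limit values below are illustrative only and should come from your own testing:
```
kubectl patch daemonset node-agent -n velero --type json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/resources",
   "value": {"requests": {"cpu": "1", "memory": "1Gi"},
             "limits":   {"cpu": "4", "memory": "4Gi"}}}
]'
```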

For the Kopia path, the node-agent preserves some memory to avoid frequent memory allocations, so after you run a file-system backup/restore you won't see the node-agent release all of its memory. There is a limit on this memory preservation, so memory usage won't grow indefinitely. The limit varies with the number of CPU cores in the cluster nodes, as calculated below:
```
preservedMemoryInOneNode = 128M + 24M * numOfCPUCores
```
The memory preservation only happens on the nodes where backups/restores have ever occurred. Assuming file-system backups/restores occur on every worker node and each node has the same number of CPU cores, the maximum possible preserved memory in your cluster is:
```
totalPreservedMemory = (128M + 24M * numOfCPUCores) * numOfWorkerNodes
```
However, whether and when this limit is reached depends on the data you are backing up/restoring.
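For example, assuming (purely for illustration) 3 worker nodes each with 8 CPU cores, the upper bound works out to:
```
preservedMemoryInOneNode = 128M + 24M * 8 = 320M
totalPreservedMemory     = 320M * 3       = 960M
```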

During a restore, the repository may also cache data/metadata to reduce the network footprint and speed up the restore. The repository uses its own policy to store and clean up the cache.
For the Kopia repository, the cache is stored in the node-agent pod's root file system. Velero allows you to configure a limit on the cache size so that the node-agent pod won't be evicted due to running out of ephemeral storage. For more details, check [Backup Repository Configuration][18].
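As a sketch of what such a configuration could look like, assuming a backup repository ConfigMap in the `velero` namespace with a cache size limit for Kopia repositories (the ConfigMap name and the exact value are illustrative; check [Backup Repository Configuration][18] for the authoritative field names and for how the ConfigMap is referenced):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config
  namespace: velero
data:
  # Per-repository-type configuration; cacheLimitMB caps the local cache size (in MB)
  kopia: |
    {
      "cacheLimitMB": 2048
    }
```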
