
Conversation

@javak87 (Contributor) commented Oct 27, 2025

Description

Gradient checkpointing is used in several parts of the code. Being able to enable or disable it in specific sections gives us the flexibility to trade recomputation for GPU memory, so we can maximize performance where memory allows and conserve memory where it does not.
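As a rough sketch of the intended mechanism (the `EmbedBlock` class, its constructor argument, and the wiring below are illustrative assumptions rather than the actual WeatherGenerator code; only the idea of gating `torch.utils.checkpoint` behind a config flag reflects this PR):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class EmbedBlock(nn.Module):
    """Illustrative stand-in for one block of the embed transformer."""

    def __init__(self, dim: int, gradient_checkpointing: bool):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.gradient_checkpointing = gradient_checkpointing

    def _block(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(x, x, x, need_weights=False)[0]
        return x + self.mlp(x)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.gradient_checkpointing and self.training:
            # Trade compute for memory: activations of _block are recomputed
            # during the backward pass instead of being stored.
            return checkpoint(self._block, x, use_reentrant=False)
        # Flag off: keep activations in GPU memory, no recomputation in backward.
        return self._block(x)
```

In the real code the flag would presumably come from the model config (e.g. `embed_gradient_checkpoint_mode` in default_config.yml) rather than being passed directly to the constructor as shown here.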

Issue Number

Refs #1141

Checklist before asking for review

  • I have performed a self-review of my code
  • My changes comply with basic sanity checks:
    • I have fixed formatting issues with ./scripts/actions.sh lint
    • I have run unit tests with ./scripts/actions.sh unit-test
    • I have documented my code and I have updated the docstrings.
    • I have added unit tests, if relevant
  • I have tried my changes with data and code:
    • I have run the integration tests with ./scripts/actions.sh integration-test
    • (bigger changes) I have run a full training and I have written in the comment the run_id(s): launch-slurm.py --time 60
    • (bigger changes and experiments) I have shared a HedgeDoc in the GitHub issue with all the configurations and runs for these experiments
  • I have informed and aligned with people impacted by my change:
    • for config changes: the MatterMost channels and/or a design doc
    • for changes of dependencies: the MatterMost software development channel

Comparing the baseline and current PR performance

When embed_gradient_checkpoint_mode is set to false, the runtime performance and peak GPU memory are as follows:

default_config.yml:

../WeatherGenerator-private/hpc/launch-slurm.py --time 60
(screenshot: embed_gradient_checkpoint_mode)

mixed.yml:

../WeatherGenerator-private/hpc/launch-slurm.py --time 60 --config ./config/mixed.yml

(screenshot: embed_gradient_checkpoint_mode_mixed_dataset)

For GPUs with more than 25 GiB of memory, it is recommended to set embed_gradient_checkpoint_mode to false.

@javak87 javak87 changed the title Add flexibility to enable or disable gradient checkpointing in the embed transformer Make gradient checkpointing configurable in the embed transformer Oct 27, 2025
@clessig (Collaborator) commented Oct 27, 2025

Do we have evidence of the effect of changing the gradient checkpointing, both in terms of performance and of which models we can run without running out of memory?

@javak87 (Contributor, Author) commented Oct 27, 2025

We have evidence that checkpointing can hurt performance.

Here is what happened when I activated checkpointing for the embed transformer:

(screenshot: checkpoiting_embed_trasformer_true)

The red block shows that, to save memory, weathergen.model.attention.MultiSelfAttentionHead.forward is recomputed during the backward pass, which hurts performance.

When checkpointing is switched off (avoiding recomputation but using more GPU memory), here are the results:

(screenshot: nsys_logs_rank0_embed_gradient_checkpoint_mode_false)

It shows that in the last section of the backward pass, weathergen.model.attention.MultiSelfAttentionHead.forward is not called again.
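The recomputation is easy to reproduce outside the model. The minimal PyTorch sketch below (generic code, not WeatherGenerator's) counts how often a module's forward runs: with checkpointing the forward is executed a second time during the backward pass, without it the stored activations are reused and no extra forward call occurs.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CountingMLP(nn.Module):
    """Toy module that counts its forward invocations."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
        self.forward_calls = 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.forward_calls += 1
        return self.net(x)


def run(use_checkpoint: bool) -> int:
    torch.manual_seed(0)
    model = CountingMLP()
    x = torch.randn(8, 64, requires_grad=True)
    if use_checkpoint:
        y = checkpoint(model, x, use_reentrant=False)
    else:
        y = model(x)
    y.sum().backward()
    return model.forward_calls


print("forward calls with checkpointing:   ", run(True))   # 2: forward rerun in backward
print("forward calls without checkpointing:", run(False))  # 1: activations stored
```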

Regarding which models can benefit from checkpointing while remaining efficient, I am preparing further PRs that show the results for each section individually. At the end, I will determine which checkpoints should be switched on or off to avoid OOM errors while keeping the backward pass efficient.

