TL;DR: Attention sinks in Diffusion Language Models are transient, not stable anchors, so the AR heuristic of "always keep sinks" breaks. We identify and prune them instead, beating strong pruning baselines at matched compute.
Diffusion Language Models (DLMs) generate text through iterative denoising over multiple timesteps, a fundamentally different paradigm from autoregressive (AR) models. Yet existing pruning methods blindly inherit AR assumptions, including the popular heuristic of preserving attention sink tokens.
We show this assumption does not transfer to DLMs:
| Property | AR LLMs | Diffusion LLMs |
|---|---|---|
| Sink spatial concentration | High | Low (distributed) |
| Sink temporal stability | Near-zero variance | High variance |
| Sink positions across steps | Fixed (prefix tokens) | Shift progressively as denoising advances |
| "Always keep sinks" heuristic | Beneficial | Suboptimal |
Sink-Aware Pruning is a diffusion-native pruning strategy that:
- Measures sink variance over the full denoising trajectory
- Identifies unstable sinks whose positions shift significantly across timesteps
- Prunes them, reducing redundant global attention without hurting quality (a minimal sketch follows this list)
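A minimal sketch of how such instability could be measured, assuming attention maps have been collected from one denoising pass. The names `attn_maps`, `sink_positions`, `sink_instability`, and `top_k` are illustrative, not the paper's API or its exact criterion:

```python
import torch

def sink_positions(attn, top_k=4):
    """Key positions receiving the most attention mass at one timestep."""
    # attn: [num_heads, seq_len, seq_len]; sum attention received over heads and queries
    mass = attn.sum(dim=(0, 1))                      # [seq_len]
    return set(mass.topk(top_k).indices.tolist())

def sink_instability(attn_maps, top_k=4):
    """Per-position instability score over a denoising trajectory.

    0.0  -> the position is a sink at every timestep (AR-style stable anchor);
    near 1.0 -> the position acts as a sink only briefly (ephemeral sink).
    """
    seq_len = attn_maps[0].shape[-1]
    hits = torch.zeros(seq_len)
    for attn in attn_maps:                           # one attention map per denoising step
        for pos in sink_positions(attn, top_k):
            hits[pos] += 1
    return 1.0 - hits / len(attn_maps)
```

Positions with high instability are the "unstable sinks" the method targets; stable, AR-like anchors score near zero and are left alone.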
Figure: Overview of the Sink-Aware Pruning pipeline. (1) Compute attention mass to identify sink tokens and derive per-token down-weighting factors. (2) Update activations by zeroing out sink-token rows. (3) Apply standard pruning metrics (Wanda or SparseGPT) using the modified activations. (4) Make final pruning decisions based on the updated importance scores.
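A minimal sketch of the four steps above applied to a single linear layer, using a Wanda-style metric |W| · ‖X‖₂ on the modified activations. The function name `prune_layer_sink_aware`, the threshold `tau`, the hard 0/1 mask (standing in for continuous down-weighting factors), and the calibration shapes are assumptions for illustration, not the released implementation:

```python
import torch

def prune_layer_sink_aware(weight, calib_acts, attn_maps, sparsity=0.5, tau=0.02):
    """weight: [out_features, in_features]; calib_acts: [n_samples, seq_len, in_features]."""
    # (1) Attention mass per token, averaged over denoising steps; positions above
    #     the (illustrative) threshold `tau` are treated as sink tokens.
    mass = torch.stack([a.sum(dim=(0, 1)) for a in attn_maps]).mean(dim=0)
    sink_mask = (mass / mass.sum()) > tau            # [seq_len] bool

    # (2) Update activations: zero out sink-token rows so they no longer
    #     dominate the per-channel input statistics.
    acts = calib_acts.clone()
    acts[:, sink_mask, :] = 0.0

    # (3) Wanda-style importance on the modified activations:
    #     score = |W| * L2 norm of the corresponding input channel.
    x_norm = acts.reshape(-1, acts.shape[-1]).norm(p=2, dim=0)   # [in_features]
    importance = weight.abs() * x_norm.unsqueeze(0)              # [out, in]

    # (4) Final decision: per output row, drop the lowest-scoring weights.
    k = int(weight.shape[1] * sparsity)
    thresh = importance.kthvalue(k, dim=1, keepdim=True).values
    return weight * (importance > thresh)
```

The same recipe plugs into SparseGPT by feeding it the sink-zeroed calibration activations instead of modifying the Wanda score directly.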
```
AR LLMs: sink position  ────────────────────────────  (stable)
DLMs:    sink position  ╱╲ ╱╲╱ ╲╱╲ ╱╲╱ ╲╱╲            (drifts!)
                        early denoising → late denoising
```
Sinks in DLMs are ephemeral: they matter at certain timesteps (high-noise global structure formation) and fade later. Preserving them wastes the sparsity budget on positions that won't persist.
Sink-Aware Pruning consistently matches or outperforms Wanda and SparseGPT baselines across 8 benchmarks, with gains growing under aggressive compression.
Gains are most pronounced at higher sparsity, where avoiding mispriced sink weights has the highest impact on model utility.
| Sparsity | Method | Avg | MMLU | ARC-C | PIQA | WinoG | GSM8K | HellaSwag |
|---|---|---|---|---|---|---|---|---|
| 0% | Dense | 57.93 | 65.97 | 43.00 | 74.10 | 69.30 | 69.29 | 72.70 |
| 50% | Wanda | 52.70 | 61.43 | 39.08 | 72.63 | 64.56 | 57.01 | 67.52 |
| 50% | Sink-Aware (Wanda) | 53.18 | 62.16 | 41.38 | 73.18 | 65.27 | 55.88 | 67.18 |
| 50% | SparseGPT | 52.34 | 60.97 | 39.68 | 72.20 | 64.64 | 53.53 | 66.90 |
| 50% | Sink-Aware (SparseGPT) | 52.36 | 60.79 | 39.59 | 72.95 | 65.82 | 52.11 | 67.35 |

| Pruning Ratio | Method | PIQA | WinoG | ARC-E | ARC-C |
|---|---|---|---|---|---|
| 0.3 | Baseline | 0.6834 | 0.6630 | 0.6907 | 0.3780 |
| 0.3 | Sink-Aware | 0.6955 | 0.6740 | 0.7175 | 0.3820 |
| 0.5 | Baseline | 0.5898 | 0.5572 | 0.4853 | 0.2039 |
| 0.5 | Sink-Aware | 0.6037 | 0.5724 | 0.5279 | 0.2362 |
Full results for Dream 7B, LLaDA-1.5, and MMaDA are available in the paper.
Code coming soon! Star the repo to get notified.
If you find this work useful, please consider citing:
```bibtex
@article{myrzakhan2025sinkawarepruning,
  title   = {Sink-Aware Pruning for Diffusion Language Models},
  author  = {Myrzakhan, Aidar and Li, Tianyi and Guo, Bowei and Tang, Shengkun and Shen, Zhiqiang},
  journal = {arXiv preprint arXiv:2602.17664},
  year    = {2026}
}
```