An Empirical Study of Attention and Diversity for Adaptive Visual Token Pruning in Large Vision-Language Models [ICLR 2026]
Changwoo Baek, Jouwon Song, Sohyeon Kim*, Kyeongbo Kong†
*Equal contribution, †Corresponding author
🌐 Project Page | 📄 Paper (Coming Soon)
- [2026/01] 🔥 Our paper has been accepted to ICLR 2026! 🎉
- [2026/02] 🌐 Project page is now live!
Large Vision-Language Models (LVLMs) have adopted visual token pruning strategies to mitigate the substantial computational overhead incurred by long visual token sequences. While prior work focuses primarily on either attention-based or diversity-based pruning, an in-depth analysis of the characteristics and limitations of each approach remains largely unexplored.
In this work, we conduct a thorough empirical analysis using effective rank (erank) as a measure of feature diversity, together with attention-score entropy, to investigate how LVLMs process visual tokens and to analyze the strengths and weaknesses of each approach.
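For readers who want a concrete reference point before the code release, below is a minimal sketch of how effective rank and attention entropy can be computed for a matrix of visual token features. The function names and normalization details are our illustrative choices, not the released implementation.

```python
# Minimal sketch (not the official implementation): effective rank (erank) of
# a visual-token feature matrix and the Shannon entropy of attention scores.
import torch

def effective_rank(features: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """erank(X) = exp(H(p)), where p are the L1-normalized singular values of X.

    features: (num_tokens, dim) matrix of visual token features.
    """
    s = torch.linalg.svdvals(features)           # singular values of the feature matrix
    p = s / (s.sum() + eps)                      # normalize into a distribution
    entropy = -(p * torch.log(p + eps)).sum()    # Shannon entropy of the spectrum
    return torch.exp(entropy)                    # exp-entropy = effective rank

def attention_entropy(attn: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Entropy of an attention distribution over visual tokens.

    attn: (num_tokens,) non-negative attention weights; renormalized for safety.
    """
    p = attn / (attn.sum() + eps)
    return -(p * torch.log(p + eps)).sum()
```

High attention entropy indicates attention spread across many tokens, while a high erank indicates that the token features span many directions, i.e., greater diversity.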
Our analysis reveals two key insights:
- Diversity-aware hybrid pruning methods preserve less feature diversity than intended, and the diversity they do retain is closely tied to a higher hallucination frequency than that of attention-based pruning.
- Attention-based approaches are more effective on simple images, where visual evidence is concentrated, while diversity-based methods better handle complex images with distributed features. The sketch below illustrates both pruning families.
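As a concrete illustration of the two families (again, ours rather than the paper's code), the sketch below contrasts attention-based top-k selection with a simple diversity heuristic, greedy farthest-point sampling; both operate on a (num_tokens, dim) feature matrix.

```python
# Illustrative sketch of the two pruning families; the farthest-point-sampling
# choice for diversity pruning is one common heuristic, not the paper's method.
import torch

def attention_prune(features: torch.Tensor, attn: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k visual tokens with the highest attention scores."""
    keep = attn.topk(k).indices
    return features[keep]

def diversity_prune(features: torch.Tensor, k: int) -> torch.Tensor:
    """Keep k mutually dissimilar tokens via greedy farthest-point sampling."""
    selected = [0]                                          # seed with the first token
    dist = torch.cdist(features, features[0:1]).squeeze(1)  # distance to the kept set
    for _ in range(k - 1):
        idx = int(dist.argmax())                            # farthest token from the set
        selected.append(idx)
        dist = torch.minimum(dist, torch.cdist(features, features[idx:idx + 1]).squeeze(1))
    return features[torch.tensor(selected)]
```

Attention pruning keeps tokens where evidence is concentrated, so it suits simple images; farthest-point-style selection covers the feature space, which matches the distributed evidence of complex images.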
Building on these empirical insights, we show that incorporating image-aware adjustments into existing hybrid pruning strategies consistently improves their performance. We also provide a minimal instantiation of our findings through a simple adaptive pruning mechanism; an illustrative version appears below.
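Purely as an illustration of what such an image-aware adjustment could look like (the routing rule and threshold below are hypothetical, not values from the paper), one could use attention entropy as a per-image complexity proxy and route between the two pruning strategies sketched above:

```python
# Hypothetical adaptive router, reusing attention_entropy and the pruning
# helpers sketched above; the entropy threshold is an illustrative value.
import torch

def adaptive_prune(features: torch.Tensor, attn: torch.Tensor, k: int,
                   entropy_threshold: float = 4.0) -> torch.Tensor:
    """Pick a pruning strategy per image based on attention concentration.

    Low attention entropy  -> evidence is concentrated -> attention pruning.
    High attention entropy -> evidence is distributed  -> diversity pruning.
    """
    if attention_entropy(attn) < entropy_threshold:
        return attention_prune(features, attn, k)
    return diversity_prune(features, k)
```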
Detailed implementation code is coming soon. 🚧
Stay tuned for updates! ⏳
For questions or collaborations, please contact:
- Changwoo Baek
- Jouwon Song
- Sohyeon Kim
- Kyeongbo Kong (Corresponding author)
We thank LLaVA and FasterVLM for their excellent work and open-source contributions.
This project is licensed under the Apache License 2.0.

