Official implementation of RefAM: Attention Magnets for Zero-Shot Referral Segmentation.
Most existing approaches to referring segmentation achieve strong performance only through fine-tuning or by composing multiple pre-trained models, often at the cost of additional training and architectural modifications. Meanwhile, large-scale generative diffusion models encode rich semantic information, making them attractive as general-purpose feature extractors. In this work, we introduce a new method that directly exploits attention scores from diffusion transformers as features for downstream tasks, requiring neither architectural modifications nor additional training. To systematically evaluate these features, we extend benchmarks with vision–language grounding tasks spanning both images and videos. Our key insight is that stop words act as attention magnets: they accumulate surplus attention and can be filtered out to reduce noise. Moreover, we identify global attention sinks (GAS) emerging in deeper layers and show that they can be safely suppressed or redirected onto auxiliary tokens, leading to sharper and more accurate grounding maps. We further propose an attention redistribution strategy, where appended stop words partition background activations into smaller clusters, yielding sharper and more localized heatmaps. Building on these findings, we develop RefAM, a simple training-free grounding framework that combines cross-attention maps, GAS handling, and redistribution. Across zero-shot referring image and video segmentation benchmarks, our approach consistently outperforms prior methods, establishing a new state of the art without fine-tuning or additional components.
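To give a flavor of the stop-word filtering idea, here is a minimal sketch, not the repository's API: it assumes per-token cross-attention maps have already been captured from a diffusion transformer (e.g., via PyTorch forward hooks), and the function name, tensor shapes, and the outlier-based GAS heuristic are all illustrative assumptions.

```python
# Illustrative sketch only -- not RefAM's actual implementation. Assumes
# cross-attention maps of shape [T, H, W] (one spatial map per prompt word)
# were already captured from the diffusion transformer.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_lg")  # installed in the setup steps below

def grounding_map(attn_maps: np.ndarray, words: list[str]) -> np.ndarray:
    """Aggregate per-word attention maps into a single grounding heatmap.

    attn_maps: [T, H, W] cross-attention maps, assumed word-aligned for
               simplicity (real tokenizers split words into sub-word tokens).
    words:     the T prompt words.
    """
    # Stop words act as "attention magnets": they soak up surplus attention,
    # so we drop their maps instead of letting them dilute the average.
    keep = [i for i, w in enumerate(words) if not nlp(w)[0].is_stop]
    keep = keep or list(range(len(words)))  # fallback if everything is a stop word
    heat = attn_maps[keep].mean(axis=0)

    # Crude stand-in for GAS handling: zero out spatial positions whose total
    # attention mass is an extreme outlier across all tokens.
    sink = attn_maps.sum(axis=0)
    heat[sink > sink.mean() + 3.0 * sink.std()] = 0.0

    # Normalize to [0, 1] for thresholding or visualization.
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
```

The redistribution step described above, where extra stop words appended to the prompt split background attention into smaller clusters, would act before this aggregation; it is omitted here for brevity.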
- 🚀 Training-free: No fine-tuning required
- 🎯 Zero-shot: Works directly with pre-trained diffusion models
- 🎬 Multi-domain: Supports both image and video referring segmentation
- 🔍 Attention-based: Exploits attention mechanisms in diffusion transformers
- ⚡ Efficient: Direct feature extraction without architectural modifications
- Python 3.10+
- CUDA 12.1+
- Conda (recommended)
```bash
# Create and activate the environment
conda create -n refam-env python=3.10
conda activate refam-env

# Install PyTorch (CUDA 12.1 wheels) and the Python dependencies
pip install -q torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -q diffusers==0.32.2 transformers accelerate einops timm xformers
pip install -q matplotlib pillow numpy opencv-python spacy
pip install -q git+https://github.com/facebookresearch/segment-anything.git
python -m spacy download en_core_web_lg
```

Download the SAM checkpoint (RIOS):

```bash
mkdir -p checkpoints
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth -O checkpoints/sam_vit_h_4b8939.pth
```

RVOS setup and eval here.
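To verify the checkpoint loads correctly, a minimal sanity check with the public segment-anything API might look like the sketch below; the image path and point coordinate are placeholders, and in a RefAM-style pipeline the point prompt would come from the grounding heatmap.

```python
# Quick sanity check that the SAM checkpoint loads and produces masks.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="checkpoints/sam_vit_h_4b8939.pth")
sam.to("cuda")
predictor = SamPredictor(sam)

# SamPredictor expects an RGB uint8 image (OpenCV loads BGR).
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
predictor.set_image(image)

# Prompt SAM with a single foreground point, e.g. a peak of the grounding heatmap.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 512]]),  # placeholder coordinate
    point_labels=np.array([1]),           # 1 = foreground
)
print(masks.shape, scores)
```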
This project builds on ConceptAttention, HybridGL, and AL-Ref-SAM2. Many thanks to the authors for their great work!
If you find this work useful, please cite:
```bibtex
@article{kukleva2025refam,
  title={RefAM: Attention Magnets for Zero-Shot Referral Segmentation},
  author={Kukleva, Anna and Simsar, Enis and Tonioni, Alessio and Naeem, Muhammad Ferjad and Tombari, Federico and Lenssen, Jan Eric and Schiele, Bernt},
  journal={arXiv preprint arXiv:2509.22650},
  year={2025}
}
```