Code for the paper Story-Iter: A Training-free Iterative Paradigm for Long Story Visualization
Note: This codebase is not yet complete.
This repository contains the official implementation of "Story-Iter".
Story visualization, the task of generating coherent images based on a narrative, has seen significant advancements with the emergence of text-to-image models, particularly diffusion models. However, maintaining semantic consistency, generating high-quality fine-grained interactions, and ensuring computational feasibility remain challenging, especially in long story visualization (i.e., up to 100 frames). In this work, we introduce Story-Iter, a new training-free iterative paradigm to enhance long-story generation. Unlike existing methods that rely on fixed reference images to construct a complete story, our approach features a novel external iterative paradigm, extending beyond the internal iterative denoising steps of diffusion models, that continuously refines each generated image by incorporating all reference images from the previous round. To achieve this, we propose a plug-and-play, training-free Global Reference Cross-Attention (GRCA) module, which models all reference frames with global embeddings to ensure semantic consistency across long sequences. By progressively incorporating holistic visual context and text constraints, our iterative paradigm enables precise generation with fine-grained interactions, optimizing the story visualization step by step. Extensive experiments on the official story visualization dataset and our long-story benchmark demonstrate Story-Iter's state-of-the-art performance in long-story visualization (up to 100 frames), excelling in both semantic consistency and fine-grained interactions.
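In code, the outer loop reads roughly as follows. This is a minimal sketch, not the repository's API: `generate` is a hypothetical callable wrapping the SD pipeline with GRCA inserted, and passing `ref_images=None` stands for text-only generation.

```python
# Minimal sketch of the external iterative paradigm; `generate` is a
# hypothetical wrapper around the SD pipeline, not this repository's API.
from typing import Any, Callable, List

def story_iter(
    prompts: List[str],
    generate: Callable[..., Any],
    num_iterations: int = 10,
) -> List[Any]:
    # Initialization: visualize every frame from its text prompt alone.
    images = [generate(p, ref_images=None) for p in prompts]
    for _ in range(num_iterations):
        # Each round regenerates every frame while conditioning on *all*
        # images produced in the previous round as global references.
        refs = list(images)
        images = [generate(p, ref_images=refs) for p in prompts]
    return images
```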
- 2026.01.27: ControlNet version released, supporting OpenPose skeletons as control signals.
- 2026.01.27: Fast version released; visualizing a 100-frame story over 10 iterations takes only 20 minutes.
- 2024.10.10: Paper released on arXiv.
- 2024.10.04: Code released.
Story-Iter framework. Illustration of the proposed iterative paradigm, which consists of initialization, iterations in Story-Iter, and the implementation of Global Reference Cross-Attention (GRCA). Story-Iter first visualizes each image based only on the text prompt of the story and uses all results as reference images for the next round. In the iterative paradigm, Story-Iter inserts GRCA into SD. In the i-th iteration of each image visualization, GRCA aggregates the information flow of all reference images during the denoising process through cross-attention. All results from this iteration are then used as reference images to guide the dynamic update of the story visualization in the next iteration.
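To make GRCA concrete, below is a minimal PyTorch sketch of a cross-attention layer that attends from the latent tokens of the current frame to the concatenated embeddings of all reference images. The class, argument names, and dimensions are ours; this is a simplified illustration, not the repository's implementation.

```python
# Minimal sketch of a global reference cross-attention (GRCA) layer; names
# and shapes are illustrative, not the exact implementation in this repo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalReferenceCrossAttention(nn.Module):
    def __init__(self, dim: int, ref_dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)      # queries from UNet hidden states
        self.to_k = nn.Linear(ref_dim, dim, bias=False)  # keys from reference embeddings
        self.to_v = nn.Linear(ref_dim, dim, bias=False)  # values from reference embeddings
        self.to_out = nn.Linear(dim, dim)

    def forward(self, hidden_states: torch.Tensor, ref_embeds: torch.Tensor) -> torch.Tensor:
        # hidden_states: (B, L, dim) latent tokens of the frame being denoised.
        # ref_embeds:    (B, N * T, ref_dim) embeddings of all N reference
        #                images, concatenated along the token axis so that
        #                attention is global over the whole story.
        b, l, _ = hidden_states.shape
        q = self.to_q(hidden_states)
        k = self.to_k(ref_embeds)
        v = self.to_v(ref_embeds)
        # Split heads: (B, heads, tokens, head_dim).
        q = q.view(b, l, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, l, -1)
        return self.to_out(out)
```

In the plug-and-play setting described above, the output of such a layer would be merged residually (typically scaled and added) with SD's existing attention output, leaving the base model's weights untouched.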
The project is built with Python 3.10.14, PyTorch 2.2.2, CUDA 12.1, and cuDNN 8.9.02. To install, follow these instructions:
```bash
# git clone this repository
git clone https://github.com/UCSC-VLAA/Story-Iter.git
cd Story-Iter

# create new anaconda env
conda create -n StoryAdapter python=3.10
conda activate StoryAdapter

# install packages
pip install -r requirements.txt
```
- Download RealVisXL_V4.0 and put it into "./RealVisXL_V4.0"
- Download clip_image_encoder and put it into "./IP-Adapter/sdxl_models/image_encoder"
- Download ip-adapter_sdxl and put it into "./IP-Adapter/sdxl_models/ip-adapter_sdxl.bin"
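The checkpoints above can also be fetched programmatically with huggingface_hub. This is a hedged sketch assuming the weights live in the public SG161222/RealVisXL_V4.0 and h94/IP-Adapter repositories; verify the repo IDs before running.

```python
# Assumed Hugging Face repo IDs; verify before running.
from huggingface_hub import snapshot_download

# Base SDXL checkpoint (RealVisXL V4.0).
snapshot_download("SG161222/RealVisXL_V4.0", local_dir="./RealVisXL_V4.0")

# CLIP image encoder and IP-Adapter SDXL weights.
snapshot_download(
    "h94/IP-Adapter",
    local_dir="./IP-Adapter",
    allow_patterns=["sdxl_models/image_encoder/*", "sdxl_models/ip-adapter_sdxl.bin"],
)
```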
```bash
python run.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin
```

To visualize a custom story, append the --story flag:

```bash
python run.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin \
  --story "your prompt1" "your prompt2" "your prompt3" ... "your promptN"
```
Note: For custom stories, we suggest the template [Character Definition + Interaction Definition + Scene Definition] for better story visualization performance. For example, with the Character Definition "One man wearing a yellow robe," the Interaction Definition "dancing," and the Scene Definition "the palace hall," the input prompt is "One man wearing a yellow robe dancing in the palace hall."
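As a tiny illustration of this template, here is a hypothetical helper; `make_prompt` is not part of this repository.

```python
# Hypothetical helper for the [Character + Interaction + Scene] template;
# illustrative only, not part of this repository.
def make_prompt(character: str, interaction: str, scene: str) -> str:
    return f"{character} {interaction} in {scene}"

story = [
    make_prompt("One man wearing a yellow robe", "dancing", "the palace hall"),
    make_prompt("One man wearing a yellow robe", "bowing", "the palace hall"),
]
# Pass the assembled prompts to run.py via --story "prompt1" "prompt2" ...
```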
- Download the StorySalon test set.
| GIF1 | GIF2 | GIF3 |
|---|---|---|
| ![]() | ![]() | ![]() |

| GIF4 | GIF5 | GIF6 |
|---|---|---|
| ![]() | ![]() | ![]() |

| GIF7 | GIF8 | GIF9 |
|---|---|---|
| ![]() | ![]() | ![]() |
comic style:

```bash
python run.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin --style comic
```

film style:

```bash
python run.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin --style film
```

realistic style:

```bash
python run.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin --style realistic
```
fast version:

```bash
python run_fast.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin
```
ControlNet version:

```bash
python run_controlnet.py --base_model_path your_path/RealVisXL_V4.0 --image_encoder_path your_path/IP-Adapter/sdxl_models/image_encoder --ip_ckpt your_path/IP-Adapter/sdxl_models/ip-adapter_sdxl.bin --openpose_path your_path/openpose_root
```
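For reference, loading an SDXL OpenPose ControlNet with diffusers looks roughly like the sketch below. The ControlNet repo ID ("thibaud/controlnet-openpose-sdxl-1.0") and the wiring are assumptions for illustration, not how run_controlnet.py is actually implemented.

```python
# Minimal sketch, assuming a diffusers-style pipeline; the ControlNet repo ID
# and wiring are illustrative assumptions, not this repository's code.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "your_path/RealVisXL_V4.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# `pose_image` would be an OpenPose skeleton rendered as a PIL image:
# image = pipe(prompt, image=pose_image).images[0]
```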
We deeply appreciate these wonderful open-source projects: stablediffusion, clip, ip-adapter, storygen, storydiffusion, theatergen, timm.
If you find this repository useful, please consider giving it a star ⭐ and a citation:
```bibtex
@misc{mao2024story_adapter,
  title={{Story-Adapter: A Training-free Iterative Framework for Long Story Visualization}},
  author={Mao, Jiawei and Huang, Xiaoke and Xie, Yunfei and Chang, Yuanqi and Hui, Mude and Xu, Bingjie and Zhou, Yuyin},
  journal={arXiv},
  volume={abs/2410.06244},
  year={2024},
}
```