
🚀 dParallel: Learnable Parallel Decoding for dLLMs

Demo video: dParallel_demo.mp4

dParallel: Learnable Parallel Decoding for dLLMs
Zigeng Chen, Gongfan Fang, Xinyin Ma, Ruonan Yu, Xinchao Wang
xML Lab, National University of Singapore

💡 Introduction

We introduce dParallel, a simple and effective method that unlocks the inherent parallelism of dLLMs for fast sampling. We identify the key bottleneck to parallel decoding: certainty on masked tokens converges sequentially. Building on this insight, we introduce the core of our approach, certainty-forcing distillation: a novel training strategy that distills the model to follow its original sampling trajectories while forcing it to reach high certainty on masked tokens more rapidly and in parallel. Extensive experiments across various benchmarks demonstrate that our method dramatically reduces the number of decoding steps while maintaining performance. Applied to LLaDA-8B-Instruct, dParallel reduces decoding steps on GSM8K from 256 to 30, an 8.5× speedup with no performance degradation. On the MBPP benchmark, it cuts decoding steps from 256 to 24, a 10.5× speedup while maintaining accuracy.


Overview of proposed certainty-forcing distillation.
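
To make this concrete, below is a conceptual sketch of what a certainty-forcing objective could look like, written purely from the description above: a distillation term that keeps the student on the original model's sampling trajectory, plus a certainty term that pushes masked-token predictions toward low entropy. All names and the exact loss form are illustrative assumptions, not the repository's training code (llada_train.py / dream_train.py are authoritative).

# Conceptual sketch only: an assumed certainty-forcing style objective, not the
# repository's implementation. Tensor names and the loss form are illustrative.
import torch
import torch.nn.functional as F

def certainty_forcing_loss(logits, trajectory_tokens, mask_positions, certainty_weight=0.1):
    """logits: [B, L, V] student predictions on a partially masked sequence.
    trajectory_tokens: [B, L] tokens from the teacher's original sampling trajectory.
    mask_positions: [B, L] boolean mask of positions that are still masked."""
    masked_logits = logits[mask_positions]                       # [N, V]
    # Distillation term: follow the original sampling trajectory at masked positions.
    ce = F.cross_entropy(masked_logits, trajectory_tokens[mask_positions])
    # Certainty term: push predictions at masked positions toward low entropy,
    # so that many tokens become confident (and decodable) in parallel.
    probs = F.softmax(masked_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1).mean()
    return ce + certainty_weight * entropy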

💻 Model and Datasets

  • 📄 Paper: ArXiv Link
  • 🤖 LLaDA Model: dParallel-LLaDA-8B-instruct
  • 🤖 Dream Model: dParallel-Dream-7B-instruct
  • 📊 LLaDA Data: dParallel-LLaDA-Distill Dataset
  • 📊 Dream Data: dParallel-Dream-Distill Dataset

🔥 Updates

  • 🔥 [Oct 1, 2025]: Our arXiv paper is available.
  • 🔥 [Oct 1, 2025]: Code, models, and datasets are released.

🔧 Installation:

conda create -n dparallel python==3.10
conda activate dparallel
pip3 install -r requirements.txt

🚀 Quick Start:

# Run this from the LLaDA/ directory so that model.modeling_llada and generate are importable.
from transformers import AutoTokenizer
from model.modeling_llada import LLaDAModelLM
from generate import generate
import torch

device = 'cuda'
model = LLaDAModelLM.from_pretrained('Zigeng/dParallel-LLaDA-8B-instruct', trust_remote_code=True, torch_dtype=torch.bfloat16).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained('Zigeng/dParallel-LLaDA-8B-instruct', trust_remote_code=True)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? Please reason step by step, and put your final answer within \\boxed{}."

# Wrap the prompt with the chat template expected by the instruct model.
m = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(m, add_generation_prompt=True, tokenize=False)

input_ids = tokenizer(prompt)['input_ids']
input_ids = torch.tensor(input_ids).to(device).unsqueeze(0)

# steps: upper bound on decoding steps; gen_length: number of tokens to generate;
# block_length: semi-autoregressive block size; threshold: confidence threshold for
# accepting masked tokens in parallel; remasking: strategy for re-masking uncertain tokens.
out = generate(model, input_ids, steps=256, gen_length=256, block_length=32, temperature=0., threshold=0.5, remasking='low_confidence')
print("Response:", tokenizer.batch_decode(out[0][:, input_ids.shape[1]:], skip_special_tokens=True)[0])
print("NFE:", out[1])  # NFE: number of function evaluations (forward passes) used

⚡ Evaluation:

We provide evaluation scripts covering the GSM8K, Minerva_MATH, HumanEval, and MBPP benchmarks. Both our reported results and the accompanying code are obtained without caching or sparse-attention techniques. Nevertheless, our method is fully compatible with these optimizations, and integrating them can yield even greater speedups.

For dParallel LLaDA Evaluation:

cd LLaDA
sh eval.sh

For dParallel Dream Evaluation:

cd Dream/eval_instruct
sh eval.sh

🔥 Training

1. Certainty-Forcing Distillation with LoRA:

We provide training scripts for our proposed Certainty-Forcing Distillation process. The implementation uses LoRA during training, with configuration details specified in config_lora_llada.yaml and config_lora_dream.yaml. Training fits on GPUs with 24 GB of memory.
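
For orientation, a PEFT-style LoRA setup looks roughly like the sketch below; the rank, scaling, and target modules shown are placeholders, and the actual settings are the ones in config_lora_llada.yaml and config_lora_dream.yaml.

# Illustrative PEFT-style LoRA wrapping; the hyperparameters and target modules are
# placeholders, not the values from config_lora_llada.yaml / config_lora_dream.yaml.
import torch
from peft import LoraConfig, get_peft_model
from model.modeling_llada import LLaDAModelLM  # run from LLaDA/, as in the Quick Start

model = LLaDAModelLM.from_pretrained("path/to/base_llada", trust_remote_code=True, torch_dtype=torch.bfloat16)
lora_config = LoraConfig(
    r=16,                                                     # placeholder rank
    lora_alpha=32,                                            # placeholder scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projection names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained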

For LLaDA Distillation:

cd LLaDA
deepspeed --master_port 29501 --include localhost:0,1,2,3,4,5,6,7 llada_train.py

For Dream Distillation:

cd Dream
deepspeed --master_port 29501 --include localhost:0,1,2,3,4,5,6,7 dream_train.py

2. LoRA Merge:

After training, merge the LoRA weights into the base model to obtain the final dParallel dLLM; a rough sketch of what the merge does is given after the commands below.

For LLaDA Model:

cd LLaDA
python merge_lora.py

For Dream Model:

cd Dream
python merge_lora.py
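
Under the hood, a PEFT-style merge does roughly the following; merge_lora.py is the authoritative script, and the paths below are placeholders.

# Sketch of a standard PEFT LoRA merge; merge_lora.py in the repository is authoritative.
import torch
from peft import PeftModel
from model.modeling_llada import LLaDAModelLM  # run from LLaDA/, as in the Quick Start

base_model = LLaDAModelLM.from_pretrained("path/to/base_llada", trust_remote_code=True, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, "path/to/lora_checkpoint")  # placeholder adapter path
model = model.merge_and_unload()            # fold the low-rank updates into the base weights
model.save_pretrained("dParallel-merged")   # standalone checkpoint, no adapter files needed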

📖 Experimental Results

Results on LLaDA-8B-Instruct:

(figure: llada-exp)

Results on Dream-7B-Instruct:

(figure: dream-exp)

Better Speed-Accuracy Trade-off:

(figure: trade-off)

β˜€οΈ Acknowledgement

Our code builds on LLaDA, Dream, Fast-dLLM, and dKV-Cache, and we acknowledge these great works for laying the groundwork that made our approach possible.

Citation

If our research assists your work, please give us a star ⭐ or cite us using:

@article{chen2025dparallel,
  title={dParallel: Learnable Parallel Decoding for dLLMs},
  author={Chen, Zigeng and Fang, Gongfan and Ma, Xinyin and Yu, Ruonan and Wang, Xinchao},
  journal={arXiv preprint arXiv:2509.26488},
  year={2025}
}
