A curated collection of papers, models, and resources for top-tier research on diffusion large language models.
Note
This repository is proudly maintained by the frontline research mentors at QuenithAI (应达学术). It aims to provide the most comprehensive and cutting-edge map of papers and technologies in the field of Diffusion Large Language Models.
Your contributions are vital as well: feel free to open an issue or submit a pull request to become a collaborator of this repository. We look forward to your participation!
If you require expert 1-on-1 guidance on your submissions to top-tier conferences and journals, we invite you to contact us via WeChat or E-mail.
⚡ Latest Updates
- (Sep 17th, 2025): Initial release of the repository.
📚 Table of Contents
- ✍️ Survey Papers
- 📜 Papers & Models
- 🎓 About Us
- 🤝 Contributing
- 💬 Join the Community
✍️ Survey Papers
- Discrete Diffusion in Large Language and Multimodal Models: A Survey
- A Survey on Diffusion Language Models
📜 Papers & Models
- dParallel: Learnable Parallel Decoding for dLLMs
- Sequential Diffusion Language Models
- LLaDA-MoE: A Sparse MoE Diffusion Language Model
- Sparse-dLLM: Accelerating Diffusion LLMs with Dynamic Cache Eviction
- Diffusion Language Models Know the Answer Before Decoding
- Learning to Parallel: Accelerating Diffusion Large Language Models via Adaptive Parallel Decoding
- d^2Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching
- DPad: Efficient Diffusion Language Models with Suffix Dropout
- SparseD: Sparse Attention for Diffusion Language Models
- dKV-Cache: The Cache for Diffusion Language Models
- Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding
- Accelerating Diffusion Language Model Inference via Efficient KV Caching and Guided Diffusion
- Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models
- Set Block Decoding is a Language Model Inference Accelerator
- Quantization Meets dLLMs: A Systematic Study of Post-training Quantization for Diffusion LLMs
- CtrlDiff: Boosting Large Diffusion Language Models with Dynamic Block Prediction and Controllable Generation
- Dream 7B: Diffusion Large Language Models
- DreamOn: Diffusion Language Models For Code Infilling Beyond Fixed-Size Canvas
- Beyond Fixed: Variable-Length Denoising for Diffusion Large Language Models
- Any-Order Flexible Length Masked Diffusion
- d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning
- Reinforcing the Diffusion Chain of Lateral Thought with Diffusion Language Models
- LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models
- wd1: Weighted Policy Optimization for Reasoning in Diffusion Language Models
- Inpainting-Guided Policy Optimization for Diffusion Large Language Models
- Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models
- MDPO: Overcoming the Training-Inference Divide of Masked Diffusion Language Models
- DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation
- LLaDA-MedV: Exploring Large Language Diffusion Models for Biomedical Image Understanding
- DIFFA: Large Language Diffusion Models Can Listen and Understand
- The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs
- Whisfusion: Parallel ASR Decoding via a Diffusion Transformer
- LLaDA-VLA: Vision Language Diffusion Action Models
- Large Language Diffusion Models
- LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning
🎓 About Us
QuenithAI (应达学术) is a professional organization composed of top researchers, dedicated to providing high-quality 1-on-1 research mentoring for university students worldwide. Our mission is to help students bridge the gap from theoretical knowledge to cutting-edge research and publish their work in top-tier conferences and journals.
Maintaining this Awesome Diffusion Large Language Models list requires significant effort, just as completing a high-quality paper demands focused dedication and expert guidance. If you're looking for one-on-one support from top scholars on your own research project, to quickly identify innovative ideas and turn them into publications, we invite you to contact us.
➡️ Contact us via WeChat or E-mail to start your research journey.
🤝 Contributing
Contributions are welcome! Please see our Contribution Guidelines for details on how to add new papers, correct information, or improve the repository.
💬 Join the Community
Join our community to stay up-to-date with the latest advancements, share your work, and collaborate with other researchers and developers in the field of diffusion large language models!