# DDT: Decoupled Diffusion Transformer
<div style="text-align: center;">
<a href="https://arxiv.org/abs/2504.05741"><img src="https://img.shields.io/badge/arXiv-2504.05741-b31b1b.svg" alt="arXiv"></a>
<a href="https://huggingface.co/papers/2504.05741"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-sm.svg" alt="Paper page"></a>
</div>

<div style="text-align: center;">
<a href="https://paperswithcode.com/sota/image-generation-on-imagenet-256x256?p=ddt-decoupled-diffusion-transformer"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ddt-decoupled-diffusion-transformer/image-generation-on-imagenet-256x256" alt="PWC"></a>
|
| 9 |
+
|
| 10 |
+
<a href="https://paperswithcode.com/sota/image-generation-on-imagenet-512x512?p=ddt-decoupled-diffusion-transformer"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ddt-decoupled-diffusion-transformer/image-generation-on-imagenet-512x512" alt="PWC"></a>
</div>

## Introduction
We decouple the diffusion transformer into an encoder-decoder design and find, surprisingly, that a **more substantial encoder yields performance improvements as model size increases**.

* We achieve **1.26 FID** on the ImageNet 256x256 benchmark with DDT-XL/2 (22en6de).
* We achieve **1.28 FID** on the ImageNet 512x512 benchmark with DDT-XL/2 (22en6de).
* As a byproduct, our DDT can reuse encoder outputs across adjacent denoising steps to accelerate inference (see the sketch below).
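
A minimal, hypothetical sketch of this encoder sharing is shown below; the `encoder`/`decoder` callables, their signatures, and the Euler-style update are illustrative placeholders rather than the actual API of this repository.

```python
import torch

@torch.no_grad()
def sample_with_encoder_sharing(encoder, decoder, x, timesteps, cond, share_every=2):
    """Illustrative sampler: recompute the heavy encoder only every `share_every`
    steps and reuse its cached features in between, so intermediate steps only
    pay for the lightweight decoder."""
    z = None
    for i in range(len(timesteps) - 1):
        t, t_next = timesteps[i], timesteps[i + 1]
        if z is None or i % share_every == 0:
            z = encoder(x, t, cond)      # expensive: run sparsely
        v = decoder(x, z, t, cond)       # cheap: run at every step
        x = x + (t_next - t) * v         # placeholder Euler update
    return x
```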
## Visualizations

## Checkpoints
We use the off-the-shelf [VAE](https://huggingface.co/stabilityai/sd-vae-ft-ema) to encode images into latent space and train DDT in this latent space.

| Dataset | Model | Params | FID | HuggingFace |
|-------------|-------------------|-----------|------|----------------------------------------------------------|
| ImageNet256 | DDT-XL/2(22en6de) | 675M | 1.26 | [🤗](https://huggingface.co/MCG-NJU/DDT-XL-22en6de-R256) |
| ImageNet512 | DDT-XL/2(22en6de) | 675M | 1.28 | [🤗](https://huggingface.co/MCG-NJU/DDT-XL-22en6de-R512) |
## Online Demos
Coming soon.

## Usage
We use the ADM evaluation suite to report FID.
```bash
# for installation
pip install -r requirements.txt
```
```bash
# for inference
python main.py predict -c configs/repa_improved_ddt_xlen22de6_256.yaml --ckpt_path=XXX.ckpt
```
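
The `--ckpt_path` argument expects a local checkpoint file. One way to obtain the released weights is via `huggingface_hub`, sketched below; the exact file layout inside the repository is not documented here, so the snippet downloads the whole repo and leaves picking the `.ckpt` file to you.

```python
from huggingface_hub import snapshot_download

# Download the ImageNet256 DDT-XL/2 (22en6de) checkpoint repository locally.
local_dir = snapshot_download(repo_id="MCG-NJU/DDT-XL-22en6de-R256")
print(local_dir)  # point --ckpt_path at the checkpoint file found in this directory
```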
```bash
# for training
# extract image latent (optional)
python3 tools/cache_imlatent4.py
# train
python main.py fit -c configs/repa_improved_ddt_xlen22de6_256.yaml
```
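
Conceptually, the optional latent-extraction step encodes each training image with the sd-vae-ft-ema VAE and caches the result. The sketch below illustrates that idea with `diffusers`; it is not the actual `tools/cache_imlatent4.py` script, and the 0.18215 scaling factor is the standard SD-VAE convention assumed here.

```python
import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
# Off-the-shelf VAE used to map images into latent space.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema").to(device).eval()

@torch.no_grad()
def encode_to_latents(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, H, W) in [-1, 1]; returns latents of shape (B, 4, H/8, W/8)."""
    posterior = vae.encode(images.to(device)).latent_dist
    return (posterior.sample() * 0.18215).cpu()  # assumed SD-VAE scaling factor

# Example: latents = encode_to_latents(batch); torch.save(latents, "latents_0000.pt")
```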
## Reference
```bibtex
@ARTICLE{ddt,
title = "DDT: Decoupled Diffusion Transformer",
author = "Wang, Shuai and Tian, Zhi and Huang, Weilin and Wang, Limin",
month = apr,
year = 2025,
archivePrefix = "arXiv",
primaryClass = "cs.CV",
eprint = "2504.05741"
}
```