---
pretty_name: Figma2Code
language:
- en
license: mit
configs:
- config_name: default
  data_files:
  - split: test
    path: data_test/*.parquet
  - split: rest
    path: data_rest/*.parquet
dataset_info:
  features: null
  splits:
  - name: test
  - name: rest
---
# Figma2Code: Automating Multimodal Design to Code in the Wild <span style="font-size:14px;color:gray;">[ICLR 2026](https://openreview.net/forum?id=CaXZB6bI31)</span>
Figma2Code is a multimodal design-to-code benchmark built from community Figma designs. Each sample combines a UI screenshot with structured metadata and design assets, letting models move beyond image-only inputs and better reflect practical UI development scenarios.
---
## Features
Each sample contains:
- `root`: UI screenshot (PIL Image)
- `filekey`: Figma file identifier
- `node_id`: root node ID
- `page_url`: source URL
- `annotation`: annotation text
- `statistics`: precomputed statistics
- `raw_metadata`: raw Figma node tree (JSON string)
- `processed_metadata`: processed node tree (JSON string)
- `image_refs`: bitmap assets (path + image)
- `svg_assets`: SVG assets (path + content)
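
A minimal access sketch, assuming the field layout above (in particular that `root` decodes to a PIL image via the `datasets` Image feature and that the two metadata fields parse as JSON):

```python
import json

from datasets import load_dataset

sample = load_dataset("xcodemind/Figma2Code", split="test")[0]

# Screenshot decodes to a PIL Image
print(sample["root"].size)

# Both node trees are shipped as JSON strings
raw_tree = json.loads(sample["raw_metadata"])
processed_tree = json.loads(sample["processed_metadata"])

print(sample["filekey"], sample["node_id"], sample["page_url"])
print(len(sample["image_refs"]), "bitmap assets,", len(sample["svg_assets"]), "SVG assets")
```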
---
## Reconstructed Structure
Each sample can be reconstructed on disk as:

```text
{filekey}_{safe_node_id(node_id)}/
├── raw.json
├── processed_metadata.json
├── report.json
├── root.png
└── assets/
    ├── image_refs/
    └── svg_assets/
```
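
The following is a rough reconstruction sketch, not the project's own tooling: `safe_node_id` is undocumented here, the mapping of `report.json` to the `statistics` field is a guess, and the asset records are assumed to be `{path, image}` / `{path, content}` entries as described under Features.

```python
import json
from pathlib import Path


def safe_node_id(node_id: str) -> str:
    # ASSUMPTION: Figma node ids contain ":" (e.g. "12:34"); the dataset's
    # actual sanitizer is not documented here and may differ.
    return node_id.replace(":", "-")


def reconstruct(sample, out_root: str = ".") -> Path:
    base = Path(out_root) / f"{sample['filekey']}_{safe_node_id(sample['node_id'])}"
    (base / "assets" / "image_refs").mkdir(parents=True, exist_ok=True)
    (base / "assets" / "svg_assets").mkdir(parents=True, exist_ok=True)

    # Metadata trees are shipped as JSON strings and can be written verbatim
    (base / "raw.json").write_text(sample["raw_metadata"])
    (base / "processed_metadata.json").write_text(sample["processed_metadata"])

    # ASSUMPTION: report.json corresponds to the precomputed `statistics` field
    (base / "report.json").write_text(json.dumps(sample["statistics"]))

    # UI screenshot (PIL Image)
    sample["root"].save(base / "root.png")

    # ASSUMPTION: asset records expose a "path" key plus "image" / "content"
    for ref in sample["image_refs"]:
        ref["image"].save(base / "assets" / "image_refs" / Path(ref["path"]).name)
    for svg in sample["svg_assets"]:
        (base / "assets" / "svg_assets" / Path(svg["path"]).name).write_text(svg["content"])
    return base
```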
---
## Usage
```python
from datasets import load_dataset

# Downloads and prepares both splits from the Hugging Face Hub
dataset = load_dataset("xcodemind/Figma2Code")
test_set = dataset["test"]
rest_set = dataset["rest"]
```
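
If only one split is needed, standard `datasets` options apply (nothing here is specific to Figma2Code):

```python
# Load only the test split, or stream it to avoid a full download
test_only = load_dataset("xcodemind/Figma2Code", split="test")
streamed = load_dataset("xcodemind/Figma2Code", split="test", streaming=True)
first = next(iter(streamed))
```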
---
## Citation
If you use this dataset, please cite:
```text
@inproceedings{gui2026figma2code,
  title={Figma2Code: Automating Multimodal Design to Code in the Wild},
  author={Gui, Yi and Zhang, Jiawan and Wang, Yina and Ma, Tianran and Wan, Yao and He, Shilin and Chen, Dongping and Zhao, Zhou and Jiang, Wenbin and Shi, Xuanhua and Jin, Hai and Yu, Philip S.},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026},
  url={https://openreview.net/forum?id=CaXZB6bI31}
}
``` |