---
pretty_name: Figma2Code
language:
- en
license: mit
configs:
- config_name: default
  data_files:
  - split: test
    path: data_test/*.parquet
  - split: rest
    path: data_rest/*.parquet
dataset_info:
  features: null
  splits:
  - name: test
  - name: rest
---

# Figma2Code: Automating Multimodal Design to Code in the Wild

[ICLR 2026](https://openreview.net/forum?id=CaXZB6bI31)

Figma2Code is a multimodal design-to-code benchmark built from community Figma designs. It integrates screenshots, structured metadata, and design assets, enabling models to move beyond image-only inputs and better capture practical UI development scenarios.

---

## Features

Each sample contains:

- `root`: UI screenshot (PIL Image)
- `filekey`: Figma file identifier
- `node_id`: root node id
- `page_url`: source URL
- `annotation`: annotation text
- `statistics`: precomputed statistics
- `raw_metadata`: raw Figma node tree (JSON string)
- `processed_metadata`: processed node tree (JSON string)
- `image_refs`: bitmap assets (path + image)
- `svg_assets`: SVG assets (path + content)

An inspection sketch for these fields is included at the end of this card.

---

## Reconstructed Structure

Each sample can be reconstructed on disk as:

```text
{filekey}_{safe_node_id(node_id)}/
├── raw.json
├── processed_metadata.json
├── report.json
├── root.png
└── assets/
    ├── image_refs/
    └── svg_assets/
```

A reconstruction sketch is included at the end of this card.

---

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("xcodemind/Figma2Code")
test_set = dataset["test"]
rest_set = dataset["rest"]
```

---

## Citation

If you use this dataset, please cite:

```text
@inproceedings{gui2026figma2code,
  title={Figma2Code: Automating Multimodal Design to Code in the Wild},
  author={Gui, Yi and Zhang, Jiawan and Wang, Yina and Ma, Tianran and Wan, Yao and He, Shilin and Chen, Dongping and Zhao, Zhou and Jiang, Wenbin and Shi, Xuanhua and Jin, Hai and Yu, Philip S.},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026},
  url={https://openreview.net/forum?id=CaXZB6bI31}
}
```
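
---

## Example: Inspecting a Sample

A minimal sketch of reading the fields listed under Features, as referenced there. It assumes only what this card states: `raw_metadata` and `processed_metadata` are JSON strings, and `root` is a PIL image; nothing below is an official loader API.

```python
import json

from datasets import load_dataset

dataset = load_dataset("xcodemind/Figma2Code")
sample = dataset["test"][0]

# Scalar fields: identifiers and the source URL.
print(sample["filekey"], sample["node_id"], sample["page_url"])

# The metadata fields are JSON strings, so parse them before use.
raw_tree = json.loads(sample["raw_metadata"])
processed_tree = json.loads(sample["processed_metadata"])
print(type(raw_tree), type(processed_tree))

# The screenshot is decoded as a PIL image by the datasets library.
print(sample["root"].size)
```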
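
---

## Example: Reconstructing a Sample on Disk

A minimal sketch of materializing one sample into the folder layout shown under Reconstructed Structure, as referenced there. The `safe_node_id` sanitizer and the asset key names (`path`, `image`, `content`) are assumptions inferred from the Features list, not a published API, and `report.json` is omitted because this card does not say which field it derives from.

```python
import json
import os
import re

from datasets import load_dataset


def safe_node_id(node_id: str) -> str:
    # Hypothetical sanitizer: node ids such as "1:23" contain characters
    # that are awkward in folder names, so map them to underscores.
    return re.sub(r"[^0-9A-Za-z_-]", "_", node_id)


def reconstruct(sample, out_root="reconstructed"):
    folder = os.path.join(out_root, f"{sample['filekey']}_{safe_node_id(sample['node_id'])}")
    os.makedirs(os.path.join(folder, "assets", "image_refs"), exist_ok=True)
    os.makedirs(os.path.join(folder, "assets", "svg_assets"), exist_ok=True)

    # Metadata fields are JSON strings; parse and pretty-print them.
    for field, name in [("raw_metadata", "raw.json"),
                        ("processed_metadata", "processed_metadata.json")]:
        with open(os.path.join(folder, name), "w", encoding="utf-8") as f:
            json.dump(json.loads(sample[field]), f, indent=2)

    # The root screenshot is a PIL image.
    sample["root"].save(os.path.join(folder, "root.png"))

    # Assumed layout: each bitmap asset carries a "path" and a PIL "image"
    # (and the path is assumed to end in an image extension PIL recognizes).
    for ref in sample["image_refs"]:
        dest = os.path.join(folder, "assets", "image_refs", os.path.basename(ref["path"]))
        ref["image"].save(dest)

    # Assumed layout: each SVG asset carries a "path" and string "content".
    for svg in sample["svg_assets"]:
        dest = os.path.join(folder, "assets", "svg_assets", os.path.basename(svg["path"]))
        with open(dest, "w", encoding="utf-8") as f:
            f.write(svg["content"])


dataset = load_dataset("xcodemind/Figma2Code")
reconstruct(dataset["test"][0])
```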