
MMEB train split used in MoCa Continual Pre-training

🏠 Homepage | 💻 Code | 🤖 MoCa-Qwen25VL-7B | 🤖 MoCa-Qwen25VL-3B | 📚 Datasets | 📄 Paper

Introduction

This is an interleaved multimodal pre-training dataset used in the modality-aware continual pre-training of MoCa models. It is adapted from the train split of MMEB by concatenating each query with its positive document.

The dataset consists of interleaved multimodal examples. Each example has a text field, a string, and an images field, a list of raw image bytes that can be decoded with the following snippet:

import PIL.Image
from io import BytesIO

# Each entry of example['images'] holds one encoded image (e.g. JPEG bytes).
image_bytes = example['images'][0]
image = PIL.Image.open(BytesIO(image_bytes))
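As a self-contained illustration of the decoding step above, the sketch below builds a small in-memory JPEG to stand in for one entry of images (the real entries come from the dataset itself) and decodes it the same way:

```python
from io import BytesIO
from PIL import Image

# Stand-in for one entry of example['images']: JPEG-encoded bytes.
buf = BytesIO()
Image.new("RGB", (8, 8), color=(255, 0, 0)).save(buf, format="JPEG")
image_bytes = buf.getvalue()

# Decode exactly as the card's snippet does.
image = Image.open(BytesIO(image_bytes))
image.load()  # force full decode
print(image.size)  # (8, 8)
```

With the actual dataset, image_bytes would instead come from an example yielded by the Datasets library.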

Citation

MoCa

@article{chen2025moca,
  title={MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional Multimodal Embeddings},
  author={Chen, Haonan and Liu, Hong and Luo, Yuping and Wang, Liang and Yang, Nan and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2506.23115},
  year={2025}
}

MMEB

@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}