Getting Started
The dataset consists of 2084 jsonl files. You can download the dataset using HuggingFace:
```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
Or you can directly download the files using the following command:
```shell
wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
while read line; do
    dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
    mkdir -p "$(dirname "$dload_loc")"
    wget "$line" -O "$dload_loc"
done < urls.txt
```
After downloading the files, you can load the dataset from disk by setting the RED_PAJAMA_DATA_DIR environment variable to the directory containing the files:
```python
import os
from datasets import load_dataset

os.environ["RED_PAJAMA_DATA_DIR"] = "/path/to/download"
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
A smaller 1B-token sample of the dataset can be found here.
A full set of scripts to recreate the dataset from scratch can be found here.
Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMA training dataset.
| Dataset | Token Count |
|---|---|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
Languages
Primarily English, though the Wikipedia slice contains multiple languages.
Dataset Structure
The dataset structure is as follows:
```json
{
    "text": ...,
    "meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
    "red_pajama_subset": "common_crawl" | "c4" | "github" | "arxiv" | "wikipedia" | "stackexchange"
}
```
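Each line of a jsonl file is one record in this schema. A minimal sketch of parsing a single record (the field values below are illustrative, not taken from the dataset):

```python
import json

# One illustrative jsonl line in the RedPajama record format
line = ('{"text": "Hello world", '
        '"meta": {"url": "https://example.com", "timestamp": "2023-03-20", '
        '"source": "cc", "language": "en"}, '
        '"red_pajama_subset": "common_crawl"}')

record = json.loads(line)

# The subset field is always one of the six source names
assert record["red_pajama_subset"] in {
    "common_crawl", "c4", "github", "arxiv", "wikipedia", "stackexchange"
}
print(record["meta"]["language"])
```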
Dataset Creation
This dataset was created to reproduce the LLaMA training-data recipe as closely as the paper allows.
Source Data
Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official cc_net pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
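The paragraph-level deduplication can be sketched as hashing normalized paragraphs and keeping only first occurrences. This is a rough simplification of the actual pipeline, which runs at much larger scale; the function below is illustrative only:

```python
import hashlib

def dedup_paragraphs(documents):
    """Keep only the first occurrence of each paragraph across documents.
    A simplified sketch of paragraph-level deduplication."""
    seen = set()
    result = []
    for doc in documents:
        kept = []
        for para in doc.split("\n\n"):
            # Normalize lightly before hashing so trivial variants collide
            key = hashlib.sha1(para.strip().lower().encode()).hexdigest()
            if key not in seen:
                seen.add(key)
                kept.append(para)
        result.append("\n\n".join(kept))
    return result

docs = ["A unique paragraph.\n\nA repeated paragraph.",
        "A repeated paragraph.\n\nAnother paragraph."]
print(dedup_paragraphs(docs))
```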
C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate at the file level, filter out low-quality files, and keep only projects that are distributed under the MIT, BSD, or Apache license.
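The license filter amounts to an allow-list check over project metadata. A minimal sketch (the record field names here are hypothetical, not the actual BigQuery schema):

```python
# Hypothetical allow-list of SPDX-style license identifiers
ALLOWED_LICENSES = {"mit", "bsd-2-clause", "bsd-3-clause", "apache-2.0"}

def keep_file(record):
    """Keep a file only if its project license is on the allow-list."""
    return record.get("license", "").lower() in ALLOWED_LICENSES

files = [{"path": "a.py", "license": "MIT"},
         {"path": "b.py", "license": "GPL-3.0"}]
kept = [f for f in files if keep_file(f)]
print([f["path"] for f in kept])
```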
Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains text in 20 different languages. The dataset comes in a preprocessed format, so hyperlinks, comments, and other formatting boilerplate have been removed.
Gutenberg and Books3
Defunct: the 'book' config is defunct and no longer accessible due to reported copyright infringement for the Books3 dataset contained in this config.
ArXiv
ArXiv data is downloaded from the arXiv requester-pays bucket on Amazon S3. We keep only LaTeX source files and remove preambles, comments, macros, and bibliographies.
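The comment and preamble removal can be sketched with simple string processing. This is a rough approximation; the released scripts handle many more cases (nested macros, bibliographies, multi-file sources):

```python
import re

def clean_latex(source):
    """Strip the preamble and % comments from a LaTeX source string.
    A rough sketch, not the exact arXiv-processing code."""
    # Drop everything before \begin{document} if present
    m = re.search(r"\\begin\{document\}", source)
    if m:
        source = source[m.end():]
    # Remove unescaped % comments line by line (\% is a literal percent)
    lines = [re.sub(r"(?<!\\)%.*", "", line) for line in source.splitlines()]
    return "\n".join(lines).strip()

tex = ("\\documentclass{article}\n"
       "\\begin{document}\n"
       "Hello % a comment\n"
       "world\n"
       "\\end{document}")
print(clean_latex(tex))
```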
Stackexchange
The Stack Exchange split of the dataset is downloaded from the Internet Archive. We keep only the posts from the 28 largest sites, remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
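The grouping step can be sketched as joining answers to their parent question and sorting by score. The field names below are illustrative, not the actual Stack Exchange dump schema:

```python
from collections import defaultdict

def group_qa(posts):
    """Group answer posts under their question, answers sorted by score descending."""
    questions = {p["id"]: p for p in posts if p["type"] == "question"}
    answers = defaultdict(list)
    for p in posts:
        if p["type"] == "answer":
            answers[p["parent_id"]].append(p)
    pairs = []
    for qid, q in questions.items():
        ranked = sorted(answers[qid], key=lambda a: a["score"], reverse=True)
        pairs.append({"question": q["text"],
                      "answers": [a["text"] for a in ranked]})
    return pairs

posts = [
    {"id": 1, "type": "question", "text": "How do I sort a list?"},
    {"id": 2, "type": "answer", "parent_id": 1, "score": 3, "text": "Use sorted()."},
    {"id": 3, "type": "answer", "parent_id": 1, "score": 10, "text": "Use list.sort()."},
]
print(group_qa(posts))
```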
SHA256 Checksums
SHA256 checksums for the dataset files for each data source are available here:
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/arxiv_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/c4_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/common_crawl_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/github_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/stackexchange_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/wikipedia_SHA256SUMS.txt
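Downloaded files can be verified against these lists with `sha256sum -c <file>` on most Unix systems, or with a small Python sketch like the following (assuming the standard `<digest>  <path>` checksum-file layout):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(sums_file):
    """Check every file listed in a '<digest>  <path>' checksum file."""
    with open(sums_file) as f:
        for line in f:
            digest, path = line.split(maxsplit=1)
            ok = sha256_of(path.strip()) == digest
            print(("OK   " if ok else "FAIL ") + path.strip())
```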
To cite RedPajama, please use:
```bibtex
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = apr,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
License
Please refer to the licenses of the data subsets you use.
- Common Crawl Foundation Terms of Use
- C4 license
- GitHub was limited to MIT, BSD, or Apache licenses only
- ArXiv Terms of Use
- Wikipedia License
- StackExchange license on the Internet Archive