# NPset-2 (Python-Edu)
A normalized semi-synthetic Python dataset for training small language models on code logic without the overhead of raw code syntax.
## Why
Small language models trained on natural language corpora develop latent representations of logical constructs: iteration, conditionals, data flow, function composition. Yet they struggle to apply this reasoning to source code, where syntactic overhead (delimiters, indentation conventions, language-specific idioms) occupies a disproportionate share of the token budget, requires a vocabulary of code-specific tokens, and introduces a surface-form distribution shift relative to the model's prior knowledge. NPset-2 addresses this by normalizing Python source through an AST-based converter that strips syntactic noise while preserving the full logical structure of each program. The result is a pseudocode representation composed entirely of natural language tokens, one that aligns directly with the semantic representations already present in small models and lets them reason about what code does rather than expend capacity learning what it looks like.
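The converter itself is not published with this card, but the core idea is easy to sketch with Python's standard `ast` module. Everything below (the class name, the emitted phrasing, the handled node types) is an illustrative assumption, not the actual NPset-2 implementation:

```python
import ast

# Minimal sketch of an AST-based normalizer (illustrative only; the real
# converter covers the full Python grammar and differs in detail).
class Normalizer(ast.NodeVisitor):
    def __init__(self):
        self.lines = []
        self.if_count = 0  # numbered block anchors, as in the v2 spec

    def visit_FunctionDef(self, node):
        args = ", ".join(a.arg for a in node.args.args)
        self.lines.append(f"function {node.name} with input {args}")
        for stmt in node.body:
            self.visit(stmt)

    def visit_If(self, node):
        self.if_count += 1
        n = self.if_count
        self.lines.append(f"begin if {n}: {ast.unparse(node.test)}")
        for stmt in node.body:
            self.visit(stmt)
        self.lines.append(f"end if {n}")

    def visit_Return(self, node):
        self.lines.append(f"return {ast.unparse(node.value)}")

source = """
def find_max(numbers):
    if not numbers:
        return None
    return max(numbers)
"""
norm = Normalizer()
norm.visit(ast.parse(source))
print("\n".join(norm.lines))
```

Because the walk operates on the parsed tree rather than on text, every delimiter, colon, and indentation level disappears while the control flow survives intact.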
## The v2 Specification
NPset-2 introduces significant improvements over v1, trading some relative token compression for far lower semantic overhead. A worked before/after example follows the list.
- Explicit Block Scoping: All indented blocks (`if`, `for`, `while`, `try`, `with`) now use numbered/named anchors: `begin if 1` ... `end if 1`. This provides unambiguous attention anchors for small models.
- Natural Language Phrasing:
  - Functions: `function find_max with input numbers`
  - Calls: `call fibonacci with n - 1`
  - Loops: `exit loop` and `next loop` instead of `break` and `continue`.
- Slicing: Replaced symbol-heavy `[0:10]` with `starting from index 0 to 10`.
- Semantic Normalization: `isinstance(x, int)` -> `type of x is int`; `lambda x: x+1` -> `function taking x returning x + 1`; `async for` -> `async for` (removing the forced underscores of v1).
- Strict English Filtering: Documents with more than 0.5% Chinese characters are dropped, and all remaining text is scrubbed of non-ASCII characters to maintain a clean, English-only training distribution.
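To make these rules concrete, here is a short function and a plausible v2 rendering. The rendering is assembled by hand from the rules above; surface forms not listed in the spec (the `set ... to` assignment phrasing and the `for each` loop header) are guesses, not verbatim converter output.

```python
# Original Python
def find_max(numbers):
    best = None
    for n in numbers[0:10]:
        if not isinstance(n, int):
            continue
        if best is None or n > best:
            best = n
    return best
```

```text
function find_max with input numbers
set best to None
begin for 1: for each n in numbers starting from index 0 to 10
begin if 1: not type of n is int
next loop
end if 1
begin if 2: best is None or n > best
set best to n
end if 2
end for 1
return best
```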
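The English filter is equally simple to approximate. The 0.5% threshold comes from the spec above; the character range and the scrubbing strategy are assumptions:

```python
import re

# CJK Unified Ideographs; an approximation of "Chinese characters".
CJK = re.compile(r"[\u4e00-\u9fff]")

def keep_document(text: str) -> bool:
    """Drop documents where more than 0.5% of characters are Chinese."""
    return bool(text) and len(CJK.findall(text)) / len(text) <= 0.005

def scrub_non_ascii(text: str) -> str:
    """Remove any remaining non-ASCII characters from kept documents."""
    return text.encode("ascii", "ignore").decode("ascii")
```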
## Performance (Context Capacity)
When tested against standard tokenizers, TinyDSL v2 significantly expands the effective context window for logic-heavy training under natural-language tokenizers, while tokenizers already specialized for code see a small regression:
| Tokenizer | Reduction (Tokens) | Context Capacity (2048 window) |
|---|---|---|
| GPTX (Custom 32k) | 13.7% | 7.1 -> 8.3 examples (+15.9%) |
| GPT-2 | 16.6% | 7.4 -> 8.9 examples (+19.9%) |
| Qwen 2.5 | -8.1% (increase) | 10.1 -> 9.3 examples (-7.5%) |
| Llama 3 | -2.2% (increase) | 8.3 -> 8.1 examples (-2.2%) |
Note: While raw character counts increase by ~17%, the "Token Tax" for logical constructs is drastically reduced for models not pre-specialized for code syntax.
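The capacity figures follow arithmetically from the token changes: if normalization scales tokenized length by a factor of (1 - r), the number of examples fitting a fixed window scales by 1 / (1 - r). A quick check reproduces the table:

```python
# r is the fractional token reduction; negative values mean the
# normalized text tokenizes to *more* tokens than the original Python.
for name, r in [("GPTX", 0.137), ("GPT-2", 0.166),
                ("Qwen 2.5", -0.081), ("Llama 3", -0.022)]:
    gain = 1 / (1 - r) - 1
    print(f"{name:>8}: {gain:+.1%}")
# GPTX: +15.9%, GPT-2: +19.9%, Qwen 2.5: -7.5%, Llama 3: -2.2%
```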
## Format
Parquet format with the following schema:
| Field | Type | Description |
|---|---|---|
| `code` | string | Normalized TinyDSL v2 pseudocode |
| `original_code` | string | Original Python source |
| `original_language` | string | Always `python` |
| `score` | float | Quality/difficulty score (if available from source) |
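A minimal loading example with the `datasets` library; the repo id below is a placeholder, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub path.
ds = load_dataset("user/NPset-2", split="train")

row = ds[0]
print(row["code"])           # normalized TinyDSL v2 pseudocode
print(row["original_code"])  # the original Python source
```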
## Sources
- HuggingFaceTB/stack-edu (python)