

# NPset-2 (Python-Edu)

A normalized semi-synthetic Python dataset for training small language models on code logic without the overhead of raw code syntax.

*Figure: tokenizer comparison chart (see the Performance table below).*

## Why

Small language models trained on natural-language corpora develop latent representations of logical constructs (iteration, conditionals, data flow, function composition), yet they struggle to apply this reasoning to source code. Syntactic overhead (delimiters, indentation conventions, language-specific idioms) occupies a disproportionate share of the token budget, requires a vocabulary of code-specific tokens, and introduces a surface-form distribution shift relative to the model's prior knowledge. NPset-2 addresses this by normalizing Python source with an AST-based converter that strips syntactic noise while preserving the full logical structure of each program. The result is a pseudocode representation composed entirely of natural-language tokens, one that aligns with the semantic representations already present in small models and lets them reason about what code does rather than spend capacity learning what it looks like.
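
The converter itself is not published; as a rough sketch of the general approach, a normalizer can walk Python's `ast` and emit a natural-language line per node type. Class and method names below are illustrative, not the dataset's actual implementation:

```python
import ast

class Normalizer(ast.NodeVisitor):
    """Toy sketch: emit natural-language lines for a few AST node types."""

    def __init__(self):
        self.lines = []

    def visit_FunctionDef(self, node):
        # def find_max(numbers): -> "function find_max with input numbers"
        args = ", ".join(a.arg for a in node.args.args)
        self.lines.append(f"function {node.name} with input {args}")
        self.generic_visit(node)

    def visit_Break(self, node):
        # break -> "exit loop"
        self.lines.append("exit loop")

source = "def find_max(numbers):\n    return max(numbers)"
normalizer = Normalizer()
normalizer.visit(ast.parse(source))
print("\n".join(normalizer.lines))  # -> function find_max with input numbers
```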

## The v2 Specification

NPset-2 introduces significant improvements over v1, trading some raw token compression for far lower semantic overhead. Each rule is illustrated in the sketches after this list.

1. **Explicit Block Scoping**: All indented blocks (`if`, `for`, `while`, `try`, `with`) now use numbered/named anchors: `begin if 1 ... end if 1`. This gives small models unambiguous attention anchors.
2. **Natural Language Phrasing**:
   - Functions: `function find_max with input numbers`
   - Calls: `call fibonacci with n - 1`
   - Loops: `exit loop` and `next loop` instead of `break` and `continue`.
3. **Slicing**: The symbol-heavy `[0:10]` is replaced with `starting from index 0 to 10`.
4. **Semantic Normalization**:
   - `isinstance(x, int)` -> `type of x is int`
   - `lambda x: x + 1` -> `function taking x returning x + 1`
   - `async for` keeps its natural spelling, dropping v1's forced underscores.
5. **Strict English Filtering**: Documents with more than 0.5% Chinese characters are dropped, and all remaining text is scrubbed of non-ASCII characters to maintain a clean, English-only training distribution.
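
A hypothetical before/after assembled from the rules above. Only the constructs named in the list are taken from the spec; the connective phrasing is a guess at the converter's actual output:

```python
# Original Python
def find_max(numbers):
    for n in numbers[0:10]:
        if isinstance(n, int):
            return n
```

```text
function find_max with input numbers
begin for 1 n in numbers starting from index 0 to 10
begin if 1 type of n is int
return n
end if 1
end for 1
```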
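A minimal sketch of rule 5's filter, assuming the Chinese-character check counts the CJK Unified Ideographs block and that scrubbing means dropping non-ASCII bytes (function names are illustrative):

```python
def chinese_ratio(text: str) -> float:
    """Fraction of characters in the CJK Unified Ideographs block."""
    if not text:
        return 0.0
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    return cjk / len(text)

def filter_english(text: str) -> str | None:
    """Drop documents with >0.5% Chinese characters, then strip non-ASCII."""
    if chinese_ratio(text) > 0.005:
        return None  # document is dropped from the corpus
    return text.encode("ascii", errors="ignore").decode("ascii")
```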

## Performance (Context Capacity)

Measured against standard tokenizers, TinyDSL v2 significantly expands the effective context window for logic-heavy training when paired with a natural-language tokenizer:

| Tokenizer | Token Reduction | Context Capacity (2048-token window) |
|---|---|---|
| GPTX (custom 32k) | 13.7% | 7.1 -> 8.3 examples (+15.9%) |
| GPT-2 | 16.6% | 7.4 -> 8.9 examples (+19.9%) |
| Qwen 2.5 | 8.1% | 10.1 -> 9.3 examples (-7.5%) |
| Llama 3 | 2.2% | 8.3 -> 8.1 examples (-2.2%) |

**Note:** While raw character counts increase by ~17%, the "token tax" for logical constructs is drastically reduced for models not pre-specialized for code syntax.
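
As a back-of-envelope check of the table, assuming uniformly sized examples (per-example token counts are inferred from the table, not published):

```python
WINDOW = 2048  # context window in tokens

def capacity(examples_before: float, reduction: float) -> float:
    """Examples per window after a fractional token reduction."""
    avg_tokens = WINDOW / examples_before
    return WINDOW / (avg_tokens * (1.0 - reduction))

# GPT-2 row: 7.4 examples before, 16.6% token reduction.
after = capacity(7.4, 0.166)
print(f"7.4 -> {after:.1f} examples (+{(after / 7.4 - 1) * 100:.1f}%)")
# prints: 7.4 -> 8.9 examples (+19.9%)
```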

## Format

Parquet format with the following schema:

| Field | Type | Description |
|---|---|---|
| `code` | string | Normalized TinyDSL v2 pseudocode |
| `original_code` | string | Original Python source |
| `original_language` | string | Always `python` |
| `score` | float | Quality/difficulty score (if available from the source) |
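
Rows can be loaded with the `datasets` library in the usual way. The repository id below is a placeholder; substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository id: replace with this dataset's actual Hub path.
ds = load_dataset("your-org/NPset-2", split="train")

row = ds[0]
print(row["code"])           # normalized TinyDSL v2 pseudocode
print(row["original_code"])  # original Python source
```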

## Sources

- HuggingFaceTB/stack-edu (python)