omarkamali committed
Commit 80cfa45 · verified · 1 parent: 1f01638

Add files using upload-large-folder tool

Files changed (50)
  1. 20250701/af/README.md +252 -0
  2. 20250701/af/dataset/10000k.parquet +3 -0
  3. 20250701/af/dataset/1000k.parquet +3 -0
  4. 20250701/af/dataset/5000k.parquet +3 -0
  5. 20250701/af/dataset/full/full_000.parquet +3 -0
  6. 20250701/af/dataset/full/full_001.parquet +3 -0
  7. 20250701/af/dataset/full/full_002.parquet +3 -0
  8. 20250701/af/dataset/full/full_003.parquet +3 -0
  9. 20250701/af/dataset/full/full_004.parquet +3 -0
  10. 20250701/af/dataset/full/full_005.parquet +3 -0
  11. 20250701/af/dataset/full/full_007.parquet +3 -0
  12. 20250701/af/dataset/full/full_008.parquet +3 -0
  13. 20250701/af/dataset/full/full_009.parquet +3 -0
  14. 20250701/af/dataset/full/full_010.parquet +3 -0
  15. 20250701/af/dataset/full/full_011.parquet +3 -0
  16. 20250701/af/dataset/full/full_012.parquet +3 -0
  17. 20250701/af/dataset/full/full_013.parquet +3 -0
  18. 20250701/af/dataset/full/full_014.parquet +3 -0
  19. 20250701/af/dataset/full/full_015.parquet +3 -0
  20. 20250701/af/dataset/full/full_016.parquet +3 -0
  21. 20250701/af/dataset/full/full_017.parquet +3 -0
  22. 20250701/af/dataset/full/full_018.parquet +3 -0
  23. 20250701/af/dataset/full/full_019.parquet +3 -0
  24. 20250701/af/dataset/full/full_020.parquet +3 -0
  25. 20250701/af/dataset/full/full_021.parquet +3 -0
  26. 20250701/af/dataset/full/full_022.parquet +3 -0
  27. 20250701/af/dataset/full/full_024.parquet +3 -0
  28. 20250701/af/dataset/train/train_010.parquet +3 -0
  29. 20250701/af/dataset/train/train_011.parquet +3 -0
  30. 20250701/af/dataset/train/train_018.parquet +3 -0
  31. 20250701/af/metadata.json +457 -0
  32. 20250701/af/models/subword_markov/af_markov1_metadata.json +8 -0
  33. 20250701/af/models/subword_markov/af_markov2_metadata.json +8 -0
  34. 20250701/af/models/subword_markov/af_markov3_metadata.json +8 -0
  35. 20250701/af/models/subword_ngram/af_2gram_metadata.json +9 -0
  36. 20250701/af/models/subword_ngram/af_3gram_metadata.json +9 -0
  37. 20250701/af/models/subword_ngram/af_4gram_metadata.json +9 -0
  38. 20250701/af/models/tokenizer/af_tokenizer_16k.vocab +0 -0
  39. 20250701/af/models/tokenizer/af_tokenizer_32k.vocab +0 -0
  40. 20250701/af/models/tokenizer/af_tokenizer_64k.vocab +0 -0
  41. 20250701/af/models/tokenizer/af_tokenizer_8k.vocab +0 -0
  42. 20250701/af/models/vocabulary/af_dictionary_metadata.json +40 -0
  43. 20250701/af/models/word_markov/af_markov1_metadata.json +8 -0
  44. 20250701/af/models/word_markov/af_markov2_metadata.json +8 -0
  45. 20250701/af/models/word_markov/af_markov3_metadata.json +8 -0
  46. 20250701/af/models/word_ngram/af_2gram_metadata.json +9 -0
  47. 20250701/af/models/word_ngram/af_3gram_metadata.json +9 -0
  48. 20250701/af/models/word_ngram/af_3gram_model.parquet +3 -0
  49. 20250701/af/models/word_ngram/af_4gram_metadata.json +9 -0
  50. 20250701/af/models/word_ngram/af_4gram_model.parquet +3 -0
20250701/af/README.md ADDED
# Wikipedia AF Dataset (20250701)

## Overview

This dataset contains processed Wikipedia articles for the **af** language, extracted from the Wikipedia dump dated **20250701**. The dataset has been processed through a comprehensive 9-stage pipeline that includes text normalization, tokenization, n-gram analysis, article scoring, and representative sampling.

## Dataset Statistics

- **Total articles**: 124,877
- **Total tokens**: 43,199,204
- **Vocabulary size**: 63,466
- **Majority script**: Latn (all articles)

## Dataset Structure

This dataset is organized into the following components:

### 📰 Full Articles Dataset
- **Complete Dataset**: All Wikipedia articles, split across multiple Parquet files for HuggingFace compatibility
- **Location**: `/dataset/full/full_XXX.parquet`
- **Schema**: id, title, url, text, namespace, raw_mediawiki
- **Optimization**: Small row groups (1000 rows) for efficient multi-reading

### 🤖 Tokenizer Models
- **Location**: `/models/tokenizer/`
- **Multiple Sizes**: 8k, 16k, 32k, and 64k vocabulary sizes
- **SentencePiece Models**: Trained subword tokenizers for different use cases
- **Vocabulary Files**: Complete vocabularies with token mappings for each size

### 📊 Subword N-gram Models (Parquet Format)
- **Location**: `/models/subword_ngram/`
- **2-gram Model**: Subword bigram frequencies and IDF scores in Parquet format
- **3-gram Model**: Subword trigram frequencies and IDF scores in Parquet format
- **4-gram Model**: Subword 4-gram frequencies and IDF scores in Parquet format
- **Top N-grams**: Most frequent subword n-grams in separate Parquet files
- **Optimization**: Small row groups for efficient querying and multi-reading

### 📝 Word N-gram Models (Parquet Format)
- **Location**: `/models/word_ngram/`
- **2-gram Model**: Word-level bigram frequencies and IDF scores in Parquet format
- **3-gram Model**: Word-level trigram frequencies and IDF scores in Parquet format
- **4-gram Model**: Word-level 4-gram frequencies and IDF scores in Parquet format
- **Top N-grams**: Most frequent word n-grams in separate Parquet files
- **Tokenization**: Simple whitespace- and punctuation-based word splitting

### 🔗 Subword Markov Chain Models (Parquet Format)
- **Location**: `/models/subword_markov/`
- **2-gram (bigram) Model**: Subword transition probabilities (1-token context) for text generation
- **3-gram Model**: Higher-order subword context (2 tokens) for better text generation
- **4-gram Model**: Maximum subword context (3 tokens) for sophisticated text generation
- **Schema**: context (JSON), next_token, probability, context_count

### 🔗 Word Markov Chain Models (Parquet Format)
- **Location**: `/models/word_markov/`
- **2-gram (bigram) Model**: Word-level transition probabilities (1-token context) for text generation
- **3-gram Model**: Higher-order word context (2 tokens) for better text generation
- **4-gram Model**: Maximum word context (3 tokens) for sophisticated text generation
- **Schema**: context (JSON), next_token, probability, context_count

### 📚 Vocabulary Models
- **Location**: `/models/vocabulary/`
- **Language Dictionary**: Vocabulous-based language detection model
- **Word-Language Frequencies**: Statistical language identification data

### 📈 Statistics & Reports
- **Comprehensive Statistics**: Detailed corpus analysis in JSON format
- **Human-Readable Summary**: Key statistics and insights

### 🎯 Representative Sample Datasets
- **Location**: `/dataset/`
- **Sample Sizes**: `1000k.parquet`, `5000k.parquet`, `10000k.parquet` (only created if enough articles are available)
- **Coverage-Optimized**: Samples maximize n-gram coverage of the full corpus
- **Schema**: id, title, url, text, tokens (JSON), scores (JSON), features (JSON), individual score columns
- **Optimization**: Small row groups for efficient filtering and analysis
- **Note**: Sample sizes larger than the number of available articles are automatically skipped

## Processing Pipeline

This dataset was created using a 9-stage processing pipeline:

1. **Data Acquisition**: Download and parse Wikipedia XML dumps
2. **Text Normalization**: Clean and normalize text using unscript
3. **Tokenizer Training**: Train SentencePiece subword tokenizers
4. **Dictionary Building**: Build language detection models with vocabulous
5. **N-gram Analysis**: Generate comprehensive n-gram models
6. **Article Scoring**: Score articles for representativeness and quality
7. **Sample Generation**: Create coverage-optimized representative samples
8. **Statistics Generation**: Generate comprehensive corpus statistics
9. **Publication**: Upload all artifacts to Hugging Face Hub

## Usage Examples

### Loading Full Articles Dump

```python
import glob

import pandas as pd

# Load all article shards from the full dataset
files = sorted(glob.glob('dataset/full/full_*.parquet'))
df = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)

# Access article data
print(f"Total articles: {len(df)}")
print(df[['id', 'title', 'text']].head())
```
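
Because the shards are written with small row groups (1000 rows), a shard can also be streamed batch-by-batch instead of loaded whole. A minimal sketch with pyarrow (the shard name is simply the first one in this commit):

```python
import pyarrow.parquet as pq

# Stream one shard in 1000-row batches, reading only two columns
pf = pq.ParquetFile('dataset/full/full_000.parquet')
print(f"Row groups: {pf.num_row_groups}")

for batch in pf.iter_batches(batch_size=1000, columns=['id', 'title']):
    titles = batch.to_pandas()['title']
    print(batch.num_rows, titles.iloc[0])
    break  # stop after the first batch; drop this line to scan the whole shard
```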

### Loading the Tokenizer

```python
import sentencepiece as spm

# Load one of the trained tokenizers (8k, 16k, 32k, or 64k vocabulary)
sp = spm.SentencePieceProcessor()
sp.load('models/tokenizer/af_tokenizer_64k.model')

# Tokenize text
text = "Your text here"
tokens = sp.encode_as_pieces(text)
token_ids = sp.encode_as_ids(text)
```
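
The `.vocab` files shipped in this commit are plain text and can be inspected without loading a model: each line is a tab-separated piece and score (the score is a log probability). A small sketch:

```python
# Inspect a SentencePiece vocabulary file directly
with open('models/tokenizer/af_tokenizer_8k.vocab', encoding='utf-8') as f:
    vocab = [line.rstrip('\n').split('\t') for line in f]

print(f"Vocabulary size: {len(vocab)}")
print(vocab[:5])  # the first entries are special tokens such as <unk>, <s>, </s>
```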

### Loading N-gram Models (Parquet Format)

```python
import pandas as pd
import json

# Load subword 2-gram model
subword_df = pd.read_parquet('models/subword_ngram/af_2gram_model.parquet')

# Access subword n-gram data
print("Subword 2-grams:")
for _, row in subword_df.head().iterrows():
    ngram = json.loads(row['ngram'])  # Convert back from JSON
    frequency = row['frequency']
    idf_score = row['idf_score']
    print(f"N-gram: {ngram}, Freq: {frequency}, IDF: {idf_score:.3f}")

# Load word 2-gram model
word_df = pd.read_parquet('models/word_ngram/af_2gram_model.parquet')

# Access word n-gram data
print("\nWord 2-grams:")
for _, row in word_df.head().iterrows():
    ngram = json.loads(row['ngram'])  # Convert back from JSON
    frequency = row['frequency']
    idf_score = row['idf_score']
    print(f"N-gram: {ngram}, Freq: {frequency}, IDF: {idf_score:.3f}")

# Load top subword n-grams
subword_top_df = pd.read_parquet('models/subword_ngram/af_2gram_top1000.parquet')
print("\nTop 10 subword bigrams:")
print(subword_top_df.head(10))

# Load top word n-grams
word_top_df = pd.read_parquet('models/word_ngram/af_2gram_top1000.parquet')
print("\nTop 10 word bigrams:")
print(word_top_df.head(10))
```
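
Building on the same columns (`ngram`, `frequency`, `idf_score`), one common use is a direct IDF lookup. A sketch, where the query bigram is a made-up example:

```python
import json

import pandas as pd

word_df = pd.read_parquet('models/word_ngram/af_2gram_model.parquet')

# Index IDF scores by n-gram tuple for O(1) lookups
idf = {
    tuple(json.loads(row['ngram'])): row['idf_score']
    for _, row in word_df.iterrows()
}

query = ('suid', 'afrika')  # hypothetical bigram, purely for illustration
print(idf.get(query, 'not in model'))  # rare bigrams score high, common ones low
```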

### Loading Markov Chain Models (Parquet Format)

```python
import pandas as pd
import json

# Load subword Markov chain (1-token context, i.e. the bigram model)
subword_markov_df = pd.read_parquet('models/subword_markov/af_markov1_transitions.parquet')

# Access subword Markov transitions
print("Subword Markov transitions:")
for _, row in subword_markov_df.head().iterrows():
    context = json.loads(row['context'])  # Convert back from JSON
    next_token = row['next_token']
    probability = row['probability']
    context_count = row['context_count']
    print(f"Context: {context} -> Next: '{next_token}' (p={probability:.3f}, count={context_count})")

# Load word Markov chain (1-token context, i.e. the bigram model)
word_markov_df = pd.read_parquet('models/word_markov/af_markov1_transitions.parquet')

# Access word Markov transitions
print("\nWord Markov transitions:")
for _, row in word_markov_df.head().iterrows():
    context = json.loads(row['context'])  # Convert back from JSON
    next_token = row['next_token']
    probability = row['probability']
    context_count = row['context_count']
    print(f"Context: {context} -> Next: '{next_token}' (p={probability:.3f}, count={context_count})")
```
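
The transition tables contain everything needed for simple sampling-based generation. A minimal sketch (not part of the pipeline) using the word-level model with its 1-token context:

```python
import json
import random
from collections import defaultdict

import pandas as pd

df = pd.read_parquet('models/word_markov/af_markov1_transitions.parquet')

# Index transitions by context tuple so each generation step is a dict lookup
transitions = defaultdict(list)
for _, row in df.iterrows():
    context = tuple(json.loads(row['context']))
    transitions[context].append((row['next_token'], row['probability']))

def generate(seed, steps=20):
    """Sample a continuation, drawing each next token from the stored distribution."""
    tokens = [seed]
    for _ in range(steps):
        options = transitions.get(tuple(tokens[-1:]))
        if not options:
            break  # unseen context: stop generating
        next_tokens, probs = zip(*options)
        tokens.append(random.choices(next_tokens, weights=probs)[0])
    return ' '.join(tokens)

print(generate('die'))  # 'die' (the Afrikaans definite article) is a safe seed
```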

### Loading Sample Datasets (Parquet Format)

```python
import pandas as pd
import json

# Load a representative sample
sample_df = pd.read_parquet('dataset/1000k.parquet')

# Access article data
for _, row in sample_df.head().iterrows():
    tokens = json.loads(row['tokens'])  # Convert back from JSON
    scores = json.loads(row['scores'])  # Convert back from JSON
    print(f"Title: {row['title']}")
    print(f"Tokens: {len(tokens)}")
    print(f"Scores: {scores}")
```
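
Since the sample rows also carry individual score columns, they can be filtered or ranked directly. A sketch assuming a `lexical_diversity` column (the name comes from the pipeline's score statistics and may differ in your copy):

```python
import pandas as pd

sample_df = pd.read_parquet('dataset/1000k.parquet')

# Rank the sampled articles by one of the per-article score columns
top = sample_df.sort_values('lexical_diversity', ascending=False)
print(top[['title', 'lexical_diversity']].head(10))
```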

### Loading Dictionary Models

```python
from vocabulous import Vocabulous

# Load language detection model
model = Vocabulous.load('af_dictionary.json')

# Detect language of text
text = "Your text here"
detected_lang = model.detect_language(text)
```

## Data Quality

- **Source**: Official Wikipedia dumps from the Wikimedia Foundation
- **Processing Date**: 2025-07-31T16:50:25.741062
- **Pipeline Version**: 1.0.0
- **Memory Constraints**: Processed within a 32 GB RAM limit
- **Quality Assurance**: Multi-stage validation and error handling

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{wikipedia_af_20250701,
  title={Wikipedia AF Dataset},
  author={Wikipedia Monthly Data Processing Pipeline},
  year={2025},
  url={https://huggingface.co/datasets/omarkamali/wikipedia-monthly-testing},
  note={Processed from Wikipedia dump 20250701}
}
```

## License

This dataset is released under the same licenses as Wikipedia content:
- **Text**: [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/)
- **Code/Models**: [MIT License](https://opensource.org/licenses/MIT)

## Contact

For questions or issues with this dataset, please open an issue in the repository or contact the maintainers.

---

*Generated automatically by the Wikipedia Monthly Data Processing Pipeline*
*Processing completed: 2025-07-31T16:50:25.741062*

20250701/af/dataset/10000k.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:fcef67660aeeeaf2e0d854d9724664658337a0db84912abb57ee3b1a13cd5996
size 2255984

20250701/af/dataset/1000k.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d93cc156cf7b53e0f8b612cc81eb0574e528ad614b6f5541ef8ad6385f129726
size 447476

20250701/af/dataset/5000k.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:67a12e7179faa482e2c0d279e2013b3479b8fbd9a447ef2394f3d46e3c295b05
size 1395097

20250701/af/dataset/full/full_000.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:bf27980260c14b4669d73b6c39d52f5fcc54a4ee353c8049a0268de288a996fe
size 29493901

20250701/af/dataset/full/full_001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0ed1501ed4333bc40e16bbb546c84f26b46d234e605161cd9289b3df33518c8f
size 31171738

20250701/af/dataset/full/full_002.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7bacd92631add76f72b3a96bd2d31aa788080eb6423fd451a2b00e43ea1597cd
size 22028302

20250701/af/dataset/full/full_003.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4648d5d1362765bf3e4768a31b19ff3fe7ccddef67b1a7163992ae349cde91b3
size 16156360

20250701/af/dataset/full/full_004.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:1c57c60ab18eb472387e7f2ce5b245d9cdb214848cff74434e6adad29158baf3
size 14276973

20250701/af/dataset/full/full_005.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e6be742a869c219955bdc0424caadd74fc5094e2e1d5e1a3481da95b45c3a47e
size 14233167

20250701/af/dataset/full/full_007.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4713bb6e1f4824d1f5a239cb24384055b1f63d8b792ad9d42a8ced8b10cfcb60
size 19741456

20250701/af/dataset/full/full_008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a001bbb8ae24de08c7ca252f3cfdea90802edc9d29c4c8b94af4f8a743fa00ce
size 23841162

20250701/af/dataset/full/full_009.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:37869e7e2c03f59fe42933f79604260b46534b6ad27e1bd4ac564c93ceb8091c
size 16703061

20250701/af/dataset/full/full_010.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f5201a5bfad7346b6f75363cadce3bb03e8d8e18ce2bad87f2689c0d701384ed
size 4909680

20250701/af/dataset/full/full_011.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7b3b0ca71968164d73ea9f6faac5bef097a114d744d27f04009cd87ae1e07094
size 3815531

20250701/af/dataset/full/full_012.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:adc290382d49bb7e09cb1e55ccd7db33625724e88d9a0f520983d28568691ca3
size 8103349

20250701/af/dataset/full/full_013.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:cdbe39ba98e070622ea259d58f1519d275c8886960bef4290e2ff1afd3d13f72
size 8073159

20250701/af/dataset/full/full_014.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8fb7c1f929e042575a1f66872286740674b05fd5525b023e007a5dc2284ff50d
size 7657276

20250701/af/dataset/full/full_015.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e312778fb027aa9d5c31940d034da277149c1e41ffda8242727c43ca95c6dcd4
size 9908320

20250701/af/dataset/full/full_016.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:990d58fd7f023ebc263d8eda01965d131df849c95dc66163002ca10f01dd03de
size 11055131

20250701/af/dataset/full/full_017.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ae04db9958d75f0772516b73cc22bc30b3b357c4624f8a53bde9fd5acfea0d9c
size 13054392

20250701/af/dataset/full/full_018.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0bb48caa6763bc4b5aeda2f33e3b372dac24fb89d7d8613e21ae3a73c56e4ee5
size 13224173

20250701/af/dataset/full/full_019.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:856140c15142aa9a2056c2886ca0cd7ca6c65be8eaa3110fa2d9aaded64c2b17
size 9828853

20250701/af/dataset/full/full_020.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e0f8108768a2923e4ea4b169806b7080b003885004d0bbfcee92985d0480cc0c
size 10830389

20250701/af/dataset/full/full_021.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e4a79fbad99fd61bbf27654c5dfdd222e76fba87723f2d2d3ce33d5c3bcd831e
size 9520160

20250701/af/dataset/full/full_022.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a114828d963dc6af0ca2547e3c07c8ef87c730dabfa2961c8b430abf1dc3b075
size 9333909

20250701/af/dataset/full/full_024.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c0b4212987e0dd18023fd848252a14fd532b26c6ac793e6f0bc01df2a460ccdd
size 9761710

20250701/af/dataset/train/train_010.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8fd555e32a4c7afa127c4df791f5f987dd82142be2d5f8eb615be44ab3c71394
size 1506782

20250701/af/dataset/train/train_011.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:75749f9eea7914f3f5e41e26170d1d407d7000e7e8bb6c30f472639ec2ce9629
size 1018886

20250701/af/dataset/train/train_018.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5404f96ab560aed4ba2d9126e2310e448415feac96ee6946f12cbf6a6df0d597
size 4806670
20250701/af/metadata.json ADDED
{
  "total_articles": 124877,
  "dump_date": "20250701",
  "processing_completed_at": "2025-07-31T16:50:25.741062",
  "stage_metadata": {
    "1": {
      "dump_path": "temp_20250731_145350/afwiki.xml.bz2",
      "dump_date": "20250701",
      "article_count": 124877,
      "dump_metadata": {
        "schema_version": "0.11",
        "case_sensitive": "first-letter",
        "sitename": "Wikipedia",
        "base_url": "https://af.wikipedia.org/wiki/Tuisblad"
      },
      "temp_dir": "temp_20250731_145350/af",
      "articles_parquet_files": [
        "temp_20250731_145350/af/af_articles_part_000.parquet",
        "temp_20250731_145350/af/af_articles_part_001.parquet",
        "temp_20250731_145350/af/af_articles_part_002.parquet"
      ],
      "completed_at": "2025-07-31T14:57:44.819267",
      "duration": 232.6967751979828
    },
    "2": {
      "total_articles": 124877,
      "majority_script": "Latn",
      "script_distribution": {
        "Latn": 124877
      },
      "normalized_articles_files": [
        "normalized_articles/af/af_normalized_articles.parquet"
      ],
      "completed_at": "2025-07-31T15:26:41.943544",
      "duration": 1737.2102828025818
    },
    "3": {
      "total_articles": 124877,
      "trained_models": {
        "8k": {
          "vocab_size": 8000,
          "model_path": "models/af/tokenizer/af_tokenizer_8k.model",
          "model_prefix": "models/af/tokenizer/af_tokenizer_8k"
        },
        "16k": {
          "vocab_size": 16000,
          "model_path": "models/af/tokenizer/af_tokenizer_16k.model",
          "model_prefix": "models/af/tokenizer/af_tokenizer_16k"
        },
        "32k": {
          "vocab_size": 32000,
          "model_path": "models/af/tokenizer/af_tokenizer_32k.model",
          "model_prefix": "models/af/tokenizer/af_tokenizer_32k"
        },
        "64k": {
          "vocab_size": 64000,
          "model_path": "models/af/tokenizer/af_tokenizer_64k.model",
          "model_prefix": "models/af/tokenizer/af_tokenizer_64k"
        }
      },
      "primary_model_path": "models/af/tokenizer/af_tokenizer_64k.model",
      "model_path": "models/af/tokenizer/af_tokenizer_64k.model",
      "training_text_size": 204924573,
      "sample_tokens": ["▁afrika", "▁px", "▁ligging", "▁van", "▁afrika", "▁op", "▁n", "▁we", "▁reldkaart", "▁oppervlakte", "▁km", "▁de", "▁bevolking", "▁miljard", "▁de", "▁bevolkings", "digtheid", "▁km", "▁lande", "▁en"],
      "sample_ids": [234, 387, 3767, 31, 234, 86, 18, 221, 61247, 1921, 588, 75, 1106, 4654, 75, 2973, 3542, 588, 1784, 36],
      "completed_at": "2025-07-31T15:29:18.234762",
      "duration": 156.2300901412964
    },
    "4": {
      "total_articles": 124877,
      "training_samples": 49935,
      "eval_samples": 999,
      "model_path": "models/af/af_dictionary.json",
      "dictionary_size": 484292,
      "word_freq_size": 15051,
      "final_accuracy": 1.0,
      "final_f1": 1.0,
      "avg_confidence": 0.9844343164510623,
      "high_confidence_articles": 124660,
      "high_confidence_ratio": 0.9982622900934519,
      "training_cycles": 3,
      "completed_at": "2025-07-31T16:02:59.733485",
      "duration": 2022.2301919460297
    },
    "5": {
      "subword_ngram_models": {
        "2": {
          "unique_ngrams": 988607,
          "total_ngrams": 43074327,
          "min_frequency": 5,
          "model_file": "models/af/subword_ngram/af_2gram_model.parquet",
          "metadata_file": "models/af/subword_ngram/af_2gram_metadata.json",
          "top_ngrams_file": "models/af/subword_ngram/af_2gram_top1000.parquet"
        },
        "3": {
          "unique_ngrams": 1933919,
          "total_ngrams": 42949586,
          "min_frequency": 3,
          "model_file": "models/af/subword_ngram/af_3gram_model.parquet",
          "metadata_file": "models/af/subword_ngram/af_3gram_metadata.json",
          "top_ngrams_file": "models/af/subword_ngram/af_3gram_top1000.parquet"
        },
        "4": {
          "unique_ngrams": 3175168,
          "total_ngrams": 42824846,
          "min_frequency": 2,
          "model_file": "models/af/subword_ngram/af_4gram_model.parquet",
          "metadata_file": "models/af/subword_ngram/af_4gram_metadata.json",
          "top_ngrams_file": "models/af/subword_ngram/af_4gram_top1000.parquet"
        }
      },
      "subword_markov_chains": {
        "1": {
          "context_size": 1,
          "unique_contexts": 63465,
          "total_transitions": 43074327,
          "transitions_file": "models/af/subword_markov/af_markov1_transitions.parquet",
          "metadata_file": "models/af/subword_markov/af_markov1_metadata.json"
        },
        "2": {
          "context_size": 2,
          "unique_contexts": 7502582,
          "total_transitions": 42949586,
          "transitions_file": "models/af/subword_markov/af_markov2_transitions.parquet",
          "metadata_file": "models/af/subword_markov/af_markov2_metadata.json"
        },
        "3": {
          "context_size": 3,
          "unique_contexts": 21032987,
          "total_transitions": 42824846,
          "transitions_file": "models/af/subword_markov/af_markov3_transitions.parquet",
          "metadata_file": "models/af/subword_markov/af_markov3_metadata.json"
        }
      },
      "word_ngram_models": {
        "2": {
          "unique_ngrams": 1312897,
          "total_ngrams": 38487033,
          "min_frequency": 3,
          "model_file": "models/af/word_ngram/af_2gram_model.parquet",
          "metadata_file": "models/af/word_ngram/af_2gram_metadata.json",
          "top_ngrams_file": "models/af/word_ngram/af_2gram_top1000.parquet"
        },
        "3": {
          "unique_ngrams": 3125001,
          "total_ngrams": 38362292,
          "min_frequency": 2,
          "model_file": "models/af/word_ngram/af_3gram_model.parquet",
          "metadata_file": "models/af/word_ngram/af_3gram_metadata.json",
          "top_ngrams_file": "models/af/word_ngram/af_3gram_top1000.parquet"
        },
        "4": {
          "unique_ngrams": 2527228,
          "total_ngrams": 38237555,
          "min_frequency": 2,
          "model_file": "models/af/word_ngram/af_4gram_model.parquet",
          "metadata_file": "models/af/word_ngram/af_4gram_metadata.json",
          "top_ngrams_file": "models/af/word_ngram/af_4gram_top1000.parquet"
        }
      },
      "word_markov_chains": {
        "1": {
          "context_size": 1,
          "unique_contexts": 823324,
          "total_transitions": 38487033,
          "transitions_file": "models/af/word_markov/af_markov1_transitions.parquet",
          "metadata_file": "models/af/word_markov/af_markov1_metadata.json"
        },
        "2": {
          "context_size": 2,
          "unique_contexts": 8320124,
          "total_transitions": 38362292,
          "transitions_file": "models/af/word_markov/af_markov2_transitions.parquet",
          "metadata_file": "models/af/word_markov/af_markov2_metadata.json"
        },
        "3": {
          "context_size": 3,
          "unique_contexts": 19534054,
          "total_transitions": 38237555,
          "transitions_file": "models/af/word_markov/af_markov3_transitions.parquet",
          "metadata_file": "models/af/word_markov/af_markov3_metadata.json"
        }
      },
      "subword_totals": {
        "total_unique_ngrams": 6097694,
        "total_ngrams": 128848759,
        "total_markov_contexts": 28599034,
        "total_transitions": 128848759
      },
      "word_totals": {
        "total_unique_ngrams": 6965126,
        "total_ngrams": 115086880,
        "total_markov_contexts": 28677502,
        "total_transitions": 115086880
      },
      "saved_files": {
        "subword_2gram": "models/af/subword_ngram/af_2gram_model.parquet",
        "subword_2gram_metadata": "models/af/subword_ngram/af_2gram_metadata.json",
        "subword_2gram_top": "models/af/subword_ngram/af_2gram_top1000.parquet",
        "subword_3gram": "models/af/subword_ngram/af_3gram_model.parquet",
        "subword_3gram_metadata": "models/af/subword_ngram/af_3gram_metadata.json",
        "subword_3gram_top": "models/af/subword_ngram/af_3gram_top1000.parquet",
        "subword_4gram": "models/af/subword_ngram/af_4gram_model.parquet",
        "subword_4gram_metadata": "models/af/subword_ngram/af_4gram_metadata.json",
        "subword_4gram_top": "models/af/subword_ngram/af_4gram_top1000.parquet",
        "subword_markov1": "models/af/subword_markov/af_markov1_transitions.parquet",
        "subword_markov1_metadata": "models/af/subword_markov/af_markov1_metadata.json",
        "subword_markov2": "models/af/subword_markov/af_markov2_transitions.parquet",
        "subword_markov2_metadata": "models/af/subword_markov/af_markov2_metadata.json",
        "subword_markov3": "models/af/subword_markov/af_markov3_transitions.parquet",
        "subword_markov3_metadata": "models/af/subword_markov/af_markov3_metadata.json",
        "word_2gram": "models/af/word_ngram/af_2gram_model.parquet",
        "word_2gram_metadata": "models/af/word_ngram/af_2gram_metadata.json",
        "word_2gram_top": "models/af/word_ngram/af_2gram_top1000.parquet",
        "word_3gram": "models/af/word_ngram/af_3gram_model.parquet",
        "word_3gram_metadata": "models/af/word_ngram/af_3gram_metadata.json",
        "word_3gram_top": "models/af/word_ngram/af_3gram_top1000.parquet",
        "word_4gram": "models/af/word_ngram/af_4gram_model.parquet",
        "word_4gram_metadata": "models/af/word_ngram/af_4gram_metadata.json",
        "word_4gram_top": "models/af/word_ngram/af_4gram_top1000.parquet",
        "word_markov1": "models/af/word_markov/af_markov1_transitions.parquet",
        "word_markov1_metadata": "models/af/word_markov/af_markov1_metadata.json",
        "word_markov2": "models/af/word_markov/af_markov2_transitions.parquet",
        "word_markov2_metadata": "models/af/word_markov/af_markov2_metadata.json",
        "word_markov3": "models/af/word_markov/af_markov3_transitions.parquet",
        "word_markov3_metadata": "models/af/word_markov/af_markov3_metadata.json"
      },
      "completed_at": "2025-07-31T16:44:59.441004",
      "duration": 2536.534306049347
    },
    "6": {
      "total_articles": 124877,
      "output_files": [
        "scored_articles/af/af_scored_articles_part_000.parquet",
        "scored_articles/af/af_scored_articles_part_001.parquet",
        "scored_articles/af/af_scored_articles_part_002.parquet",
        "scored_articles/af/af_scored_articles_part_003.parquet",
        "scored_articles/af/af_scored_articles_part_004.parquet"
      ],
      "score_statistics": {
        "mean_tfidf": { "mean": 0.05124321167467498, "min": 0.0005799509719341689, "max": 6.829809753400186, "count": 124877 },
        "overall_novelty": { "mean": 0.0, "min": 0.0, "max": 0.0, "count": 124877 },
        "hapax_legomena_ratio": { "mean": 0.72602454849505, "min": 0.0, "max": 1.0, "count": 124877 },
        "total_tokens": { "mean": 345.9340310865892, "min": 1, "max": 39463, "count": 124877 },
        "lexical_diversity": { "mean": 0.9843732208519338, "min": 0.0, "max": 1.0, "count": 124877 },
        "unique_tokens": { "mean": 156.5526237818013, "min": 1, "max": 6644, "count": 124877 },
        "moving_average_ttr": { "mean": 0.7046797727093798, "min": 0.02930232558139535, "max": 1.0, "count": 124877 },
        "type_token_ratio": { "mean": 0.6290323290908232, "min": 0.06140350877192982, "max": 1.0, "count": 124877 },
        "max_tfidf": { "mean": 0.24166506861657502, "min": 0.013544769437258434, "max": 6.829809753400186, "count": 124877 },
        "overall_representativeness": { "mean": 0.0, "min": 0.0, "max": 0.0, "count": 124877 }
      },
      "ngram_models_used": [],
      "batch_size": 10000,
      "completed_at": "2025-07-31T16:49:33.700147",
      "duration": 257.318391084671
    },
    "7": {
      "samples": {
        "full": [
          "samples/af/af_sample_fullk_part_000.parquet", "samples/af/af_sample_fullk_part_001.parquet",
          "samples/af/af_sample_fullk_part_002.parquet", "samples/af/af_sample_fullk_part_003.parquet",
          "samples/af/af_sample_fullk_part_004.parquet", "samples/af/af_sample_fullk_part_005.parquet",
          "samples/af/af_sample_fullk_part_006.parquet", "samples/af/af_sample_fullk_part_007.parquet",
          "samples/af/af_sample_fullk_part_008.parquet", "samples/af/af_sample_fullk_part_009.parquet",
          "samples/af/af_sample_fullk_part_010.parquet", "samples/af/af_sample_fullk_part_011.parquet",
          "samples/af/af_sample_fullk_part_012.parquet", "samples/af/af_sample_fullk_part_013.parquet",
          "samples/af/af_sample_fullk_part_014.parquet", "samples/af/af_sample_fullk_part_015.parquet",
          "samples/af/af_sample_fullk_part_016.parquet", "samples/af/af_sample_fullk_part_017.parquet",
          "samples/af/af_sample_fullk_part_018.parquet", "samples/af/af_sample_fullk_part_019.parquet",
          "samples/af/af_sample_fullk_part_020.parquet", "samples/af/af_sample_fullk_part_021.parquet",
          "samples/af/af_sample_fullk_part_022.parquet", "samples/af/af_sample_fullk_part_023.parquet",
          "samples/af/af_sample_fullk_part_024.parquet"
        ],
        "train": [
          "samples/af/af_train_part_000.parquet", "samples/af/af_train_part_001.parquet",
          "samples/af/af_train_part_002.parquet", "samples/af/af_train_part_003.parquet",
          "samples/af/af_train_part_004.parquet", "samples/af/af_train_part_005.parquet",
          "samples/af/af_train_part_006.parquet", "samples/af/af_train_part_007.parquet",
          "samples/af/af_train_part_008.parquet", "samples/af/af_train_part_009.parquet",
          "samples/af/af_train_part_010.parquet", "samples/af/af_train_part_011.parquet",
          "samples/af/af_train_part_012.parquet", "samples/af/af_train_part_013.parquet",
          "samples/af/af_train_part_014.parquet", "samples/af/af_train_part_015.parquet",
          "samples/af/af_train_part_016.parquet", "samples/af/af_train_part_017.parquet",
          "samples/af/af_train_part_018.parquet", "samples/af/af_train_part_019.parquet",
          "samples/af/af_train_part_020.parquet", "samples/af/af_train_part_021.parquet",
          "samples/af/af_train_part_022.parquet", "samples/af/af_train_part_023.parquet",
          "samples/af/af_train_part_024.parquet"
        ],
        "1000k": ["samples/af/af_sample_1000k_part_000.parquet"],
        "5000k": ["samples/af/af_sample_5000k_part_000.parquet"],
        "10000k": ["samples/af/af_sample_10000k_part_000.parquet"]
      },
      "sample_counts": {
        "1000": 50,
        "5000": 250,
        "10000": 500,
        "full": 124877,
        "train": 124877
      },
      "total_articles_processed": 124877,
      "processing_completed_at": "2025-07-31T16:49:52.374138",
      "duration": 18.365309953689575
    },
    "8": {
      "statistics_report": "statistics/af/af_statistics_report.json",
      "corpus_statistics": {
        "total_articles": 124877,
        "total_tokens": 43199204,
        "vocabulary_size": 63466,
        "zipf_adherence": "excellent"
      },
      "sample_coverage": {
        "full": { "overall_coverage": 0.0, "vocabulary_coverage": 1.0 },
        "1000": { "overall_coverage": 0.0, "vocabulary_coverage": 0.15731257681278166 },
        "5000": { "overall_coverage": 0.0, "vocabulary_coverage": 0.31859578356915513 },
        "10000": { "overall_coverage": 0.0, "vocabulary_coverage": 0.41441086566035357 }
      },
      "processing_completed_at": "2025-07-31T16:50:24.744598",
      "duration": 32.78457522392273
    }
  },
  "pipeline_version": "1.0.0"
}
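
For orientation, `metadata.json` can be summarized by printing each pipeline stage's recorded duration; a small sketch (path relative to the repository root):

```python
import json

with open('20250701/af/metadata.json', encoding='utf-8') as f:
    meta = json.load(f)

print(f"Articles: {meta['total_articles']} (dump {meta['dump_date']})")
for stage, info in sorted(meta['stage_metadata'].items(), key=lambda kv: int(kv[0])):
    print(f"Stage {stage}: {info['duration']:.1f}s")
```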
20250701/af/models/subword_markov/af_markov1_metadata.json ADDED
{
  "context_size": 1,
  "unique_contexts": 63465,
  "total_transitions": 43074327,
  "language_code": "af",
  "model_type": "subword",
  "created_at": "2025-07-31T16:40:37.063668"
}

20250701/af/models/subword_markov/af_markov2_metadata.json ADDED
{
  "context_size": 2,
  "unique_contexts": 7502582,
  "total_transitions": 42949586,
  "language_code": "af",
  "model_type": "subword",
  "created_at": "2025-07-31T16:41:22.070623"
}

20250701/af/models/subword_markov/af_markov3_metadata.json ADDED
{
  "context_size": 3,
  "unique_contexts": 21032987,
  "total_transitions": 42824846,
  "language_code": "af",
  "model_type": "subword",
  "created_at": "2025-07-31T16:42:36.692158"
}

20250701/af/models/subword_ngram/af_2gram_metadata.json ADDED
{
  "n": 2,
  "total_ngrams": 43074327,
  "unique_ngrams": 988607,
  "min_frequency": 5,
  "language_code": "af",
  "model_type": "subword",
  "created_at": "2025-07-31T16:40:08.319325"
}

20250701/af/models/subword_ngram/af_3gram_metadata.json ADDED
{
  "n": 3,
  "total_ngrams": 42949586,
  "unique_ngrams": 1933919,
  "min_frequency": 3,
  "language_code": "af",
  "model_type": "subword",
  "created_at": "2025-07-31T16:40:13.616357"
}

20250701/af/models/subword_ngram/af_4gram_metadata.json ADDED
{
  "n": 4,
  "total_ngrams": 42824846,
  "unique_ngrams": 3175168,
  "min_frequency": 2,
  "language_code": "af",
  "model_type": "subword",
  "created_at": "2025-07-31T16:40:22.238911"
}

20250701/af/models/tokenizer/af_tokenizer_16k.vocab ADDED
The diff for this file is too large to render.

20250701/af/models/tokenizer/af_tokenizer_32k.vocab ADDED
The diff for this file is too large to render.

20250701/af/models/tokenizer/af_tokenizer_64k.vocab ADDED
The diff for this file is too large to render.

20250701/af/models/tokenizer/af_tokenizer_8k.vocab ADDED
The diff for this file is too large to render.
20250701/af/models/vocabulary/af_dictionary_metadata.json ADDED
{
  "language_code": "af",
  "total_articles": 124877,
  "training_samples": 49935,
  "eval_samples": 999,
  "dictionary_size": 484292,
  "word_freq_size": 15051,
  "training_report": {
    "cycle_reports": [
      {
        "f1": 1.0,
        "accuracy": 1.0,
        "precision": 1.0,
        "recall": 1.0,
        "confusion": 0,
        "confidence_margin": 1.0,
        "confusion_matrix": [],
        "total_samples": 49558,
        "removed_samples": 0
      },
      {
        "f1": 1.0,
        "accuracy": 1.0,
        "precision": 1.0,
        "recall": 1.0,
        "confusion": 0,
        "confidence_margin": 1.0,
        "confusion_matrix": [],
        "total_samples": 49558,
        "removed_samples": 0
      }
    ]
  },
  "final_accuracy": 1.0,
  "final_f1": 1.0,
  "confidence_threshold": 0.5,
  "confidence_margin": 0.3,
  "training_cycles": 3,
  "completed_at": "2025-07-31T16:02:59.004063"
}
20250701/af/models/word_markov/af_markov1_metadata.json ADDED
{
  "context_size": 1,
  "unique_contexts": 823324,
  "total_transitions": 38487033,
  "language_code": "af",
  "model_type": "word",
  "created_at": "2025-07-31T16:43:12.538748"
}

20250701/af/models/word_markov/af_markov2_metadata.json ADDED
{
  "context_size": 2,
  "unique_contexts": 8320124,
  "total_transitions": 38362292,
  "language_code": "af",
  "model_type": "word",
  "created_at": "2025-07-31T16:43:52.328902"
}

20250701/af/models/word_markov/af_markov3_metadata.json ADDED
{
  "context_size": 3,
  "unique_contexts": 19534054,
  "total_transitions": 38237555,
  "language_code": "af",
  "model_type": "word",
  "created_at": "2025-07-31T16:44:56.903211"
}

20250701/af/models/word_ngram/af_2gram_metadata.json ADDED
{
  "n": 2,
  "total_ngrams": 38487033,
  "unique_ngrams": 1312897,
  "min_frequency": 3,
  "language_code": "af",
  "model_type": "word",
  "created_at": "2025-07-31T16:42:42.416761"
}

20250701/af/models/word_ngram/af_3gram_metadata.json ADDED
{
  "n": 3,
  "total_ngrams": 38362292,
  "unique_ngrams": 3125001,
  "min_frequency": 2,
  "language_code": "af",
  "model_type": "word",
  "created_at": "2025-07-31T16:42:49.829509"
}

20250701/af/models/word_ngram/af_3gram_model.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4fa503a945674b3707acc6224b165da87aefe600dd2e2f5d54ad8ce57c2b0ce3
size 54238615

20250701/af/models/word_ngram/af_4gram_metadata.json ADDED
{
  "n": 4,
  "total_ngrams": 38237555,
  "unique_ngrams": 2527228,
  "min_frequency": 2,
  "language_code": "af",
  "model_type": "word",
  "created_at": "2025-07-31T16:42:57.012278"
}

20250701/af/models/word_ngram/af_4gram_model.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:3e04ee684ca4c3c2ac49756b1cb67b308fb9a13c457b3af6eef5b13311ce3379
size 47134196