Guang2666 committed
Commit 8a06e4b · verified · 1 parent: e3a6823

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/model_performance_comparison.png filter=lfs diff=lfs merge=lfs -text
+ assets/pipeline_overview.png filter=lfs diff=lfs merge=lfs -text
LICENSE.txt ADDED
@@ -0,0 +1,214 @@
+ Tencent is pleased to support the community by making DRIVE-SFT available.
+
+ Copyright (C) 2025 Tencent. All rights reserved.
+
+ The open-source software and/or model included in this distribution may have been modified by Tencent (“Tencent Modifications”). All Tencent Modifications are Copyright (C) Tencent.
+
+ DRIVE-SFT is licensed under the License Term of DRIVE-SFT, except for the third-party component listed below, which remains licensed under its original terms. DRIVE-SFT does not impose any additional restrictions beyond those specified in the original license of the third-party component. Users are required to comply with all applicable terms and conditions of the original license and to ensure that the use of the third-party component conforms to all relevant laws and regulations.
+
+ For the avoidance of doubt, DRIVE-SFT refers solely to weights made publicly available by Tencent in accordance with the License Term of DRIVE-SFT.
+
+ Terms of the License Term of DRIVE-SFT:
+ --------------------------------------------------------------------
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sublicense copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ - You agree to use DRIVE-SFT only for academic purposes, and refrain from using it for any commercial or production purposes under any circumstances.
+
+ - The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+
+ Dependencies and Licenses:
+
+ This open-source project, DRIVE: Data Curation Best Practices for Reinforcement Learning wIth VErifiable Reward in Competitive Code Generation, builds upon the following open-source model and/or software component, which remains licensed under its original license. The model or software may include modifications made by Tencent (“Tencent Modifications”), which are Copyright (C) Tencent.
+
+ If you believe there are errors in the attribution below, you may submit your concerns to us for review and correction.
+
+ Open Source Model Licensed under the Apache-2.0:
+ --------------------------------------------------------------------
+ 1. Qwen2.5-32B
+ Copyright 2024 Alibaba Cloud
+
+ Terms of the Apache-2.0:
+ --------------------------------------------------------------------
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ ==================================================
+ End of the Attribution Notice of this project.
README.md ADDED
@@ -0,0 +1,124 @@
+
+ <div align="center">
+
+ # DRIVE: <font color=#6495ED>D</font>ata Curation Best Practices for <font color=#6495ED>R</font>einforcement Learning w<font color=#6495ED>I</font>th <font color=#6495ED>VE</font>rifiable Reward in Competitive Code Generation
+
+ **Hunyuan Team, Tencent**
+
+ </div>
+
+ <p align="center">
+ <a href="https://arxiv.org/abs/2511.06307">📖 Paper</a> •
+ <a href="https://huggingface.co/tencent/DRIVE-SFT">📙 SFT Model</a> •
+ <a href="https://huggingface.co/tencent/DRIVE-RL">📘 RL Model</a> •
+ <a href="#citation"><b>📜 Citation</b></a>
+ </p>
+
+ -----
+
+ ## Abstract
+
+ Recent reasoning-first models have spurred a resurgence of interest in RLVR (Reinforcement Learning with Verifiable Reward). However, advances are dominated by mathematics, and competitive-programming code generation remains relatively underexplored. This work investigates how to construct RLVR datasets and presents practical training techniques that yield strong performance.
+
+ Our pipeline begins with Supervised Fine-Tuning (SFT) distilled from strong open-source models. This is followed by a **two-stage RL process** using executable, testcase-driven rewards:
+
+ 1. **Stage 1 (Entropy Expansion):** Training on a large, uniformly distributed set of problems with moderate rollouts (8) and a shorter context (24k) to expand entropy and mitigate repetition.
+ 2. **Stage 2 (Hard-Focus Curriculum):** Updating on a small, high-quality set of *challenging* problems using Pre-GRPO with a large rollout budget (64) under a hard-focus curriculum.
+
+ We implement our method on Qwen2.5-32B and achieve state-of-the-art performance among models of similar scale, comparable to leading systems such as DeepSeek-V3.1.
+
+ ## 🚀 The DRIVE Pipeline
+
+ Our training pipeline consists of two main phases: Supervised Fine-Tuning (SFT) and a two-stage Reinforcement Learning process, as illustrated below.
+
+ ![pipeline_overview](assets/pipeline_overview.png)
+
+ > *Figure 2: The training pipeline of our models.*
+
+ ### Phase 1: Supervised Fine-Tuning (SFT)
+
+ We begin by fine-tuning Qwen2.5-32B. The key innovation in this stage is **Difficulty-Aware Sampling**:
+
+ * We first classify all competitive programming prompts into three categories: easy, medium, and hard.
+ * To force the model to focus on more challenging problems, we **duplicate hard samples twice** in the final SFT dataset (see the sketch after this list).
+ * We also augment this with general-purpose coding and reasoning-intensive data to improve overall capabilities.
+
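+ A minimal sketch of this sampling scheme (our illustration, not the released training code; the dataset fields and the 2x repeat factor are assumptions read off the description above):
+
+ ```python
+ import random
+
+ def build_sft_dataset(samples, hard_repeat=2):
+     """Difficulty-aware sampling: hard problems are upsampled in the final mix."""
+     dataset = []
+     for sample in samples:  # sample: {"prompt": ..., "response": ..., "difficulty": "easy"|"medium"|"hard"}
+         repeat = hard_repeat if sample["difficulty"] == "hard" else 1  # assumption: hard samples appear 2x
+         dataset.extend([sample] * repeat)
+     random.shuffle(dataset)
+     return dataset
+ ```
+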
+ ### Phase 2: Two-Stage Reinforcement Learning
+
+ After SFT, the model still suffers from low entropy, repetitive generation, and poor performance on hard problems. Our two-stage RL process, driven by executable, testcase-driven rewards, directly addresses these issues.
+
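+ A simplified sketch of such a verifier (our illustration, assuming stdin/stdout-style problems; a real reward harness would add sandboxing and resource limits):
+
+ ```python
+ import subprocess
+
+ def testcase_reward(solution_path, test_cases, timeout=5.0):
+     """Binary, executable reward: 1.0 iff the program passes every test case."""
+     for stdin_text, expected_stdout in test_cases:
+         try:
+             result = subprocess.run(
+                 ["python", solution_path],
+                 input=stdin_text,
+                 capture_output=True,
+                 text=True,
+                 timeout=timeout,
+             )
+         except subprocess.TimeoutExpired:
+             return 0.0  # time limit exceeded on this case
+         if result.returncode != 0 or result.stdout.strip() != expected_stdout.strip():
+             return 0.0  # runtime error or wrong answer
+     return 1.0
+ ```
+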
+ **Stage 1: Entropy Expansion**
+
+ * **Goal:** Increase output diversity and reduce repetitive patterns.
+ * **Data:** A large, uniformly distributed set of ~9k problems.
+ * **Method:** We use 8 rollouts and a shorter 24k token length. As shown in Figure 3, this "24k-style" training (blue line) successfully increases entropy, while standard training (orange line) leads to entropy collapse.
+
+ ![entropy_vs_steps](assets/entropy_vs_steps.png)
+
+ > *Figure 3: The entropy comparison of 24k-style training and 32k-style training.*
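+
+ The entropy tracked here is the mean per-token entropy of the policy's output distribution over its own rollouts; one way to monitor it (a minimal sketch assuming access to the policy logits, not the paper's exact instrumentation):
+
+ ```python
+ import torch.nn.functional as F
+
+ def mean_token_entropy(logits):
+     """logits: [batch, seq_len, vocab_size] tensor from the policy on sampled rollouts."""
+     log_probs = F.log_softmax(logits, dim=-1)
+     token_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # [batch, seq_len]
+     return token_entropy.mean().item()
+ ```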
+
+ **Stage 2: Hard-Focus Curriculum**
+
+ * **Goal:** Master the most challenging problems.
+ * **Data:** A small, high-quality set of difficult problems (e.g., the 72, 50, and 32 hardest cases from LiveCode V6).
+ * **Method:** We apply a "hard-focus curriculum" that progressively retains only the most difficult instances (see the filter sketch after this list). Crucially, we use a **large rollout budget (64-80 rollouts)** in this stage, which we found essential for stable gains on hard problems.
+
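+ A schematic of the curriculum filter mentioned above (our illustration; the pass-rate threshold is an assumption, not the paper's exact criterion):
+
+ ```python
+ def hard_focus_filter(problem_ids, pass_rate, keep_below=0.2):
+     """Retain only problems the current policy still mostly fails.
+
+     pass_rate maps problem_id -> fraction of the large rollout budget
+     (64-80 samples) that passed all test cases in the latest round.
+     """
+     return [pid for pid in problem_ids if pass_rate[pid] < keep_below]
+ ```
+
+ Re-applying such a filter between rounds shrinks the training set toward the hardest cases, matching the 72 → 50 → 32 progression above.
+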
+ ## 📊 Key Results
+
+ Our final 32B model, **DRIVE-RL**, achieves state-of-the-art performance among similarly sized models and is competitive with larger 64k-context models.
+
+ ![](assets/model_performance_comparison.png)
+
+ > *Figure 1: Performance of our models on various benchmarks.*
+
+ ### Pass@1 Performance Comparison
+
+ The two-stage RL pipeline provides significant improvements over the SFT baseline, particularly on challenging benchmarks. We see a **+58.3% relative improvement** on Codeforces OJ.
+
+ | Model | LiveCode 08-11 | LiveCode V5 | LiveCode V6 | LeetCode Weekly (32) | Codeforces OJ (33) |
+ | :--- | :---: | :---: | :---: | :---: | :---: |
+ | DeepSeek-V3.1 (64k) | 0.692 | 0.713 | 0.693 | 0.688 | 0.161 |
+ | Seed1.6-0715 (64k) | 0.803 | 0.824 | 0.770 | 0.743 | 0.188 |
+ | Qwen3-235B-2507 (64k) | 0.681 | 0.713 | 0.646 | 0.688 | 0.200 |
+ | --- | --- | --- | --- | --- | --- |
+ | SFT model (32k) | 0.602 | 0.594 | 0.549 | 0.578 | 0.115 |
+ | RL Stage 1 model (24k) | 0.625 | 0.627 | 0.634 | 0.603 | 0.112 |
+ | **DRIVE-RL model (32k)** | **0.699** | **0.697** | **0.703** | **0.653** | **0.182** |
+ | *Rel. Improvement (RL vs SFT)* | *+16.1%* | *+17.3%* | *+28.1%* | *+13.0%* | *+58.3%* |
+
+ *(Data sourced from Table 2 in our paper)*
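+
+ Pass@1 here follows the standard estimator: for a problem with n samples of which c pass, pass@1 reduces to c/n, averaged over all problems in the benchmark. A tiny helper (ours, for illustration):
+
+ ```python
+ def pass_at_1(per_problem_results):
+     """per_problem_results: list of (num_samples, num_correct) tuples, one per problem."""
+     return sum(c / n for n, c in per_problem_results) / len(per_problem_results)
+ ```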
+
+ ### Key Findings
+
+ 1. **Difficulty-aware training is crucial:** Standard RL struggles with hard problems. Our hard-focus curriculum (Stage 2) is essential for pushing the model's capabilities.
+ 2. **Entropy expansion is necessary:** Skipping Stage 1 (Entropy Expansion) and training *only* on hard cases hurts generalization to out-of-distribution benchmarks. Both stages are necessary.
+ 3. **Large rollouts for hard problems:** A large rollout budget (e.g., 64+) is essential for mastering challenging cases.
+ 4. **Scaling:** The DRIVE strategy shows strong, positive scaling trends when applied to a large-scale internal MoE model.
+
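+ To try the model, it can be loaded with 🤗 Transformers; a minimal, illustrative sketch (the prompt is ours; sampling defaults such as temperature 0.7 and top_p 0.8 come from this repository's `generation_config.json`):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "tencent/DRIVE-SFT"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
+
+ prompt = "Write a Python function that checks whether a string is a palindrome."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ # temperature/top_p/top_k/repetition_penalty are applied from the repo's generation_config.json
+ outputs = model.generate(**inputs, max_new_tokens=512)
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```
+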
+ <a id="citation"></a>
+ ## 📜 Citation
+
+ If you find this work useful, please cite our paper:
+
+ ```bibtex
+ @misc{zhu2025drivedatacurationbest,
+       title={DRIVE: Data Curation Best Practices for Reinforcement Learning with Verifiable Reward in Competitive Code Generation},
+       author={Speed Zhu and Jianwei Cai and Guang Chen and Lulu Wu and Saiyong Yang and Wiggin Zhou},
+       year={2025},
+       eprint={2511.06307},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG},
+       url={https://arxiv.org/abs/2511.06307},
+ }
+ ```
+
+ ## License
+
+ This repository contains two separate licenses for different models:
+
+ - **DRIVE-RL Model**: Licensed under [LICENSE - DRIVE-RL.txt](LICENSE%20-%20DRIVE-RL.txt)
+ - **DRIVE-SFT Model**: Licensed under [LICENSE - DRIVE-SFT.txt](LICENSE%20-%20DRIVE-SFT.txt)
+
+ Please refer to the respective license file for the model you are using.
assets/entropy_vs_steps.png ADDED
assets/model_performance_comparison.png ADDED

Git LFS Details

  • SHA256: b29e33aa6769a66aa5379546c6a24b2b8c17512569f580700f92de6233223ccd
  • Pointer size: 131 Bytes
  • Size of remote file: 107 kB
assets/pipeline_overview.png ADDED

Git LFS Details

  • SHA256: 369ef785b08c80049fb85d688e7027643394639dba5a993c7bea8c494d3da05a
  • Pointer size: 132 Bytes
  • Size of remote file: 4.65 MB
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "hidden_act": "silu",
+   "hidden_size": 5120,
+   "initializer_range": 0.02,
+   "intermediate_size": 27648,
+   "max_position_embeddings": 32768,
+   "max_window_layers": 70,
+   "model_type": "qwen2",
+   "num_attention_heads": 40,
+   "num_hidden_layers": 64,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": 131072,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 152064
+ }
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "pad_token_id": 151643,
+   "repetition_penalty": 1.05,
+   "temperature": 0.7,
+   "top_k": 20,
+   "top_p": 0.8,
+   "transformers_version": "4.41.2"
+ }
model-00001-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a68542579d4098dc9014c00dea8cb24ab9f9033dd95814008dacc7dd79158fec
+ size 4498420872
model-00002-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49604c41d9417040968ff6840441e4d778fff71e9dd33484cbe50489fbcc88fb
+ size 4718804768
model-00003-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c0ac1de4a7a9d9cf276a16b1fd25e87a81e2f1726d117f37761c34c49434065
+ size 4467075880
model-00004-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5148ac9f531d86395e005c7c44173674b20d59fc00b543cce5da15538b7836ba
+ size 4467075880
model-00005-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9caa265c6157a485672eeddb5e472b2f0db1e97bbbef79ea017ecc018f45e7e7
+ size 4718804760
model-00006-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de8f2e9e78f83a376d44b9d9c5a8225165983e2c5004f61220d15cd1bebaf722
+ size 4467075904
model-00007-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ad9f5328e259638951382cf3eb0976ed7e31666161bc485922739cfcddee55b
+ size 4467075904
model-00008-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ad385ee07c6e80a421bf2f443c81f139dcc9032651baf0060717b7924c1cec2
+ size 4718804800
model-00009-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2c3b13b0e933401d1d8b077407ff46f447af3eb0a3d1a9963f9f3a8898bb170
+ size 4467075904
model-00010-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fd074e9882a1b70e3a58617b3cc9682eb50721fd4a444e69901065a5af0613a
+ size 4467075904
model-00011-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:420872ed4eabf929c0d170d7f80459f674926196b1d339f78510a23339d7fdad
+ size 4718804800
model-00012-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9068b19de75afa95ff893f26bda2cb3b7185cf5573cd31a077e7056e9a26d055
+ size 4467075904
model-00013-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fdd3f68b3981c44246bb17b39e73f72f4cac85c4582d70f20887b21acb60a50
+ size 4467075904
model-00014-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d98af6e7d0d3c543ad123179906527ca55aeb052486abf30b0d84136a6c1855
+ size 4718804800
model-00015-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:031fbad3f9a74711573793a93fb46a396fd0e13c35ef5d56f87edd38fa0e3700
+ size 4467075904
model-00016-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f985c90ee439d131903e7a439b11504b35fe858d9252ba0b2e5ba9e1a198c574
+ size 4467075904
model-00017-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0daf5f62f2154d14a0b5a081698da14bd4524d25e0ff54f3e57ffa1bc4032656
+ size 4718804800
model-00018-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23f5d22939b48456c344f97bffac2c822ba140d398aaf43af47460b1e4e8e1db
+ size 4467075904
model-00019-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d2c35dba197a6a272d97faf63289b0bf8ca46c1f789d8145795ad09cb9a33c1
+ size 4467075904
model-00020-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66d180b77327285a23ec7123c0a1e6899c1202a22d2363692d025df5b29adb20
+ size 4718804800
model-00021-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f4fca1c075e4cb1fa05aa96d8c65dbaaa4ece69334777e52213facab884ec28
+ size 4467075904
model-00022-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d1a29a9a5739a44e5e6738cd71e6b73557e1cb5e9b97afdd2dfb6fc69b784e0
+ size 4467075904
model-00023-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c5b93b811f8a292e70c0876016de39ccdd580d470938392886ac466967809fa
+ size 4718804800
model-00024-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecdb008bba5c84a7104d4953800a4ef88d238f10e9b74ff501149d17cbbf6a06
+ size 4467075904
model-00025-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fabc1792f9c8473c78091ea16c53cb2ae53e813848845d9137a82bfd09235288
+ size 4467075904
model-00026-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cfa704f936ea3783dc0bccd0c4a8961899fe4325b8c5f0803da4a13170af41a
+ size 4718804800
model-00027-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:690dbfadb17f59a6079b6e485d61a8db6a8a4de218e8584a0a3b6d34633949f6
+ size 4467075904
model-00028-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d4d012b22589df0ca53716dd3a7f72a9cc873d0925c650e43cf049e7ed00605
+ size 4467075904
model-00029-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52af2e15b6bef95e8f35b1cb042abdb2d1aeb2406bf09e9234c177bdd840672c
+ size 3680563768
model.safetensors.index.json ADDED
@@ -0,0 +1,778 @@
+ {
+   "metadata": {
+     "total_size": 131055505408
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00029-of-00029.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00029.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00002-of-00029.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00029.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00029.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00002-of-00029.safetensors",
+     "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00029.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00029.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00029.safetensors",
+     "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00029.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00029.safetensors",
+     "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00029.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00029.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.k_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.q_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.v_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00006-of-00029.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00006-of-00029.safetensors",
+     "model.layers.10.self_attn.k_proj.bias": "model-00005-of-00029.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00005-of-00029.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00005-of-00029.safetensors",
+     "model.layers.10.self_attn.q_proj.bias": "model-00005-of-00029.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00005-of-00029.safetensors",
+     "model.layers.10.self_attn.v_proj.bias": "model-00005-of-00029.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00005-of-00029.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.k_proj.bias": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.q_proj.bias": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.v_proj.bias": "model-00006-of-00029.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00007-of-00029.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00007-of-00029.safetensors",
+     "model.layers.12.self_attn.k_proj.bias": "model-00006-of-00029.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.12.self_attn.q_proj.bias": "model-00006-of-00029.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.12.self_attn.v_proj.bias": "model-00006-of-00029.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00006-of-00029.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.k_proj.bias": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.q_proj.bias": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.v_proj.bias": "model-00007-of-00029.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00008-of-00029.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00008-of-00029.safetensors",
+     "model.layers.14.self_attn.k_proj.bias": "model-00007-of-00029.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.14.self_attn.q_proj.bias": "model-00007-of-00029.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.14.self_attn.v_proj.bias": "model-00007-of-00029.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00007-of-00029.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.k_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.q_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.v_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.k_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.q_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.v_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00009-of-00029.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00009-of-00029.safetensors",
+     "model.layers.17.self_attn.k_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.17.self_attn.q_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.17.self_attn.v_proj.bias": "model-00008-of-00029.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00008-of-00029.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.k_proj.bias": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.q_proj.bias": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.v_proj.bias": "model-00009-of-00029.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00010-of-00029.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00010-of-00029.safetensors",
+     "model.layers.19.self_attn.k_proj.bias": "model-00009-of-00029.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.19.self_attn.q_proj.bias": "model-00009-of-00029.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.19.self_attn.v_proj.bias": "model-00009-of-00029.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00009-of-00029.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.k_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.q_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.v_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.k_proj.bias": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.q_proj.bias": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.v_proj.bias": "model-00010-of-00029.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00011-of-00029.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00011-of-00029.safetensors",
+     "model.layers.21.self_attn.k_proj.bias": "model-00010-of-00029.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.21.self_attn.q_proj.bias": "model-00010-of-00029.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.21.self_attn.v_proj.bias": "model-00010-of-00029.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00010-of-00029.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.k_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.q_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.v_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.k_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.q_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.v_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00012-of-00029.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00012-of-00029.safetensors",
+     "model.layers.24.self_attn.k_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.24.self_attn.q_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.24.self_attn.v_proj.bias": "model-00011-of-00029.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00011-of-00029.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.k_proj.bias": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.q_proj.bias": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.v_proj.bias": "model-00012-of-00029.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00013-of-00029.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00013-of-00029.safetensors",
+     "model.layers.26.self_attn.k_proj.bias": "model-00012-of-00029.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.26.self_attn.q_proj.bias": "model-00012-of-00029.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.26.self_attn.v_proj.bias": "model-00012-of-00029.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00012-of-00029.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.k_proj.bias": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.q_proj.bias": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.v_proj.bias": "model-00013-of-00029.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.28.input_layernorm.weight": "model-00014-of-00029.safetensors",
+     "model.layers.28.mlp.down_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.28.mlp.gate_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.28.mlp.up_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model-00014-of-00029.safetensors",
+     "model.layers.28.self_attn.k_proj.bias": "model-00013-of-00029.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.28.self_attn.q_proj.bias": "model-00013-of-00029.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.28.self_attn.v_proj.bias": "model-00013-of-00029.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model-00013-of-00029.safetensors",
+     "model.layers.29.input_layernorm.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.mlp.down_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.mlp.gate_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.mlp.up_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.k_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.q_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.v_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00003-of-00029.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model-00003-of-00029.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model-00003-of-00029.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model-00003-of-00029.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00003-of-00029.safetensors",
+     "model.layers.3.self_attn.k_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.3.self_attn.q_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.3.self_attn.v_proj.bias": "model-00002-of-00029.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00029.safetensors",
+     "model.layers.30.input_layernorm.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.mlp.down_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.mlp.gate_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.mlp.up_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.k_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.q_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.v_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.31.input_layernorm.weight": "model-00015-of-00029.safetensors",
+     "model.layers.31.mlp.down_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.31.mlp.gate_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.31.mlp.up_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model-00015-of-00029.safetensors",
+     "model.layers.31.self_attn.k_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.31.self_attn.q_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.31.self_attn.v_proj.bias": "model-00014-of-00029.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model-00014-of-00029.safetensors",
+     "model.layers.32.input_layernorm.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.mlp.down_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.mlp.gate_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.mlp.up_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.post_attention_layernorm.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.k_proj.bias": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.k_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.o_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.q_proj.bias": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.q_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.v_proj.bias": "model-00015-of-00029.safetensors",
+     "model.layers.32.self_attn.v_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.33.input_layernorm.weight": "model-00016-of-00029.safetensors",
+     "model.layers.33.mlp.down_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.33.mlp.gate_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.33.mlp.up_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.33.post_attention_layernorm.weight": "model-00016-of-00029.safetensors",
+     "model.layers.33.self_attn.k_proj.bias": "model-00015-of-00029.safetensors",
+     "model.layers.33.self_attn.k_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.33.self_attn.o_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.33.self_attn.q_proj.bias": "model-00015-of-00029.safetensors",
+     "model.layers.33.self_attn.q_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.33.self_attn.v_proj.bias": "model-00015-of-00029.safetensors",
+     "model.layers.33.self_attn.v_proj.weight": "model-00015-of-00029.safetensors",
+     "model.layers.34.input_layernorm.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.mlp.down_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.mlp.gate_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.mlp.up_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.post_attention_layernorm.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.k_proj.bias": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.k_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.o_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.q_proj.bias": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.q_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.v_proj.bias": "model-00016-of-00029.safetensors",
+     "model.layers.34.self_attn.v_proj.weight": "model-00016-of-00029.safetensors",
+     "model.layers.35.input_layernorm.weight": "model-00017-of-00029.safetensors",
+     "model.layers.35.mlp.down_proj.weight": "model-00017-of-00029.safetensors",
358
+ "model.layers.35.mlp.gate_proj.weight": "model-00016-of-00029.safetensors",
359
+ "model.layers.35.mlp.up_proj.weight": "model-00016-of-00029.safetensors",
360
+ "model.layers.35.post_attention_layernorm.weight": "model-00017-of-00029.safetensors",
361
+ "model.layers.35.self_attn.k_proj.bias": "model-00016-of-00029.safetensors",
362
+ "model.layers.35.self_attn.k_proj.weight": "model-00016-of-00029.safetensors",
363
+ "model.layers.35.self_attn.o_proj.weight": "model-00016-of-00029.safetensors",
364
+ "model.layers.35.self_attn.q_proj.bias": "model-00016-of-00029.safetensors",
365
+ "model.layers.35.self_attn.q_proj.weight": "model-00016-of-00029.safetensors",
366
+ "model.layers.35.self_attn.v_proj.bias": "model-00016-of-00029.safetensors",
367
+ "model.layers.35.self_attn.v_proj.weight": "model-00016-of-00029.safetensors",
368
+ "model.layers.36.input_layernorm.weight": "model-00017-of-00029.safetensors",
369
+ "model.layers.36.mlp.down_proj.weight": "model-00017-of-00029.safetensors",
370
+ "model.layers.36.mlp.gate_proj.weight": "model-00017-of-00029.safetensors",
371
+ "model.layers.36.mlp.up_proj.weight": "model-00017-of-00029.safetensors",
372
+ "model.layers.36.post_attention_layernorm.weight": "model-00017-of-00029.safetensors",
373
+ "model.layers.36.self_attn.k_proj.bias": "model-00017-of-00029.safetensors",
374
+ "model.layers.36.self_attn.k_proj.weight": "model-00017-of-00029.safetensors",
375
+ "model.layers.36.self_attn.o_proj.weight": "model-00017-of-00029.safetensors",
376
+ "model.layers.36.self_attn.q_proj.bias": "model-00017-of-00029.safetensors",
377
+ "model.layers.36.self_attn.q_proj.weight": "model-00017-of-00029.safetensors",
378
+ "model.layers.36.self_attn.v_proj.bias": "model-00017-of-00029.safetensors",
379
+ "model.layers.36.self_attn.v_proj.weight": "model-00017-of-00029.safetensors",
380
+ "model.layers.37.input_layernorm.weight": "model-00017-of-00029.safetensors",
381
+ "model.layers.37.mlp.down_proj.weight": "model-00017-of-00029.safetensors",
382
+ "model.layers.37.mlp.gate_proj.weight": "model-00017-of-00029.safetensors",
383
+ "model.layers.37.mlp.up_proj.weight": "model-00017-of-00029.safetensors",
384
+ "model.layers.37.post_attention_layernorm.weight": "model-00017-of-00029.safetensors",
385
+ "model.layers.37.self_attn.k_proj.bias": "model-00017-of-00029.safetensors",
386
+ "model.layers.37.self_attn.k_proj.weight": "model-00017-of-00029.safetensors",
387
+ "model.layers.37.self_attn.o_proj.weight": "model-00017-of-00029.safetensors",
388
+ "model.layers.37.self_attn.q_proj.bias": "model-00017-of-00029.safetensors",
389
+ "model.layers.37.self_attn.q_proj.weight": "model-00017-of-00029.safetensors",
390
+ "model.layers.37.self_attn.v_proj.bias": "model-00017-of-00029.safetensors",
391
+ "model.layers.37.self_attn.v_proj.weight": "model-00017-of-00029.safetensors",
392
+ "model.layers.38.input_layernorm.weight": "model-00018-of-00029.safetensors",
393
+ "model.layers.38.mlp.down_proj.weight": "model-00018-of-00029.safetensors",
394
+ "model.layers.38.mlp.gate_proj.weight": "model-00018-of-00029.safetensors",
395
+ "model.layers.38.mlp.up_proj.weight": "model-00018-of-00029.safetensors",
396
+ "model.layers.38.post_attention_layernorm.weight": "model-00018-of-00029.safetensors",
397
+ "model.layers.38.self_attn.k_proj.bias": "model-00017-of-00029.safetensors",
398
+ "model.layers.38.self_attn.k_proj.weight": "model-00017-of-00029.safetensors",
399
+ "model.layers.38.self_attn.o_proj.weight": "model-00017-of-00029.safetensors",
400
+ "model.layers.38.self_attn.q_proj.bias": "model-00017-of-00029.safetensors",
401
+ "model.layers.38.self_attn.q_proj.weight": "model-00017-of-00029.safetensors",
402
+ "model.layers.38.self_attn.v_proj.bias": "model-00017-of-00029.safetensors",
403
+ "model.layers.38.self_attn.v_proj.weight": "model-00017-of-00029.safetensors",
404
+ "model.layers.39.input_layernorm.weight": "model-00018-of-00029.safetensors",
405
+ "model.layers.39.mlp.down_proj.weight": "model-00018-of-00029.safetensors",
406
+ "model.layers.39.mlp.gate_proj.weight": "model-00018-of-00029.safetensors",
407
+ "model.layers.39.mlp.up_proj.weight": "model-00018-of-00029.safetensors",
408
+ "model.layers.39.post_attention_layernorm.weight": "model-00018-of-00029.safetensors",
409
+ "model.layers.39.self_attn.k_proj.bias": "model-00018-of-00029.safetensors",
410
+ "model.layers.39.self_attn.k_proj.weight": "model-00018-of-00029.safetensors",
411
+ "model.layers.39.self_attn.o_proj.weight": "model-00018-of-00029.safetensors",
412
+ "model.layers.39.self_attn.q_proj.bias": "model-00018-of-00029.safetensors",
413
+ "model.layers.39.self_attn.q_proj.weight": "model-00018-of-00029.safetensors",
414
+ "model.layers.39.self_attn.v_proj.bias": "model-00018-of-00029.safetensors",
415
+ "model.layers.39.self_attn.v_proj.weight": "model-00018-of-00029.safetensors",
416
+ "model.layers.4.input_layernorm.weight": "model-00003-of-00029.safetensors",
417
+ "model.layers.4.mlp.down_proj.weight": "model-00003-of-00029.safetensors",
418
+ "model.layers.4.mlp.gate_proj.weight": "model-00003-of-00029.safetensors",
419
+ "model.layers.4.mlp.up_proj.weight": "model-00003-of-00029.safetensors",
420
+ "model.layers.4.post_attention_layernorm.weight": "model-00003-of-00029.safetensors",
421
+ "model.layers.4.self_attn.k_proj.bias": "model-00003-of-00029.safetensors",
422
+ "model.layers.4.self_attn.k_proj.weight": "model-00003-of-00029.safetensors",
423
+ "model.layers.4.self_attn.o_proj.weight": "model-00003-of-00029.safetensors",
424
+ "model.layers.4.self_attn.q_proj.bias": "model-00003-of-00029.safetensors",
425
+ "model.layers.4.self_attn.q_proj.weight": "model-00003-of-00029.safetensors",
426
+ "model.layers.4.self_attn.v_proj.bias": "model-00003-of-00029.safetensors",
427
+ "model.layers.4.self_attn.v_proj.weight": "model-00003-of-00029.safetensors",
428
+ "model.layers.40.input_layernorm.weight": "model-00019-of-00029.safetensors",
429
+ "model.layers.40.mlp.down_proj.weight": "model-00019-of-00029.safetensors",
430
+ "model.layers.40.mlp.gate_proj.weight": "model-00018-of-00029.safetensors",
431
+ "model.layers.40.mlp.up_proj.weight": "model-00019-of-00029.safetensors",
432
+ "model.layers.40.post_attention_layernorm.weight": "model-00019-of-00029.safetensors",
433
+ "model.layers.40.self_attn.k_proj.bias": "model-00018-of-00029.safetensors",
434
+ "model.layers.40.self_attn.k_proj.weight": "model-00018-of-00029.safetensors",
435
+ "model.layers.40.self_attn.o_proj.weight": "model-00018-of-00029.safetensors",
436
+ "model.layers.40.self_attn.q_proj.bias": "model-00018-of-00029.safetensors",
437
+ "model.layers.40.self_attn.q_proj.weight": "model-00018-of-00029.safetensors",
438
+ "model.layers.40.self_attn.v_proj.bias": "model-00018-of-00029.safetensors",
439
+ "model.layers.40.self_attn.v_proj.weight": "model-00018-of-00029.safetensors",
440
+ "model.layers.41.input_layernorm.weight": "model-00019-of-00029.safetensors",
441
+ "model.layers.41.mlp.down_proj.weight": "model-00019-of-00029.safetensors",
442
+ "model.layers.41.mlp.gate_proj.weight": "model-00019-of-00029.safetensors",
443
+ "model.layers.41.mlp.up_proj.weight": "model-00019-of-00029.safetensors",
444
+ "model.layers.41.post_attention_layernorm.weight": "model-00019-of-00029.safetensors",
445
+ "model.layers.41.self_attn.k_proj.bias": "model-00019-of-00029.safetensors",
446
+ "model.layers.41.self_attn.k_proj.weight": "model-00019-of-00029.safetensors",
447
+ "model.layers.41.self_attn.o_proj.weight": "model-00019-of-00029.safetensors",
448
+ "model.layers.41.self_attn.q_proj.bias": "model-00019-of-00029.safetensors",
449
+ "model.layers.41.self_attn.q_proj.weight": "model-00019-of-00029.safetensors",
450
+ "model.layers.41.self_attn.v_proj.bias": "model-00019-of-00029.safetensors",
451
+ "model.layers.41.self_attn.v_proj.weight": "model-00019-of-00029.safetensors",
452
+ "model.layers.42.input_layernorm.weight": "model-00020-of-00029.safetensors",
453
+ "model.layers.42.mlp.down_proj.weight": "model-00020-of-00029.safetensors",
454
+ "model.layers.42.mlp.gate_proj.weight": "model-00019-of-00029.safetensors",
455
+ "model.layers.42.mlp.up_proj.weight": "model-00019-of-00029.safetensors",
456
+ "model.layers.42.post_attention_layernorm.weight": "model-00020-of-00029.safetensors",
457
+ "model.layers.42.self_attn.k_proj.bias": "model-00019-of-00029.safetensors",
458
+ "model.layers.42.self_attn.k_proj.weight": "model-00019-of-00029.safetensors",
459
+ "model.layers.42.self_attn.o_proj.weight": "model-00019-of-00029.safetensors",
460
+ "model.layers.42.self_attn.q_proj.bias": "model-00019-of-00029.safetensors",
461
+ "model.layers.42.self_attn.q_proj.weight": "model-00019-of-00029.safetensors",
462
+ "model.layers.42.self_attn.v_proj.bias": "model-00019-of-00029.safetensors",
463
+ "model.layers.42.self_attn.v_proj.weight": "model-00019-of-00029.safetensors",
464
+ "model.layers.43.input_layernorm.weight": "model-00020-of-00029.safetensors",
465
+ "model.layers.43.mlp.down_proj.weight": "model-00020-of-00029.safetensors",
466
+ "model.layers.43.mlp.gate_proj.weight": "model-00020-of-00029.safetensors",
467
+ "model.layers.43.mlp.up_proj.weight": "model-00020-of-00029.safetensors",
468
+ "model.layers.43.post_attention_layernorm.weight": "model-00020-of-00029.safetensors",
469
+ "model.layers.43.self_attn.k_proj.bias": "model-00020-of-00029.safetensors",
470
+ "model.layers.43.self_attn.k_proj.weight": "model-00020-of-00029.safetensors",
471
+ "model.layers.43.self_attn.o_proj.weight": "model-00020-of-00029.safetensors",
472
+ "model.layers.43.self_attn.q_proj.bias": "model-00020-of-00029.safetensors",
473
+ "model.layers.43.self_attn.q_proj.weight": "model-00020-of-00029.safetensors",
474
+ "model.layers.43.self_attn.v_proj.bias": "model-00020-of-00029.safetensors",
475
+ "model.layers.43.self_attn.v_proj.weight": "model-00020-of-00029.safetensors",
476
+ "model.layers.44.input_layernorm.weight": "model-00020-of-00029.safetensors",
477
+ "model.layers.44.mlp.down_proj.weight": "model-00020-of-00029.safetensors",
478
+ "model.layers.44.mlp.gate_proj.weight": "model-00020-of-00029.safetensors",
479
+ "model.layers.44.mlp.up_proj.weight": "model-00020-of-00029.safetensors",
480
+ "model.layers.44.post_attention_layernorm.weight": "model-00020-of-00029.safetensors",
481
+ "model.layers.44.self_attn.k_proj.bias": "model-00020-of-00029.safetensors",
482
+ "model.layers.44.self_attn.k_proj.weight": "model-00020-of-00029.safetensors",
483
+ "model.layers.44.self_attn.o_proj.weight": "model-00020-of-00029.safetensors",
484
+ "model.layers.44.self_attn.q_proj.bias": "model-00020-of-00029.safetensors",
485
+ "model.layers.44.self_attn.q_proj.weight": "model-00020-of-00029.safetensors",
486
+ "model.layers.44.self_attn.v_proj.bias": "model-00020-of-00029.safetensors",
487
+ "model.layers.44.self_attn.v_proj.weight": "model-00020-of-00029.safetensors",
488
+ "model.layers.45.input_layernorm.weight": "model-00021-of-00029.safetensors",
489
+ "model.layers.45.mlp.down_proj.weight": "model-00021-of-00029.safetensors",
490
+ "model.layers.45.mlp.gate_proj.weight": "model-00021-of-00029.safetensors",
491
+ "model.layers.45.mlp.up_proj.weight": "model-00021-of-00029.safetensors",
492
+ "model.layers.45.post_attention_layernorm.weight": "model-00021-of-00029.safetensors",
493
+ "model.layers.45.self_attn.k_proj.bias": "model-00020-of-00029.safetensors",
494
+ "model.layers.45.self_attn.k_proj.weight": "model-00020-of-00029.safetensors",
495
+ "model.layers.45.self_attn.o_proj.weight": "model-00020-of-00029.safetensors",
496
+ "model.layers.45.self_attn.q_proj.bias": "model-00020-of-00029.safetensors",
497
+ "model.layers.45.self_attn.q_proj.weight": "model-00020-of-00029.safetensors",
498
+ "model.layers.45.self_attn.v_proj.bias": "model-00020-of-00029.safetensors",
499
+ "model.layers.45.self_attn.v_proj.weight": "model-00020-of-00029.safetensors",
500
+ "model.layers.46.input_layernorm.weight": "model-00021-of-00029.safetensors",
501
+ "model.layers.46.mlp.down_proj.weight": "model-00021-of-00029.safetensors",
502
+ "model.layers.46.mlp.gate_proj.weight": "model-00021-of-00029.safetensors",
503
+ "model.layers.46.mlp.up_proj.weight": "model-00021-of-00029.safetensors",
504
+ "model.layers.46.post_attention_layernorm.weight": "model-00021-of-00029.safetensors",
505
+ "model.layers.46.self_attn.k_proj.bias": "model-00021-of-00029.safetensors",
506
+ "model.layers.46.self_attn.k_proj.weight": "model-00021-of-00029.safetensors",
507
+ "model.layers.46.self_attn.o_proj.weight": "model-00021-of-00029.safetensors",
508
+ "model.layers.46.self_attn.q_proj.bias": "model-00021-of-00029.safetensors",
509
+ "model.layers.46.self_attn.q_proj.weight": "model-00021-of-00029.safetensors",
510
+ "model.layers.46.self_attn.v_proj.bias": "model-00021-of-00029.safetensors",
511
+ "model.layers.46.self_attn.v_proj.weight": "model-00021-of-00029.safetensors",
512
+ "model.layers.47.input_layernorm.weight": "model-00022-of-00029.safetensors",
513
+ "model.layers.47.mlp.down_proj.weight": "model-00022-of-00029.safetensors",
514
+ "model.layers.47.mlp.gate_proj.weight": "model-00021-of-00029.safetensors",
515
+ "model.layers.47.mlp.up_proj.weight": "model-00022-of-00029.safetensors",
516
+ "model.layers.47.post_attention_layernorm.weight": "model-00022-of-00029.safetensors",
517
+ "model.layers.47.self_attn.k_proj.bias": "model-00021-of-00029.safetensors",
518
+ "model.layers.47.self_attn.k_proj.weight": "model-00021-of-00029.safetensors",
519
+ "model.layers.47.self_attn.o_proj.weight": "model-00021-of-00029.safetensors",
520
+ "model.layers.47.self_attn.q_proj.bias": "model-00021-of-00029.safetensors",
521
+ "model.layers.47.self_attn.q_proj.weight": "model-00021-of-00029.safetensors",
522
+ "model.layers.47.self_attn.v_proj.bias": "model-00021-of-00029.safetensors",
523
+ "model.layers.47.self_attn.v_proj.weight": "model-00021-of-00029.safetensors",
524
+ "model.layers.48.input_layernorm.weight": "model-00022-of-00029.safetensors",
525
+ "model.layers.48.mlp.down_proj.weight": "model-00022-of-00029.safetensors",
526
+ "model.layers.48.mlp.gate_proj.weight": "model-00022-of-00029.safetensors",
527
+ "model.layers.48.mlp.up_proj.weight": "model-00022-of-00029.safetensors",
528
+ "model.layers.48.post_attention_layernorm.weight": "model-00022-of-00029.safetensors",
529
+ "model.layers.48.self_attn.k_proj.bias": "model-00022-of-00029.safetensors",
530
+ "model.layers.48.self_attn.k_proj.weight": "model-00022-of-00029.safetensors",
531
+ "model.layers.48.self_attn.o_proj.weight": "model-00022-of-00029.safetensors",
532
+ "model.layers.48.self_attn.q_proj.bias": "model-00022-of-00029.safetensors",
533
+ "model.layers.48.self_attn.q_proj.weight": "model-00022-of-00029.safetensors",
534
+ "model.layers.48.self_attn.v_proj.bias": "model-00022-of-00029.safetensors",
535
+ "model.layers.48.self_attn.v_proj.weight": "model-00022-of-00029.safetensors",
536
+ "model.layers.49.input_layernorm.weight": "model-00023-of-00029.safetensors",
537
+ "model.layers.49.mlp.down_proj.weight": "model-00023-of-00029.safetensors",
538
+ "model.layers.49.mlp.gate_proj.weight": "model-00022-of-00029.safetensors",
539
+ "model.layers.49.mlp.up_proj.weight": "model-00022-of-00029.safetensors",
540
+ "model.layers.49.post_attention_layernorm.weight": "model-00023-of-00029.safetensors",
541
+ "model.layers.49.self_attn.k_proj.bias": "model-00022-of-00029.safetensors",
542
+ "model.layers.49.self_attn.k_proj.weight": "model-00022-of-00029.safetensors",
543
+ "model.layers.49.self_attn.o_proj.weight": "model-00022-of-00029.safetensors",
544
+ "model.layers.49.self_attn.q_proj.bias": "model-00022-of-00029.safetensors",
545
+ "model.layers.49.self_attn.q_proj.weight": "model-00022-of-00029.safetensors",
546
+ "model.layers.49.self_attn.v_proj.bias": "model-00022-of-00029.safetensors",
547
+ "model.layers.49.self_attn.v_proj.weight": "model-00022-of-00029.safetensors",
548
+ "model.layers.5.input_layernorm.weight": "model-00004-of-00029.safetensors",
549
+ "model.layers.5.mlp.down_proj.weight": "model-00004-of-00029.safetensors",
550
+ "model.layers.5.mlp.gate_proj.weight": "model-00003-of-00029.safetensors",
551
+ "model.layers.5.mlp.up_proj.weight": "model-00004-of-00029.safetensors",
552
+ "model.layers.5.post_attention_layernorm.weight": "model-00004-of-00029.safetensors",
553
+ "model.layers.5.self_attn.k_proj.bias": "model-00003-of-00029.safetensors",
554
+ "model.layers.5.self_attn.k_proj.weight": "model-00003-of-00029.safetensors",
555
+ "model.layers.5.self_attn.o_proj.weight": "model-00003-of-00029.safetensors",
556
+ "model.layers.5.self_attn.q_proj.bias": "model-00003-of-00029.safetensors",
557
+ "model.layers.5.self_attn.q_proj.weight": "model-00003-of-00029.safetensors",
558
+ "model.layers.5.self_attn.v_proj.bias": "model-00003-of-00029.safetensors",
559
+ "model.layers.5.self_attn.v_proj.weight": "model-00003-of-00029.safetensors",
560
+ "model.layers.50.input_layernorm.weight": "model-00023-of-00029.safetensors",
561
+ "model.layers.50.mlp.down_proj.weight": "model-00023-of-00029.safetensors",
562
+ "model.layers.50.mlp.gate_proj.weight": "model-00023-of-00029.safetensors",
563
+ "model.layers.50.mlp.up_proj.weight": "model-00023-of-00029.safetensors",
564
+ "model.layers.50.post_attention_layernorm.weight": "model-00023-of-00029.safetensors",
565
+ "model.layers.50.self_attn.k_proj.bias": "model-00023-of-00029.safetensors",
566
+ "model.layers.50.self_attn.k_proj.weight": "model-00023-of-00029.safetensors",
567
+ "model.layers.50.self_attn.o_proj.weight": "model-00023-of-00029.safetensors",
568
+ "model.layers.50.self_attn.q_proj.bias": "model-00023-of-00029.safetensors",
569
+ "model.layers.50.self_attn.q_proj.weight": "model-00023-of-00029.safetensors",
570
+ "model.layers.50.self_attn.v_proj.bias": "model-00023-of-00029.safetensors",
571
+ "model.layers.50.self_attn.v_proj.weight": "model-00023-of-00029.safetensors",
572
+ "model.layers.51.input_layernorm.weight": "model-00023-of-00029.safetensors",
573
+ "model.layers.51.mlp.down_proj.weight": "model-00023-of-00029.safetensors",
574
+ "model.layers.51.mlp.gate_proj.weight": "model-00023-of-00029.safetensors",
575
+ "model.layers.51.mlp.up_proj.weight": "model-00023-of-00029.safetensors",
576
+ "model.layers.51.post_attention_layernorm.weight": "model-00023-of-00029.safetensors",
577
+ "model.layers.51.self_attn.k_proj.bias": "model-00023-of-00029.safetensors",
578
+ "model.layers.51.self_attn.k_proj.weight": "model-00023-of-00029.safetensors",
579
+ "model.layers.51.self_attn.o_proj.weight": "model-00023-of-00029.safetensors",
580
+ "model.layers.51.self_attn.q_proj.bias": "model-00023-of-00029.safetensors",
581
+ "model.layers.51.self_attn.q_proj.weight": "model-00023-of-00029.safetensors",
582
+ "model.layers.51.self_attn.v_proj.bias": "model-00023-of-00029.safetensors",
583
+ "model.layers.51.self_attn.v_proj.weight": "model-00023-of-00029.safetensors",
584
+ "model.layers.52.input_layernorm.weight": "model-00024-of-00029.safetensors",
585
+ "model.layers.52.mlp.down_proj.weight": "model-00024-of-00029.safetensors",
586
+ "model.layers.52.mlp.gate_proj.weight": "model-00024-of-00029.safetensors",
587
+ "model.layers.52.mlp.up_proj.weight": "model-00024-of-00029.safetensors",
588
+ "model.layers.52.post_attention_layernorm.weight": "model-00024-of-00029.safetensors",
589
+ "model.layers.52.self_attn.k_proj.bias": "model-00023-of-00029.safetensors",
590
+ "model.layers.52.self_attn.k_proj.weight": "model-00023-of-00029.safetensors",
591
+ "model.layers.52.self_attn.o_proj.weight": "model-00023-of-00029.safetensors",
592
+ "model.layers.52.self_attn.q_proj.bias": "model-00023-of-00029.safetensors",
593
+ "model.layers.52.self_attn.q_proj.weight": "model-00023-of-00029.safetensors",
594
+ "model.layers.52.self_attn.v_proj.bias": "model-00023-of-00029.safetensors",
595
+ "model.layers.52.self_attn.v_proj.weight": "model-00023-of-00029.safetensors",
596
+ "model.layers.53.input_layernorm.weight": "model-00024-of-00029.safetensors",
597
+ "model.layers.53.mlp.down_proj.weight": "model-00024-of-00029.safetensors",
598
+ "model.layers.53.mlp.gate_proj.weight": "model-00024-of-00029.safetensors",
599
+ "model.layers.53.mlp.up_proj.weight": "model-00024-of-00029.safetensors",
600
+ "model.layers.53.post_attention_layernorm.weight": "model-00024-of-00029.safetensors",
601
+ "model.layers.53.self_attn.k_proj.bias": "model-00024-of-00029.safetensors",
602
+ "model.layers.53.self_attn.k_proj.weight": "model-00024-of-00029.safetensors",
603
+ "model.layers.53.self_attn.o_proj.weight": "model-00024-of-00029.safetensors",
604
+ "model.layers.53.self_attn.q_proj.bias": "model-00024-of-00029.safetensors",
605
+ "model.layers.53.self_attn.q_proj.weight": "model-00024-of-00029.safetensors",
606
+ "model.layers.53.self_attn.v_proj.bias": "model-00024-of-00029.safetensors",
607
+ "model.layers.53.self_attn.v_proj.weight": "model-00024-of-00029.safetensors",
608
+ "model.layers.54.input_layernorm.weight": "model-00025-of-00029.safetensors",
609
+ "model.layers.54.mlp.down_proj.weight": "model-00025-of-00029.safetensors",
610
+ "model.layers.54.mlp.gate_proj.weight": "model-00024-of-00029.safetensors",
611
+ "model.layers.54.mlp.up_proj.weight": "model-00025-of-00029.safetensors",
612
+ "model.layers.54.post_attention_layernorm.weight": "model-00025-of-00029.safetensors",
613
+ "model.layers.54.self_attn.k_proj.bias": "model-00024-of-00029.safetensors",
614
+ "model.layers.54.self_attn.k_proj.weight": "model-00024-of-00029.safetensors",
615
+ "model.layers.54.self_attn.o_proj.weight": "model-00024-of-00029.safetensors",
616
+ "model.layers.54.self_attn.q_proj.bias": "model-00024-of-00029.safetensors",
617
+ "model.layers.54.self_attn.q_proj.weight": "model-00024-of-00029.safetensors",
618
+ "model.layers.54.self_attn.v_proj.bias": "model-00024-of-00029.safetensors",
619
+ "model.layers.54.self_attn.v_proj.weight": "model-00024-of-00029.safetensors",
620
+ "model.layers.55.input_layernorm.weight": "model-00025-of-00029.safetensors",
621
+ "model.layers.55.mlp.down_proj.weight": "model-00025-of-00029.safetensors",
622
+ "model.layers.55.mlp.gate_proj.weight": "model-00025-of-00029.safetensors",
623
+ "model.layers.55.mlp.up_proj.weight": "model-00025-of-00029.safetensors",
624
+ "model.layers.55.post_attention_layernorm.weight": "model-00025-of-00029.safetensors",
625
+ "model.layers.55.self_attn.k_proj.bias": "model-00025-of-00029.safetensors",
626
+ "model.layers.55.self_attn.k_proj.weight": "model-00025-of-00029.safetensors",
627
+ "model.layers.55.self_attn.o_proj.weight": "model-00025-of-00029.safetensors",
628
+ "model.layers.55.self_attn.q_proj.bias": "model-00025-of-00029.safetensors",
629
+ "model.layers.55.self_attn.q_proj.weight": "model-00025-of-00029.safetensors",
630
+ "model.layers.55.self_attn.v_proj.bias": "model-00025-of-00029.safetensors",
631
+ "model.layers.55.self_attn.v_proj.weight": "model-00025-of-00029.safetensors",
632
+ "model.layers.56.input_layernorm.weight": "model-00026-of-00029.safetensors",
633
+ "model.layers.56.mlp.down_proj.weight": "model-00026-of-00029.safetensors",
634
+ "model.layers.56.mlp.gate_proj.weight": "model-00025-of-00029.safetensors",
635
+ "model.layers.56.mlp.up_proj.weight": "model-00025-of-00029.safetensors",
636
+ "model.layers.56.post_attention_layernorm.weight": "model-00026-of-00029.safetensors",
637
+ "model.layers.56.self_attn.k_proj.bias": "model-00025-of-00029.safetensors",
638
+ "model.layers.56.self_attn.k_proj.weight": "model-00025-of-00029.safetensors",
639
+ "model.layers.56.self_attn.o_proj.weight": "model-00025-of-00029.safetensors",
640
+ "model.layers.56.self_attn.q_proj.bias": "model-00025-of-00029.safetensors",
641
+ "model.layers.56.self_attn.q_proj.weight": "model-00025-of-00029.safetensors",
642
+ "model.layers.56.self_attn.v_proj.bias": "model-00025-of-00029.safetensors",
643
+ "model.layers.56.self_attn.v_proj.weight": "model-00025-of-00029.safetensors",
644
+ "model.layers.57.input_layernorm.weight": "model-00026-of-00029.safetensors",
645
+ "model.layers.57.mlp.down_proj.weight": "model-00026-of-00029.safetensors",
646
+ "model.layers.57.mlp.gate_proj.weight": "model-00026-of-00029.safetensors",
647
+ "model.layers.57.mlp.up_proj.weight": "model-00026-of-00029.safetensors",
648
+ "model.layers.57.post_attention_layernorm.weight": "model-00026-of-00029.safetensors",
649
+ "model.layers.57.self_attn.k_proj.bias": "model-00026-of-00029.safetensors",
650
+ "model.layers.57.self_attn.k_proj.weight": "model-00026-of-00029.safetensors",
651
+ "model.layers.57.self_attn.o_proj.weight": "model-00026-of-00029.safetensors",
652
+ "model.layers.57.self_attn.q_proj.bias": "model-00026-of-00029.safetensors",
653
+ "model.layers.57.self_attn.q_proj.weight": "model-00026-of-00029.safetensors",
654
+ "model.layers.57.self_attn.v_proj.bias": "model-00026-of-00029.safetensors",
655
+ "model.layers.57.self_attn.v_proj.weight": "model-00026-of-00029.safetensors",
656
+ "model.layers.58.input_layernorm.weight": "model-00026-of-00029.safetensors",
657
+ "model.layers.58.mlp.down_proj.weight": "model-00026-of-00029.safetensors",
658
+ "model.layers.58.mlp.gate_proj.weight": "model-00026-of-00029.safetensors",
659
+ "model.layers.58.mlp.up_proj.weight": "model-00026-of-00029.safetensors",
660
+ "model.layers.58.post_attention_layernorm.weight": "model-00026-of-00029.safetensors",
661
+ "model.layers.58.self_attn.k_proj.bias": "model-00026-of-00029.safetensors",
662
+ "model.layers.58.self_attn.k_proj.weight": "model-00026-of-00029.safetensors",
663
+ "model.layers.58.self_attn.o_proj.weight": "model-00026-of-00029.safetensors",
664
+ "model.layers.58.self_attn.q_proj.bias": "model-00026-of-00029.safetensors",
665
+ "model.layers.58.self_attn.q_proj.weight": "model-00026-of-00029.safetensors",
666
+ "model.layers.58.self_attn.v_proj.bias": "model-00026-of-00029.safetensors",
667
+ "model.layers.58.self_attn.v_proj.weight": "model-00026-of-00029.safetensors",
668
+ "model.layers.59.input_layernorm.weight": "model-00027-of-00029.safetensors",
669
+ "model.layers.59.mlp.down_proj.weight": "model-00027-of-00029.safetensors",
670
+ "model.layers.59.mlp.gate_proj.weight": "model-00027-of-00029.safetensors",
671
+ "model.layers.59.mlp.up_proj.weight": "model-00027-of-00029.safetensors",
672
+ "model.layers.59.post_attention_layernorm.weight": "model-00027-of-00029.safetensors",
673
+ "model.layers.59.self_attn.k_proj.bias": "model-00026-of-00029.safetensors",
674
+ "model.layers.59.self_attn.k_proj.weight": "model-00026-of-00029.safetensors",
675
+ "model.layers.59.self_attn.o_proj.weight": "model-00026-of-00029.safetensors",
676
+ "model.layers.59.self_attn.q_proj.bias": "model-00026-of-00029.safetensors",
677
+ "model.layers.59.self_attn.q_proj.weight": "model-00026-of-00029.safetensors",
678
+ "model.layers.59.self_attn.v_proj.bias": "model-00026-of-00029.safetensors",
679
+ "model.layers.59.self_attn.v_proj.weight": "model-00026-of-00029.safetensors",
680
+ "model.layers.6.input_layernorm.weight": "model-00004-of-00029.safetensors",
681
+ "model.layers.6.mlp.down_proj.weight": "model-00004-of-00029.safetensors",
682
+ "model.layers.6.mlp.gate_proj.weight": "model-00004-of-00029.safetensors",
683
+ "model.layers.6.mlp.up_proj.weight": "model-00004-of-00029.safetensors",
684
+ "model.layers.6.post_attention_layernorm.weight": "model-00004-of-00029.safetensors",
685
+ "model.layers.6.self_attn.k_proj.bias": "model-00004-of-00029.safetensors",
686
+ "model.layers.6.self_attn.k_proj.weight": "model-00004-of-00029.safetensors",
687
+ "model.layers.6.self_attn.o_proj.weight": "model-00004-of-00029.safetensors",
688
+ "model.layers.6.self_attn.q_proj.bias": "model-00004-of-00029.safetensors",
689
+ "model.layers.6.self_attn.q_proj.weight": "model-00004-of-00029.safetensors",
690
+ "model.layers.6.self_attn.v_proj.bias": "model-00004-of-00029.safetensors",
691
+ "model.layers.6.self_attn.v_proj.weight": "model-00004-of-00029.safetensors",
692
+ "model.layers.60.input_layernorm.weight": "model-00027-of-00029.safetensors",
693
+ "model.layers.60.mlp.down_proj.weight": "model-00027-of-00029.safetensors",
694
+ "model.layers.60.mlp.gate_proj.weight": "model-00027-of-00029.safetensors",
695
+ "model.layers.60.mlp.up_proj.weight": "model-00027-of-00029.safetensors",
696
+ "model.layers.60.post_attention_layernorm.weight": "model-00027-of-00029.safetensors",
697
+ "model.layers.60.self_attn.k_proj.bias": "model-00027-of-00029.safetensors",
698
+ "model.layers.60.self_attn.k_proj.weight": "model-00027-of-00029.safetensors",
699
+ "model.layers.60.self_attn.o_proj.weight": "model-00027-of-00029.safetensors",
700
+ "model.layers.60.self_attn.q_proj.bias": "model-00027-of-00029.safetensors",
701
+ "model.layers.60.self_attn.q_proj.weight": "model-00027-of-00029.safetensors",
702
+ "model.layers.60.self_attn.v_proj.bias": "model-00027-of-00029.safetensors",
703
+ "model.layers.60.self_attn.v_proj.weight": "model-00027-of-00029.safetensors",
704
+ "model.layers.61.input_layernorm.weight": "model-00028-of-00029.safetensors",
705
+ "model.layers.61.mlp.down_proj.weight": "model-00028-of-00029.safetensors",
706
+ "model.layers.61.mlp.gate_proj.weight": "model-00027-of-00029.safetensors",
707
+ "model.layers.61.mlp.up_proj.weight": "model-00028-of-00029.safetensors",
708
+ "model.layers.61.post_attention_layernorm.weight": "model-00028-of-00029.safetensors",
709
+ "model.layers.61.self_attn.k_proj.bias": "model-00027-of-00029.safetensors",
710
+ "model.layers.61.self_attn.k_proj.weight": "model-00027-of-00029.safetensors",
711
+ "model.layers.61.self_attn.o_proj.weight": "model-00027-of-00029.safetensors",
712
+ "model.layers.61.self_attn.q_proj.bias": "model-00027-of-00029.safetensors",
713
+ "model.layers.61.self_attn.q_proj.weight": "model-00027-of-00029.safetensors",
714
+ "model.layers.61.self_attn.v_proj.bias": "model-00027-of-00029.safetensors",
715
+ "model.layers.61.self_attn.v_proj.weight": "model-00027-of-00029.safetensors",
716
+ "model.layers.62.input_layernorm.weight": "model-00028-of-00029.safetensors",
717
+ "model.layers.62.mlp.down_proj.weight": "model-00028-of-00029.safetensors",
718
+ "model.layers.62.mlp.gate_proj.weight": "model-00028-of-00029.safetensors",
719
+ "model.layers.62.mlp.up_proj.weight": "model-00028-of-00029.safetensors",
720
+ "model.layers.62.post_attention_layernorm.weight": "model-00028-of-00029.safetensors",
721
+ "model.layers.62.self_attn.k_proj.bias": "model-00028-of-00029.safetensors",
722
+ "model.layers.62.self_attn.k_proj.weight": "model-00028-of-00029.safetensors",
723
+ "model.layers.62.self_attn.o_proj.weight": "model-00028-of-00029.safetensors",
724
+ "model.layers.62.self_attn.q_proj.bias": "model-00028-of-00029.safetensors",
725
+ "model.layers.62.self_attn.q_proj.weight": "model-00028-of-00029.safetensors",
726
+ "model.layers.62.self_attn.v_proj.bias": "model-00028-of-00029.safetensors",
727
+ "model.layers.62.self_attn.v_proj.weight": "model-00028-of-00029.safetensors",
728
+ "model.layers.63.input_layernorm.weight": "model-00029-of-00029.safetensors",
729
+ "model.layers.63.mlp.down_proj.weight": "model-00029-of-00029.safetensors",
730
+ "model.layers.63.mlp.gate_proj.weight": "model-00028-of-00029.safetensors",
731
+ "model.layers.63.mlp.up_proj.weight": "model-00028-of-00029.safetensors",
732
+ "model.layers.63.post_attention_layernorm.weight": "model-00029-of-00029.safetensors",
733
+ "model.layers.63.self_attn.k_proj.bias": "model-00028-of-00029.safetensors",
734
+ "model.layers.63.self_attn.k_proj.weight": "model-00028-of-00029.safetensors",
735
+ "model.layers.63.self_attn.o_proj.weight": "model-00028-of-00029.safetensors",
736
+ "model.layers.63.self_attn.q_proj.bias": "model-00028-of-00029.safetensors",
737
+ "model.layers.63.self_attn.q_proj.weight": "model-00028-of-00029.safetensors",
738
+ "model.layers.63.self_attn.v_proj.bias": "model-00028-of-00029.safetensors",
739
+ "model.layers.63.self_attn.v_proj.weight": "model-00028-of-00029.safetensors",
740
+ "model.layers.7.input_layernorm.weight": "model-00005-of-00029.safetensors",
741
+ "model.layers.7.mlp.down_proj.weight": "model-00005-of-00029.safetensors",
742
+ "model.layers.7.mlp.gate_proj.weight": "model-00004-of-00029.safetensors",
743
+ "model.layers.7.mlp.up_proj.weight": "model-00004-of-00029.safetensors",
744
+ "model.layers.7.post_attention_layernorm.weight": "model-00005-of-00029.safetensors",
745
+ "model.layers.7.self_attn.k_proj.bias": "model-00004-of-00029.safetensors",
746
+ "model.layers.7.self_attn.k_proj.weight": "model-00004-of-00029.safetensors",
747
+ "model.layers.7.self_attn.o_proj.weight": "model-00004-of-00029.safetensors",
748
+ "model.layers.7.self_attn.q_proj.bias": "model-00004-of-00029.safetensors",
749
+ "model.layers.7.self_attn.q_proj.weight": "model-00004-of-00029.safetensors",
750
+ "model.layers.7.self_attn.v_proj.bias": "model-00004-of-00029.safetensors",
751
+ "model.layers.7.self_attn.v_proj.weight": "model-00004-of-00029.safetensors",
752
+ "model.layers.8.input_layernorm.weight": "model-00005-of-00029.safetensors",
753
+ "model.layers.8.mlp.down_proj.weight": "model-00005-of-00029.safetensors",
754
+ "model.layers.8.mlp.gate_proj.weight": "model-00005-of-00029.safetensors",
755
+ "model.layers.8.mlp.up_proj.weight": "model-00005-of-00029.safetensors",
756
+ "model.layers.8.post_attention_layernorm.weight": "model-00005-of-00029.safetensors",
757
+ "model.layers.8.self_attn.k_proj.bias": "model-00005-of-00029.safetensors",
758
+ "model.layers.8.self_attn.k_proj.weight": "model-00005-of-00029.safetensors",
759
+ "model.layers.8.self_attn.o_proj.weight": "model-00005-of-00029.safetensors",
760
+ "model.layers.8.self_attn.q_proj.bias": "model-00005-of-00029.safetensors",
761
+ "model.layers.8.self_attn.q_proj.weight": "model-00005-of-00029.safetensors",
762
+ "model.layers.8.self_attn.v_proj.bias": "model-00005-of-00029.safetensors",
763
+ "model.layers.8.self_attn.v_proj.weight": "model-00005-of-00029.safetensors",
764
+ "model.layers.9.input_layernorm.weight": "model-00005-of-00029.safetensors",
765
+ "model.layers.9.mlp.down_proj.weight": "model-00005-of-00029.safetensors",
766
+ "model.layers.9.mlp.gate_proj.weight": "model-00005-of-00029.safetensors",
767
+ "model.layers.9.mlp.up_proj.weight": "model-00005-of-00029.safetensors",
768
+ "model.layers.9.post_attention_layernorm.weight": "model-00005-of-00029.safetensors",
769
+ "model.layers.9.self_attn.k_proj.bias": "model-00005-of-00029.safetensors",
770
+ "model.layers.9.self_attn.k_proj.weight": "model-00005-of-00029.safetensors",
771
+ "model.layers.9.self_attn.o_proj.weight": "model-00005-of-00029.safetensors",
772
+ "model.layers.9.self_attn.q_proj.bias": "model-00005-of-00029.safetensors",
773
+ "model.layers.9.self_attn.q_proj.weight": "model-00005-of-00029.safetensors",
774
+ "model.layers.9.self_attn.v_proj.bias": "model-00005-of-00029.safetensors",
775
+ "model.layers.9.self_attn.v_proj.weight": "model-00005-of-00029.safetensors",
776
+ "model.norm.weight": "model-00029-of-00029.safetensors"
777
+ }
778
+ }
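Note: the weight_map above is the standard sharded-safetensors index layout (model.safetensors.index.json): every parameter name maps to the one shard among the 29 model-*.safetensors files that stores it. As a minimal sketch of how a single tensor can be resolved through the index (the local directory name ./DRIVE-SFT and the use of the safetensors Python package are illustrative assumptions, not part of this commit):

import json
from safetensors import safe_open  # third-party package: safetensors

repo_dir = "./DRIVE-SFT"  # hypothetical local checkout of this repository

# The index maps every parameter name to the shard file that stores it.
with open(f"{repo_dir}/model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.29.mlp.down_proj.weight"
shard = index["weight_map"][name]  # -> "model-00014-of-00029.safetensors"

# Open only that shard and read the single tensor lazily.
with safe_open(f"{repo_dir}/{shard}", framework="pt") as shard_file:
    tensor = shard_file.get_tensor(name)

print(name, tuple(tensor.shape), "from", shard)

Loaders such as transformers' from_pretrained perform this shard resolution automatically, so the index normally only matters when inspecting or repacking weights by hand.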
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
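Note: the config above registers Qwen2Tokenizer with <|im_end|> as the end-of-sequence token, <|endoftext|> for padding, and a ChatML-style chat_template (<|im_start|>role ... <|im_end|>) that also handles tool calls. A brief usage sketch, again assuming a hypothetical local checkout ./DRIVE-SFT rather than anything stated in this commit:

from transformers import AutoTokenizer

# Hypothetical path; substitute the actual repo id or local directory.
tok = AutoTokenizer.from_pretrained("./DRIVE-SFT")

messages = [
    {"role": "system", "content": "You are a competitive-programming assistant."},
    {"role": "user", "content": "Reverse a string in O(n)."},
]

# The chat_template renders ChatML turns delimited by <|im_start|>/<|im_end|>;
# add_generation_prompt appends the opening of the assistant turn.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
print("eos:", tok.eos_token, "pad:", tok.pad_token)  # <|im_end|>, <|endoftext|>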
vocab.json ADDED
The diff for this file is too large to render. See raw diff