Lamapi committed on
Commit 7a1a096 · verified · 1 Parent(s): bcb1a91

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -80,7 +80,7 @@ library_name: transformers
 
 ## 📖 Overview
 
-**Next-Codex 30B** is a high-performance, specialized **Mixture-of-Experts (MoE)** Large Language Model designed specifically for code generation, debugging, and software engineering tasks.
+**Next-Codex** is a high-performance, specialized **Mixture-of-Experts (MoE)** Large Language Model designed specifically for code generation, debugging, and software engineering tasks.
 
 Unlike traditional dense models, **Next-Codex** utilizes a sparse architecture with **30 Billion total parameters**, but only activates **3 Billion parameters per token**. This unique design allows it to deliver the deep reasoning capabilities of a massive model while maintaining the ultra-low latency and inference cost of a lightweight 3B model. It is fine-tuned on a massive corpus of code across 20+ programming languages, making it the most efficient coding assistant in its class.
 
@@ -99,7 +99,7 @@ Unlike traditional dense models, **Next-Codex** utilizes a sparse architecture w
 
 ## 📊 Benchmark Performance (Coding & Logic)
 
-**Next-Coder 30B** achieves state-of-the-art results among open-weights coding models, balancing extreme efficiency with high accuracy.
+**Next-Codex** achieves state-of-the-art results among open-weights coding models, balancing extreme efficiency with high accuracy.
 
 Benchmarks are being conducted...
 ---
@@ -194,6 +194,6 @@ Licensed under the **MIT License** — free for commercial and non-commercial us
 
 ---
 
-> **Next-Coder 30B** — Smart as a giant, fast as a lightweight. The future of coding is MoE.
+> **Next-Codex** — Smart as a giant, fast as a lightweight. The future of coding is MoE.
 
 [![Follow on HuggingFace](https://img.shields.io/badge/Follow-HuggingFace-yellow?logo=huggingface)](https://huggingface.co/Lamapi)
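
The overview in the diff above describes a sparse Mixture-of-Experts design: 30 billion total parameters with only about 3 billion active per token. As a rough, hypothetical illustration of that mechanism (plain PyTorch with standard top-k routing; not Next-Codex's actual implementation, and all class and parameter names below are made up for the sketch):

```python
# Hypothetical sketch of top-k Mixture-of-Experts routing (not Next-Codex's code).
# Each token is scored by a router and dispatched to only k of n experts,
# so most expert parameters stay idle for any given token.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); keep only the k highest-scoring experts per token
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in idx[:, slot].unique():          # run each selected expert once
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

# With 8 experts and k=2, only ~1/4 of the expert parameters run for any token,
# mirroring the "30B total / ~3B active" idea described in the overview.
layer = TopKMoE(d_model=64)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

The design point the sketch illustrates is the one the README claims for the model: routing lets total capacity grow with the number of experts while per-token compute stays close to that of a small dense model.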