---
base_model:
- Qwen/Qwen3-4B-Instruct
pipeline_tag: text-generation
tags:
- code
- coding
- instruct
---

# Hugston Qwen3 Coder 4B Instruct

`Trilogix1/Hugston_code-rl-Qwen3-4B-Instruct-2507-SFT-30b`

---

# Original weights at: https://huggingface.co/code-rl/Qwen3-4B-Instruct-2507-SFT-30b

This model is a converted and quantized version of the original, created by the Hugston Team with Quanta (see the GitHub repository to learn more about it).
It is a real, working proof of concept of how to convert a .safetensors LLM model to GGUF and quantize it.
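
Quanta automates this conversion. Purely as background, a minimal sketch of the generic llama.cpp workflow it corresponds to is shown below; the paths, output filenames, and the Q4_K_M quantization type are illustrative assumptions, not the exact settings used for this release.

```python
# Hypothetical sketch of a safetensors -> GGUF conversion and quantization,
# using llama.cpp's converter script and llama-quantize binary.
# Paths, filenames, and the Q4_K_M type are placeholders, not Quanta's settings.
import subprocess

hf_model_dir = "Qwen3-4B-Instruct-2507-SFT-30b"     # folder with *.safetensors + config.json
f16_gguf = "qwen3-4b-instruct-2507-sft-f16.gguf"    # unquantized intermediate GGUF
quant_gguf = "qwen3-4b-instruct-2507-sft-Q4_K_M.gguf"

# 1) Convert the Hugging Face checkpoint to an unquantized GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", hf_model_dir,
     "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2) Quantize the GGUF down to 4-bit K-quants.
subprocess.run(
    ["./llama-quantize", f16_gguf, quant_gguf, "Q4_K_M"],
    check=True,
)
```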

Quantization was performed using an automated, faster method, which shortens the conversion process.

This model was made possible by https://Hugston.com.

You can use the model with HugstonOne Enterprise Edition.

Tested on general and coding tasks: loaded with a 262,000-token context and fed 150 KB of code as input, it returned about 230 KB of code (roughly 60,000 tokens) in a single response.
The output contained 5 errors, so it is certainly not zero-shot for long coding tasks; it gets there within 2-3 tries,
which is very impressive for its size, especially considering that it is an instruct model.
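
HugstonOne loads the model through its CLI/Server interface. Purely as an illustration of the long-context test described above, a hedged llama-cpp-python sketch (the filename and generation settings are placeholder assumptions, not HugstonOne's internals) could look like this:

```python
# Hypothetical sketch: loading the quantized GGUF with llama-cpp-python and the
# large context used in the test above. Filename and settings are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-4b-instruct-2507-sft-Q4_K_M.gguf",  # placeholder filename
    n_ctx=262000,       # context size used in the test above
    n_gpu_layers=-1,    # offload all layers to GPU if available
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Refactor this module and fix any bugs:\n<paste code here>"},
    ],
    max_tokens=60000,   # allow a long, single-shot code answer
)
print(response["choices"][0]["message"]["content"])
```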

---

Watch HugstonOne coding and preview in action:

https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci

## Usage
+
-Download App HugstonOne at Hugston.com or at https://github.com/Mainframework
|
| 52 |
+
---
|
| 53 |
+
-Download model from https://hugston.com/explore?folder=llm_models or Huggingface
|
| 54 |
+
---
|
| 55 |
+
-If you already have the Llm Model downloaded chose it by clicking pick model in HugstonOne -Then click Load model in Cli or Server
|
| 56 |
+
|
| 57 |
+
---
|
| 58 |
+
|
| 59 |
+
-For multimodal use you need a VL/multimodal LLM model with the Mmproj file in the same folder. -Select model and select mmproj.
|
| 60 |
+
|
| 61 |
+
---
|
| 62 |
+
|
| 63 |
+
-Note: if the mmproj is inside the same folder with other models non multimodal, the non model will not load unless the mmproj is moved from folder.
|
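
HugstonOne pairs the model and the mmproj automatically once both are selected. For background only, the same mmproj convention is used by llama.cpp-based loaders; the hedged llama-cpp-python sketch below uses a LLaVA-style multimodal GGUF with placeholder filenames (this coder model itself is not multimodal):

```python
# Hypothetical sketch of the mmproj convention with llama-cpp-python:
# a multimodal GGUF plus its mmproj (vision projector) file loaded together.
# Filenames and the image URL are placeholders; the chat handler must match
# the model family of the multimodal GGUF you actually use.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")

llm = Llama(
    model_path="llava-v1.5-7b-Q4_K_M.gguf",  # placeholder multimodal model
    chat_handler=chat_handler,
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text", "text": "Describe this screenshot."},
        ],
    }],
)
print(response["choices"][0]["message"]["content"])
```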