Joe (Joe57005)

AI & ML interests: None yet

Recent Activity
- updated a collection 3 days ago: Models to try
- updated a collection 21 days ago: Models to try
- updated a collection about 1 month ago: Models to try

Organizations: None yet
For MOE 1.5B

Models to try

For finetune
- glaiveai/glaive-function-calling-v2 (dataset: Viewer • Updated • 113k • 45.2k • 503)
- Chat Template Editor 💬 (running Space, 17 likes): view, edit, test and submit chat templates
- GGUF Editor 🏢 (running Space, 100 likes): edit GGUF model metadata from Hugging Face or local files
- 0xSero/glm47-reap-calibration-v2 (dataset: Viewer • Updated • 1.36k • 58 • 3)
Good for home automation
Large-context LLMs that work well with Home Assistant via a llama.cpp server running on CPU with 16 GB of RAM.
- inclusionAI/Ling-mini-2.0 (Text Generation • 16B • Updated • 30.6k • 193)
- Orion-zhen/Qwen3-30B-A3B-Instruct-2507-IQK-GGUF (31B • Updated • 87 • 1)
- Intel/Qwen3-30B-A3B-Instruct-2507-gguf-q2ks-mixed-AutoRound (31B • Updated • 80 • 26)
- Tiiny/SmallThinker-21BA3B-Instruct (Text Generation • 22B • Updated • 86 • 111)
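As a rough sketch of the setup the collection description refers to, a GGUF model like those above can be served to Home Assistant with llama.cpp's `llama-server`, which exposes an OpenAI-compatible HTTP endpoint. The model filename, context size, and thread count below are placeholders to adjust for your hardware, not values taken from this list:

```shell
# Hypothetical launch command: adjust the model path, context window (-c),
# and thread count (-t) to your machine. A ~2-bit quant of a 30B MoE model
# with a reduced context is a plausible fit for 16 GB of RAM on CPU.
llama-server \
  -m ./Qwen3-30B-A3B-Instruct-2507-q2ks.gguf \
  -c 32768 \
  -t 8 \
  --host 0.0.0.0 \
  --port 8080
```

Home Assistant's OpenAI-compatible conversation integrations can then be pointed at `http://<server-ip>:8080/v1`.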
LLM Tools
- GGUF Editor 🏢 (running Space, 100 likes): edit GGUF model metadata from Hugging Face or local files
- mergekit-gui 🔀 (featured Space, runtime error, 290 likes): merge AI models using a YAML configuration file
- GGUF My Repo 🦙 (Space running on an A10G, 1.95k likes): quantize Hugging Face models to GGUF and publish the repo
- SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs (Paper • 2512.04746 • Published • 14)
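For context on the mergekit-gui entry above: the YAML configuration it consumes is a small file describing which checkpoints to merge and how. The sketch below shows the general shape of a SLERP merge; the model names, layer ranges, and interpolation weight are placeholders, not models from this page:

```yaml
# Hypothetical SLERP merge of two same-architecture checkpoints.
# Model names are placeholders; layer_range must match the models' depth.
slices:
  - sources:
      - model: org-a/base-model-7b
        layer_range: [0, 32]
      - model: org-b/finetuned-model-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: org-a/base-model-7b
parameters:
  t: 0.5          # interpolation factor: 0 = base model, 1 = the other model
dtype: bfloat16
```

Other merge methods (e.g. linear, ties, dare) follow the same overall schema with method-specific parameters.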