Hugging Face
Status: In a Training Loop · 175.0 TFLOPS
David Belton
PRO
DavidAU
3,236 followers · 22 following
David-AU-github
DISCORD: David_AU [drawless111]
AI & ML interests
Applications of single and multiple LLMs in specialized use cases and automation tasks. LLM, prompt, system-role, and parameter engineering via chat / API. 500+ LLMs graded.
Recent Activity
Replied to their post about 3 hours ago:
21 Qwen 3.5 fine-tunes (thinking and instruct); regular and uncensored (2B to 27B) exceed benchmarks and work better than the original models. All are benchmarked against the original model; many exceed all of its benchmarks. Claude, GLM, Gemini and other distills, with thinking AND dedicated instruct versions. Core goal: increase benchmarks and address long thinking blocks. Highlights: the 9B and 27B instruct "Claude" versions hit 624 and 675 on "ARC-C" (hard challenge); the thinking fine-tunes exceed original-model performance (in thinking mode), and in many cases there is a drastic reduction in thinking-block size. 9B Claude Heretic Uncensored, GGUF: Neo, Code Imatrix (dual imatrix), updated Jinja template, custom tensor enhancements. https://huggingface.co/DavidAU/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING-MAX-NEOCODE-Imatrix-GGUF COLLECTION [21 models]: https://huggingface.co/collections/DavidAU/qwen-35-08-2-4-9-27-35b-regular-uncensored
New activity about 3 hours ago on DavidAU/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT: "GGUF pls?"
Updated a model about 3 hours ago: DavidAU/Qwen3.5-21B-Claude-4.6-Opus-Thinking-EXP2
Organizations
DavidAU's Spaces (1)
Running
GGUF Model VRAM Calculator
Calculate VRAM requirements for LLM models
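As an illustration of the kind of estimate such a calculator performs (a minimal sketch, not the Space's actual formula; the function name, the bits-per-weight figure, and the overhead factor are assumptions), VRAM for a GGUF-quantized model can be roughly approximated from parameter count and average bits per weight:

```python
def estimate_vram_gb(num_params_b: float, bits_per_weight: float,
                     overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate for loading a GGUF-quantized model.

    num_params_b    -- parameter count in billions (e.g. 9 for a 9B model)
    bits_per_weight -- average bits per weight of the quant
                       (e.g. roughly 4.5-5 for a Q4_K_M-class quant; assumed)
    overhead_factor -- headroom for KV cache and runtime buffers (assumed)
    """
    weight_bytes = num_params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1024**3

# Example: a 9B model at ~4.5 bits per weight needs on the order of 5-6 GB
print(round(estimate_vram_gb(9, 4.5), 1))
```

Real calculators also account for context length (KV cache grows with it) and the runtime's own allocations, so treat this as a lower-bound sketch.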