1440GB of VRAM is incredibly satisfying 😁
Run GLM-4.7-Flash locally on your device with 24GB RAM! 🔥 It's the best-performing 30B model on SWE-Bench and GPQA. With 200K context, it excels at coding, agents, chat & reasoning.

GGUF: unsloth/GLM-4.7-Flash-GGUF
Guide: https://unsloth.ai/docs/models/glm-4.7-flash