Apertus-8B-Instruct-2509-FP8

Premium FP8 quantization with 2,048-sample calibration across 4 diverse datasets

This is a premium FP8 quantized version of swiss-ai/Apertus-8B-Instruct-2509 featuring rigorous multi-dataset calibration for production-grade reliability. Quantized by TevunahAi on enterprise-grade hardware.

🎯 Recommended Usage: vLLM

For optimal performance with full FP8 benefits and premium calibration quality, use vLLM or TensorRT-LLM:

Quick Start with vLLM

pip install vllm

Python API:

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# vLLM auto-detects FP8 from model config
llm = LLM(model="TevunahAi/Apertus-8B-Instruct-2509-FP8", dtype="auto")

# Prepare prompt with chat template
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/Apertus-8B-Instruct-2509-FP8")
messages = [{"role": "user", "content": "Explain quantum computing"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate([prompt], sampling_params)

for output in outputs:
    print(output.outputs[0].text)

OpenAI-Compatible API Server:

vllm serve TevunahAi/Apertus-8B-Instruct-2509-FP8 \
    --dtype auto \
    --max-model-len 8192

Then use with OpenAI client:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # dummy key
)

response = client.chat.completions.create(
    model="TevunahAi/Apertus-8B-Instruct-2509-FP8",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=512,
)

print(response.choices[0].message.content)

vLLM Benefits

  • Weights, activations, and KV cache in FP8
  • ~8GB VRAM (50% reduction vs BF16)
  • Native FP8 tensor core acceleration on Ada/Hopper GPUs
  • Runs on consumer GPUs (RTX 4070, RTX 3080+)
  • Premium 2048-sample calibration for production reliability
  • Swiss precision meets TevunahAi quality

⚙️ Alternative: Transformers

This model can also be loaded with transformers. Note: Transformers will decompress FP8 → BF16 during inference. However, at 8B parameters, this is manageable (~16GB VRAM).

Transformers Example (Click to expand)
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loads FP8 weights but decompresses to BF16 during compute
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/Apertus-8B-Instruct-2509-FP8",
    device_map="auto",
    torch_dtype="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/Apertus-8B-Instruct-2509-FP8")

# Generate
messages = [{"role": "user", "content": "Explain quantum computing"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Requirements:

pip install torch>=2.1.0 transformers>=4.40.0 accelerate compressed-tensors

System Requirements:

  • ~16GB VRAM (decompressed to BF16)
  • CUDA 11.8 or newer
  • PyTorch 2.1+ with CUDA support

📊 Model Details

Property Value
Base Model swiss-ai/Apertus-8B-Instruct-2509
Architecture Dense (8B parameters)
Quantization Method FP8 E4M3 weight-only
Framework llm-compressor + compressed_tensors
Calibration Samples 2,048 (4-8x industry standard)
Calibration Datasets 4 diverse sources
Storage Size ~8GB
VRAM (vLLM) ~8GB
VRAM (Transformers) ~16GB (decompressed to BF16)
Target Hardware NVIDIA RTX 3080, RTX 4070, RTX 5000 Ada
Quantization Time 58.2 minutes

🏆 Premium Calibration

This model was quantized using TevunahAi's premium multi-dataset calibration process:

Calibration Details

  • Total Samples: 2,048 (4-8x industry standard)
  • Datasets Used: 4 complementary sources
  • Coverage: Comprehensive across all use cases
Dataset Samples Purpose
Open-Platypus 512 STEM reasoning and logic
UltraChat-200k 512 Natural conversations
OpenHermes-2.5 512 Instruction following
SlimOrca 512 Diverse general tasks

Why Premium Calibration?

Most FP8 quantizations use 128-512 samples from a single dataset. TevunahAi uses 2,048 samples across 4 diverse datasets, ensuring:

  • Superior robustness across task types
  • Better statistical coverage for quantization scales
  • Minimal quality loss compared to FP16
  • Production-grade reliability
  • Consistent performance on edge cases

When quality matters, choose TevunahAi premium calibration quantizations.

🔧 Why FP8?

With vLLM/TensorRT-LLM:

  • 50% memory reduction vs BF16 (weights + activations + KV cache)
  • Faster inference via native FP8 tensor cores
  • Better throughput with optimized kernels
  • Minimal quality loss with premium 2048-sample calibration
  • Accessible on consumer GPUs (RTX 3080+, RTX 4070+)

With Transformers:

  • Smaller download size (~8GB vs ~16GB BF16)
  • Compatible with standard transformers workflow
  • ⚠️ Decompresses to BF16 during inference (no runtime memory benefit)

For production inference, use vLLM to realize the full FP8 benefits.

💾 Model Files

This model is stored as safetensors files (all required for inference). The compressed format enables efficient storage and faster downloads.

🌟 About Apertus

Apertus-8B by Swiss AI is a high-quality 8B parameter instruction-tuned model known for:

  • Strong reasoning capabilities
  • Multilingual support
  • Efficient architecture for fast iteration
  • Swiss precision in model design
  • Apache 2.0 license for commercial use

🚀 Apertus Model Family

Swiss AI's Apertus family represents precision-engineered instruction-following models:

Model Parameters VRAM (vLLM) Quantization Time Use Case
Apertus-8B-FP8 (this) 8B ~8GB 58 min Efficient reasoning, consumer-friendly
Apertus-70B-2048-FP8 70B ~70GB 7.8 hours Flagship performance, production

8B Benefits:

  • Fast inference on consumer GPUs
  • Excellent quality-per-watt efficiency
  • Swiss engineering meets TevunahAi quantization
  • Accessible deployment for most users

🔬 Quantization Infrastructure

Professional hardware for premium calibration:

  • CPUs: Dual Intel Xeon Max 9480 (224 threads, 128GB HBM2e @ 2000 GB/s)
  • Memory: 256GB DDR5-4800 (16 DIMMs, 8-channel per socket, ~614 GB/s)
  • Total Memory Bandwidth: ~2,614 GB/s aggregate
  • GPU: NVIDIA RTX 5000 Ada Generation (32GB VRAM, native FP8 support)
  • Software: Ubuntu 25.10 | Python 3.12 | PyTorch 2.8 | CUDA 13.0 | llm-compressor

Why This Matters:

  • 58 minutes of rigorous quantization and validation
  • 2,048-sample calibration requires significant computational resources
  • Professional infrastructure enables quality impossible on consumer setups

📚 Original Model

This quantization is based on swiss-ai/Apertus-8B-Instruct-2509 by Swiss AI.

For comprehensive information about:

  • Model architecture and training methodology
  • Language capabilities and evaluation
  • Ethical considerations
  • Usage guidelines

Please refer to the original model card.

🔧 Hardware Requirements

Minimum (vLLM):

  • GPU: NVIDIA RTX 3080 (10GB) or better
  • VRAM: 8GB minimum, 10GB+ recommended
  • CUDA: 11.8 or newer

Recommended (vLLM):

  • GPU: NVIDIA RTX 4070 / 4090 / RTX 5000 Ada
  • VRAM: 12GB+
  • CUDA: 12.0+

Transformers:

  • GPU: Any CUDA-capable GPU
  • VRAM: 16GB+
  • Works but not optimal for performance

📖 Additional Resources

📄 License

This model inherits the Apache 2.0 License from the original Apertus model.

🙏 Acknowledgments

  • Original Model: Swiss AI team
  • Quantization Framework: Neural Magic's llm-compressor
  • Quantized by: TevunahAi

📝 Citation

If you use Apertus, please cite the original work:

@misc{apertus2025,
  title={Apertus: Swiss Precision in Large Language Models},
  author={Swiss AI},
  year={2025},
  url={https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509}
}

🌟 Why TevunahAi Premium Calibration FP8?

Uncompromising Quality

Aspect Standard FP8 TevunahAi Premium FP8
Calibration Samples 128-512 2,048
Datasets Single 4 diverse
Calibration Time Minutes 58 minutes
Quality Validation Basic Rigorous
Edge Case Handling Adequate Superior
Production Ready Maybe Absolutely
Infrastructure Consumer/Prosumer Enterprise-grade

Professional Infrastructure

  • 2.6 TB/s aggregate memory bandwidth
  • 2,048 samples across 4 complementary datasets
  • Quality-first approach over speed
  • Enterprise-ready results

TevunahAi: The gold standard for FP8 quantizations.


Professional AI Model Quantization by TevunahAi

Premium multi-dataset calibration on enterprise-grade infrastructure

View all models | Contact for custom quantization

Downloads last month
16
Safetensors
Model size
8B params
Tensor type
BF16
·
F8_E4M3
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for TevunahAi/Apertus-8B-Instruct-2509-FP8

Quantized
(31)
this model

Collection including TevunahAi/Apertus-8B-Instruct-2509-FP8