Model Overview

Silan10/flux_quantized_bitsandbytes is an 8-bit quantized version of the black-forest-labs/FLUX.1-dev text-to-image model. Its transformer, text_encoder, and text_encoder_2 components have been quantized to 8-bit precision with bitsandbytes.

bitsandbytes 8-bit quantization stores linear-layer weights as 8-bit integers with per-vector scaling factors and keeps outlier features in higher precision (LLM.int8()-style mixed-precision computation). This roughly halves the memory footprint of the quantized components compared to their bfloat16 originals while keeping image quality close to the full-precision model.
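
For reference, the sketch below shows one way components like these can be quantized to 8-bit with the bitsandbytes integrations in diffusers and transformers. It is illustrative only: the model names match the base repository, but the exact script and settings used to produce this checkpoint may differ.

import torch
from diffusers import FluxTransformer2DModel, BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import CLIPTextModel, T5EncoderModel, BitsAndBytesConfig as TransformersBitsAndBytesConfig

base_model = "black-forest-labs/FLUX.1-dev"

# 8-bit weight quantization for the FLUX transformer (diffusers config).
transformer = FluxTransformer2DModel.from_pretrained(
    base_model,
    subfolder="transformer",
    quantization_config=DiffusersBitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
)

# The two text encoders come from transformers, so they use the
# transformers BitsAndBytesConfig instead of the diffusers one.
text_encoder = CLIPTextModel.from_pretrained(
    base_model,
    subfolder="text_encoder",
    quantization_config=TransformersBitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    base_model,
    subfolder="text_encoder_2",
    quantization_config=TransformersBitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
)

A pipeline assembled from these quantized modules can then be saved and shared like any other diffusers pipeline.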

Usage

import torch
from diffusers import FluxPipeline

model_path = "Silan10/flux_quantized_bitsandbytes"

print("Loading pipeline...")

# Loading the 8-bit quantized weights requires bitsandbytes (and accelerate)
# to be installed alongside diffusers.
pipe = FluxPipeline.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
print("✓ Pipeline loaded successfully.")

prompt = "Ultra-detailed nighttime cyberpunk city street, several pedestrians in modern clothes, one person in the foreground looking toward the camera, sharp facial features and detailed hair, wet pavement reflecting colorful neon signs, shop windows with small readable text on signs, a gradient sky fading from deep blue to purple, a mix of strong highlights and deep shadows, highly detailed, 4K, cinematic lighting."
print("Generating image...")

# Generate a 1024x1024 image; a fixed CPU-side seed keeps the result reproducible.
image = pipe(
    prompt,
    num_inference_steps=20,
    guidance_scale=3.5,
    max_sequence_length=512,
    width=1024,
    height=1024,
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]

image.save("output_bitsandbytes.png")
print("✓ Image generated successfully.")
print("DONE!")

Credits

Based on black-forest-labs/FLUX.1-dev by Black Forest Labs; this repository only adds 8-bit bitsandbytes quantization of its components.