# microsoft/vq-diffusion-ithq

A text-to-image model served through the `VQDiffusionPipeline` in the Diffusers library.

## How to use with the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # switch to "mps" for Apple devices

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

## VQ Diffusion

VQ Diffusion generates images from text by running a discrete diffusion process over the token space of a VQ-VAE, conditioned on the text prompt.

```python
#!pip install diffusers[torch] transformers
import torch
from diffusers import VQDiffusionPipeline

pipeline = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

# truncation_rate=1.0 (the default) disables truncation of the token distribution
output = pipeline("teddy bear playing in the pool", truncation_rate=1.0)

image = output.images[0]
image.save("./teddy_bear.png")
```
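The `truncation_rate` argument controls a nucleus-style truncation of the predicted codebook-token distribution at each denoising step: only the most probable tokens whose cumulative probability stays below the rate are kept before sampling, and `1.0` keeps everything. A minimal, self-contained sketch of the idea on a single distribution (`truncate_probs` is a hypothetical helper, not the pipeline's internal code):

```python
import torch

def truncate_probs(probs: torch.Tensor, truncation_rate: float) -> torch.Tensor:
    """Keep the most probable tokens whose cumulative mass stays below
    truncation_rate, zero out the rest, and renormalize.
    The single most likely token is always kept."""
    sorted_probs, sorted_idx = torch.sort(probs, dim=-1, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # a token survives if the mass of all strictly-more-likely tokens
    # is still below the truncation rate
    keep = cumulative - sorted_probs < truncation_rate
    kept_sorted = torch.where(keep, sorted_probs, torch.zeros_like(sorted_probs))
    # scatter the surviving probabilities back to their original positions
    out = torch.zeros_like(probs)
    out.scatter_(-1, sorted_idx, kept_sorted)
    return out / out.sum(dim=-1, keepdim=True)

probs = torch.tensor([0.5, 0.3, 0.15, 0.05])
print(truncate_probs(probs, 0.8))  # keeps the 0.5 and 0.3 tokens, renormalized
print(truncate_probs(probs, 1.0))  # unchanged: no truncation
```

Lower rates concentrate sampling on high-confidence tokens (often sharper but less diverse images); `1.0` samples from the full distribution.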


Contribution: This model was contributed by williamberman in VQ-diffusion.


Paper: [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822)