Unconditional Image Generation

Tags: Diffusers, Safetensors, English, bitdance, imagenet, class-conditional, custom-pipeline
Instructions to use BiliSakura/BitDance-ImageNet-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use BiliSakura/BitDance-ImageNet-diffusers with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "BiliSakura/BitDance-ImageNet-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
```python
from __future__ import annotations

from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.models.modeling_utils import ModelMixin


class BitDanceImageNetAutoencoder(ModelMixin, ConfigMixin):
    @register_to_config
    def __init__(self, ddconfig=None, num_codebooks: int = 4, **kwargs):
        super().__init__()
        self.ddconfig = ddconfig
        self.num_codebooks = num_codebooks

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: str, *args, **kwargs):
        # Stub: ignores the checkpoint path and returns a freshly
        # initialized model with the default config.
        del pretrained_model_name_or_path, args, kwargs
        return cls()
```
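The autoencoder above relies on diffusers' `ConfigMixin`/`register_to_config` machinery to record its `__init__` arguments so the model can later be rebuilt from a saved config. The sketch below is a hypothetical, dependency-free re-implementation of that pattern (it is not the actual diffusers code; the names `register_to_config_sketch` and `TinyAutoencoder` are illustrative):

```python
# Minimal sketch of the config-registration pattern: a decorator that
# captures __init__'s arguments (including defaults) into a config dict.
import functools
import inspect


def register_to_config_sketch(init):
    @functools.wraps(init)
    def wrapper(self, *args, **kwargs):
        # Bind the call to __init__'s signature so defaults are captured too.
        bound = inspect.signature(init).bind(self, *args, **kwargs)
        bound.apply_defaults()
        config = dict(bound.arguments)
        config.pop("self", None)
        self._internal_config = config
        init(self, *args, **kwargs)

    return wrapper


class TinyAutoencoder:
    @register_to_config_sketch
    def __init__(self, ddconfig=None, num_codebooks: int = 4):
        self.ddconfig = ddconfig
        self.num_codebooks = num_codebooks

    @property
    def config(self):
        return dict(self._internal_config)


model = TinyAutoencoder(num_codebooks=8)
print(model.config)  # {'ddconfig': None, 'num_codebooks': 8}
```

Storing the bound arguments rather than raw `kwargs` is what lets a default like `num_codebooks=4` survive a save/load round trip even when the caller never passed it explicitly.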