---
frameworks:
  - Pytorch
license: Apache License 2.0
tags: []
tasks:
  - text-to-image-synthesis
---

Templates: Structure Control (FLUX.2-klein-base-4B)

This model is part of the Diffusion Templates series open-sourced by DiffSynth-Studio. It is a ControlNet-style control model that uses a reference image to precisely condition the spatial structure, object contours, and perspective of the generated image.
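FLUX-family VAEs downsample by a fixed factor, so the reference image's width and height generally need to be multiples of that factor. A minimal sketch of the rounding arithmetic (assuming a factor of 16, which is an assumption here, not something stated by this repository):

```python
def snap_to_multiple(size: int, factor: int = 16) -> int:
    """Round one image dimension to the nearest multiple of `factor`."""
    return max(factor, round(size / factor) * factor)

def snap_resolution(width: int, height: int, factor: int = 16) -> tuple[int, int]:
    """Snap (width, height) so both are valid for a VAE with this downsampling factor."""
    return snap_to_multiple(width, factor), snap_to_multiple(height, factor)

print(snap_resolution(1023, 769))  # -> (1024, 768)
```

In practice the pipeline may resize the reference image itself; this only illustrates why control images at arbitrary resolutions get snapped to a grid.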

Showcase

  • Condition Prompt: A cat is sitting on a stone, bathed in bright sunshine.
    Prompt: A cat is sitting on a stone, surrounded by colorful magical particles.
  • Condition Prompt: A lovely fox wearing a casual green shirt, sitting in a cafe bar, smiling gently, peaceful anime aesthetic.
    Prompt: A cute 3D rendered anthropomorphic fox character wearing a bright green shirt, sitting in a cozy magical tavern, smiling happily.
  • Condition Prompt: A photorealistic glass crystal ball containing a tiny, dreamy scene of a castle, a large tree, and a girl, soft warm lighting, detailed texture.
    Prompt: A cute 3D Pixar style scene inside a crystal ball, featuring a girl standing by a large tree with a castle in the background.

Inference Code

git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
  • Direct inference (requires 40 GB of VRAM)
from diffsynth.diffusion.template import TemplatePipeline
from diffsynth.pipelines.flux2_image import Flux2ImagePipeline, ModelConfig
import torch
from modelscope import dataset_snapshot_download
from PIL import Image

pipe = Flux2ImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-base-4B", origin_file_pattern="transformer/*.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="text_encoder/*.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="tokenizer/"),
)
template = TemplatePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[ModelConfig(model_id="DiffSynth-Studio/Template-KleinBase4B-ControlNet")],
)
dataset_snapshot_download(
    "DiffSynth-Studio/examples_in_diffsynth",
    allow_file_pattern=["templates/*"],
    local_dir="data/examples",
)
image = template(
    pipe,
    prompt="A cat is sitting on a stone, bathed in bright sunshine.",
    seed=0, cfg_scale=4, num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "A cat is sitting on a stone, bathed in bright sunshine.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "",
    }],
)
image.save("image_ControlNet_sunshine.jpg")
image = template(
    pipe,
    prompt="A cat is sitting on a stone, surrounded by colorful magical particles.",
    seed=0, cfg_scale=4, num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "A cat is sitting on a stone, surrounded by colorful magical particles.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "",
    }],
)
image.save("image_ControlNet_magic.jpg")
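The `cfg_scale=4` argument above controls classifier-free guidance: each denoising step extrapolates from the negative-prompt prediction toward the positive one. A simplified, framework-free sketch of that combination (not DiffSynth-Studio's actual implementation, which operates on latent tensors):

```python
def apply_cfg(positive: list[float], negative: list[float], cfg_scale: float) -> list[float]:
    """Classifier-free guidance: negative + scale * (positive - negative)."""
    return [n + cfg_scale * (p - n) for p, n in zip(positive, negative)]

# With cfg_scale=1 the result equals the positive prediction;
# larger scales push the output further away from the negative prediction.
print(apply_cfg([1.0, 2.0], [0.0, 1.0], 4.0))  # -> [4.0, 5.0]
```

This is why `negative_template_inputs` is passed with an empty prompt: it supplies the control image for the unconditional branch of the guidance pair.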
  • With lazy loading and VRAM management enabled (requires 24 GB of VRAM)
from diffsynth.diffusion.template import TemplatePipeline
from diffsynth.pipelines.flux2_image import Flux2ImagePipeline, ModelConfig
import torch
from modelscope import dataset_snapshot_download
from PIL import Image

vram_config = {
    "offload_dtype": "disk",
    "offload_device": "disk",
    "onload_dtype": torch.float8_e4m3fn,
    "onload_device": "cpu",
    "preparing_dtype": torch.float8_e4m3fn,
    "preparing_device": "cuda",
    "computation_dtype": torch.bfloat16,
    "computation_device": "cuda",
}
pipe = Flux2ImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-base-4B", origin_file_pattern="transformer/*.safetensors", **vram_config),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="text_encoder/*.safetensors", **vram_config),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="tokenizer/"),
    vram_limit=torch.cuda.mem_get_info("cuda")[1] / (1024 ** 3) - 0.5,
)
template = TemplatePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[ModelConfig(model_id="DiffSynth-Studio/Template-KleinBase4B-ControlNet")],
    lazy_loading=True,
)
dataset_snapshot_download(
    "DiffSynth-Studio/examples_in_diffsynth",
    allow_file_pattern=["templates/*"],
    local_dir="data/examples",
)
image = template(
    pipe,
    prompt="A cat is sitting on a stone, bathed in bright sunshine.",
    seed=0, cfg_scale=4, num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "A cat is sitting on a stone, bathed in bright sunshine.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "",
    }],
)
image.save("image_ControlNet_sunshine.jpg")
image = template(
    pipe,
    prompt="A cat is sitting on a stone, surrounded by colorful magical particles.",
    seed=0, cfg_scale=4, num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "A cat is sitting on a stone, surrounded by colorful magical particles.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_depth.jpg"),
        "prompt": "",
    }],
)
image.save("image_ControlNet_magic.jpg")
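The `float8_e4m3fn` onload dtype in `vram_config` roughly halves resident weight memory versus bfloat16, which is where most of the 40 GB → 24 GB saving comes from. Back-of-the-envelope arithmetic for a 4B-parameter transformer (a rough estimate only; real usage also includes activations, the text encoder, and the VAE):

```python
def weight_gib(num_params: float, bytes_per_param: int) -> float:
    """Approximate weight storage in GiB for a given parameter dtype."""
    return num_params * bytes_per_param / 1024**3

params = 4e9  # the "4B" in the model name
bf16 = weight_gib(params, 2)  # bfloat16: 2 bytes per parameter
fp8 = weight_gib(params, 1)   # float8_e4m3fn: 1 byte per parameter
print(f"bf16 ~{bf16:.1f} GiB, fp8 ~{fp8:.1f} GiB")
```

Weights are upcast back to bfloat16 (`computation_dtype`) on the fly for each layer's forward pass, trading extra casts for the lower resident footprint.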

Training Code

After installing DiffSynth-Studio, you can start training with the following script. For more information, see the DiffSynth-Studio documentation.

modelscope download --dataset DiffSynth-Studio/diffsynth_example_dataset --include "flux2/Template-KleinBase4B-ControlNet/*" --local_dir ./data/diffsynth_example_dataset

accelerate launch examples/flux2/model_training/train.py \
  --dataset_base_path data/diffsynth_example_dataset/flux2/Template-KleinBase4B-ControlNet \
  --dataset_metadata_path data/diffsynth_example_dataset/flux2/Template-KleinBase4B-ControlNet/metadata.jsonl \
  --extra_inputs "template_inputs" \
  --max_pixels 1048576 \
  --dataset_repeat 50 \
  --model_id_with_origin_paths "black-forest-labs/FLUX.2-klein-4B:text_encoder/*.safetensors,black-forest-labs/FLUX.2-klein-base-4B:transformer/*.safetensors,black-forest-labs/FLUX.2-klein-4B:vae/diffusion_pytorch_model.safetensors" \
  --template_model_id_or_path "DiffSynth-Studio/Template-KleinBase4B-ControlNet:" \
  --tokenizer_path "black-forest-labs/FLUX.2-klein-4B:tokenizer/" \
  --learning_rate 1e-4 \
  --num_epochs 2 \
  --remove_prefix_in_ckpt "pipe.template_model." \
  --output_path "./models/train/Template-KleinBase4B-ControlNet_full" \
  --trainable_models "template_model" \
  --use_gradient_checkpointing \
  --find_unused_parameters
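The `--dataset_metadata_path` flag points at a JSONL file with one training sample per line. The authoritative schema is whatever the downloaded example dataset's `metadata.jsonl` contains; the snippet below only sketches the generic JSONL read/write pattern, with field names that are illustrative, not the official schema:

```python
import json

# Hypothetical records: the field names here are illustrative only.
records = [
    {"image": "images/0.jpg", "prompt": "A cat is sitting on a stone.",
     "template_image": "depth/0.jpg", "template_prompt": "A cat is sitting on a stone."},
]

# Write: one JSON object per line.
with open("metadata.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Read it back line by line.
with open("metadata.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded), loaded[0]["prompt"])
```

Inspect the example dataset's `metadata.jsonl` directly to see the exact keys expected by `--extra_inputs "template_inputs"`.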