Latent Consistency Model Multistep Scheduler

Overview

A multistep and one-step scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.

This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps.
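For example, a minimal text-to-image sketch might look like the following. The SimianLuo/LCM_Dreamshaper_v7 checkpoint is used here only as an illustrative LCM checkpoint whose default scheduler is an LCMScheduler; the step count and guidance scale are arbitrary choices, not prescribed values.

```python
import torch
from diffusers import DiffusionPipeline

# Illustrative LCM checkpoint; any pipeline distilled for latent consistency works similarly.
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
pipe.to("cuda")  # assumes a CUDA GPU is available

# LCM checkpoints are distilled for very few denoising steps; 4 is a common choice.
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
```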
LCMScheduler

class diffusers.LCMScheduler < source >
( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False )

Parameters

num_train_timesteps (int, defaults to 1000):
The number of diffusion steps to train the model.
beta_start (float, defaults to 0.00085):
The starting beta value of inference.
beta_end (float, defaults to 0.012):
The final beta value.
beta_schedule (str, defaults to "scaled_linear"):
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional):
Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
original_inference_steps (int, optional, defaults to 50):
The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule.
clip_sample (bool, defaults to False):
Clip the predicted sample for numerical stability.
clip_sample_range (float, defaults to 1.0):
The maximum magnitude for sample clipping. Valid only when clip_sample=True.
set_alpha_to_one (bool, defaults to True):
Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, otherwise it uses the alpha value at step 0.
steps_offset (int, defaults to 0):
An offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, like in Stable Diffusion.
prediction_type (str, defaults to "epsilon", optional):
Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of the Imagen Video paper).
thresholding (bool, defaults to False):
Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such as Stable Diffusion.
dynamic_thresholding_ratio (float, defaults to 0.995):
The ratio for the dynamic thresholding method. Valid only when thresholding=True.
sample_max_value (float, defaults to 1.0):
The threshold value for dynamic thresholding. Valid only when thresholding=True.
timestep_spacing (str, defaults to "leading"):
The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and Sample Steps are Flawed paper for more information.
timestep_scaling (float, defaults to 10.0):
The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions c_skip and c_out. Increasing this will decrease the approximation error (although the approximation error at the default of 10.0 is already pretty small). See the sketch after this parameter list.
rescale_betas_zero_snr (bool, defaults to False):
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and dark samples instead of limiting it to samples with medium brightness. Loosely related to --offset_noise.
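For reference, here is a sketch of how the boundary-condition scalings c_skip and c_out are typically computed for latent consistency models, assuming the commonly used sigma_data = 0.5; the scheduler's exact internals may differ in details.

```python
def boundary_condition_scalings(timestep: float, timestep_scaling: float = 10.0, sigma_data: float = 0.5):
    """Sketch of the c_skip / c_out scalings used by consistency models (sigma_data is an assumption)."""
    scaled_t = timestep * timestep_scaling
    c_skip = sigma_data**2 / (scaled_t**2 + sigma_data**2)
    c_out = scaled_t / (scaled_t**2 + sigma_data**2) ** 0.5
    return c_skip, c_out

# At large timesteps c_skip -> 0 and c_out -> 1, so the model output dominates;
# at timestep 0 the sample passes through unchanged (c_skip = 1, c_out = 0).
```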
LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with non-Markovian guidance.

This model inherits from SchedulerMixin and ConfigMixin. ConfigMixin takes care of storing all config attributes that are passed in the scheduler's __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
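As a small illustration of the config and loading/saving machinery (the checkpoint name and output directory below are only examples):

```python
from diffusers import LCMScheduler

# Load the scheduler config stored under the pipeline's "scheduler" subfolder (example repo).
scheduler = LCMScheduler.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", subfolder="scheduler")

# Config attributes passed to __init__ are stored on scheduler.config.
print(scheduler.config.num_train_timesteps)       # e.g. 1000
print(scheduler.config.original_inference_steps)  # e.g. 50

# Persist the config locally (writes a scheduler_config.json file).
scheduler.save_pretrained("./my-lcm-scheduler")
```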
scale_model_input < source >
( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor

Parameters

sample (torch.FloatTensor):
The input sample.
timestep (int, optional):
The current timestep in the diffusion chain.

Returns

torch.FloatTensor: A scaled input sample.

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
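A scheduler-agnostic denoising loop can therefore call this unconditionally; for LCMScheduler the call is effectively a pass-through. A sketch, with latents and t assumed to come from the surrounding loop:

```python
# Safe to call for any scheduler: schedulers that need input scaling apply it here,
# while LCMScheduler is assumed to return the sample unchanged.
latent_model_input = scheduler.scale_model_input(latents, t)
```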
set_begin_index < source >
( begin_index: int = 0 )

Parameters

begin_index (int):
The begin index for the scheduler.

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.
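For example, an image-to-image style pipeline that skips the first part of the schedule might do something like the following sketch; the index value is arbitrary and only illustrative.

```python
scheduler.set_timesteps(num_inference_steps=4, device="cuda")

# Tell the scheduler to start stepping from the second entry of scheduler.timesteps,
# e.g. when the earliest timesteps are skipped because of an img2img strength setting.
scheduler.set_begin_index(1)
```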
set_timesteps < source >
( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 )

Parameters

num_inference_steps (int, optional):
The number of diffusion steps used when generating samples with a pre-trained model. If used, timesteps must be None.
device (str or torch.device, optional):
The device to which the timesteps should be moved. If None, the timesteps are not moved.
original_inference_steps (int, optional):
The original number of inference steps, which will be used to generate a linearly-spaced timestep schedule (which is different from the standard diffusers implementation). We will then take num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as our final timestep schedule. If not set, this will default to the original_inference_steps attribute.
timesteps (List[int], optional):
Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep schedule is used. If timesteps is passed, num_inference_steps must be None.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).
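For instance, a 4-step schedule drawn from the 50-step distillation schedule might be set up as in this sketch; the device and step counts are illustrative.

```python
scheduler.set_timesteps(num_inference_steps=4, original_inference_steps=50, device="cuda")

# Four descending timesteps, selected evenly in index space from the 50-step
# linearly-spaced training/distillation schedule.
print(scheduler.timesteps)
```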
step < source >
( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple

Parameters

model_output (torch.FloatTensor):
The direct output from the learned diffusion model.
timestep (int):
The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor):
A current instance of a sample created by the diffusion process.
generator (torch.Generator, optional):
A random number generator.
return_dict (bool, optional, defaults to True):
Whether or not to return a LCMSchedulerOutput or a tuple.

Returns

~schedulers.scheduling_utils.LCMSchedulerOutput or tuple: If return_dict is True, LCMSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
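Putting it together, a minimal hand-written denoising loop might look like the sketch below. The unet and prompt_embeds objects, the latent shape, and the step count are assumptions standing in for whatever the surrounding pipeline provides; with return_dict=False the method returns the (prev_sample, denoised) pair described above.

```python
import torch

scheduler.set_timesteps(num_inference_steps=4, device="cuda")

# Start from pure noise scaled by the scheduler's initial noise sigma.
latents = torch.randn((1, 4, 64, 64), device="cuda") * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    # Predict the noise residual (prediction_type="epsilon") with a hypothetical UNet.
    noise_pred = unet(latents, t, encoder_hidden_states=prompt_embeds).sample

    # prev_sample feeds the next iteration; denoised is the consistency-model
    # estimate of the clean latents at this step.
    latents, denoised = scheduler.step(noise_pred, t, latents, return_dict=False)

# After the final step, `denoised` can be decoded by the VAE into an image.
```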