LCM-LoRA docs #5782
Merged
Commits (15):

- 0356568 begin doc (patil-suraj)
- f5e851b fix examples (patil-suraj)
- 0cc9a5b add in toctree (patil-suraj)
- 0fb995b fix toctree (patil-suraj)
- 1e1d71d improve copy (patil-suraj)
- d2c0b2f improve introductions (patil-suraj)
- f9f09c7 add lcm doc (patil-suraj)
- cab206b Merge branch 'main' of https://github.com/huggingface/diffusers into … (patil-suraj)
- 1adb1fc fix filename (patil-suraj)
- 9b84385 Apply suggestions from code review (patil-suraj)
- f383066 address Sayak's comments (patil-suraj)
- 6a85b32 remove controlnet aux (patil-suraj)
- 5a3ae20 open in colab (patil-suraj)
- 67a4c17 move to Specific pipeline examples (patil-suraj)
- 072e8ff update controlent and adapter examples (patil-suraj)
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Performing inference with LCM
Latent Consistency Models (LCMs) enable high-quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings.

From the [official website](https://latent-consistency-models.github.io/):
> LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.

For a more technical overview of LCMs, refer to [the paper](https://huggingface.co/papers/2310.04378).
LCM distilled models are available for [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), and the [SSD-1B](https://huggingface.co/segmind/SSD-1B) model. All the checkpoints can be found in this [collection](https://huggingface.co/collections/latent-consistency/latent-consistency-models-weights-654ce61a95edd6dffccef6a8).
This guide shows how to perform inference with LCMs for:

- text-to-image
- image-to-image
- combined with style LoRAs
- ControlNet/T2I-Adapter
## Text-to-image
You'll use the [`StableDiffusionXLPipeline`] pipeline with the [`LCMScheduler`] and then load the LCM-distilled UNet. Together, the distilled UNet and the scheduler enable a fast inference workflow, overcoming the slow iterative nature of diffusion models.
```python
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler
import torch

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
```

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdxl_t2i.png)
Notice that we use only 4 steps for generation, which is far fewer than what's typically used for standard SDXL.

Some details to keep in mind:
* To perform classifier-free guidance, the batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don't have any effect on the denoising process.
* The UNet was trained using the [3., 13.] guidance scale range, so that is the ideal range for `guidance_scale`. However, disabling guidance by setting `guidance_scale` to 1.0 is also effective in most cases; the sketch below sweeps a few values.
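To see this in practice, here is a minimal sketch that sweeps a few `guidance_scale` values. It assumes the `pipe` and `prompt` from the text-to-image example above, and the output filenames are purely illustrative.

```python
# Minimal sketch, reusing `pipe` and `prompt` from the example above.
# Setting guidance_scale to 1.0 effectively disables guidance.
for scale in [1.0, 4.0, 8.0, 13.0]:
    generator = torch.manual_seed(0)  # fix the seed so only the scale varies
    image = pipe(
        prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=scale
    ).images[0]
    image.save(f"lcm_sdxl_gs_{scale}.png")
```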
## Image-to-image
LCMs can be applied to image-to-image tasks too. Let's look at how we can perform image-to-image generation with LCMs. For this example we'll use the [LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) model, but the same steps can be applied to other LCM models as well.
```python
import torch
from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler
from diffusers.utils import make_image_grid, load_image

unet = UNet2DConditionModel.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    subfolder="unet",
    torch_dtype=torch.float16,
)

pipe = AutoPipelineForImage2Image.from_pretrained(
    "Lykon/dreamshaper-7",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
generator = torch.manual_seed(0)
image = pipe(
    prompt,
    image=init_image,
    num_inference_steps=4,
    guidance_scale=7.5,
    strength=0.5,
    generator=generator
).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdv1-5_i2i.png)
<Tip>

Based on your prompt and the image you provide, you can get different results. To get the best results, we recommend trying different values for the `num_inference_steps`, `strength`, and `guidance_scale` parameters and choosing the best one, as in the sketch below.

</Tip>
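For instance, a minimal sketch of such a sweep over `strength`, assuming the `pipe`, `init_image`, and `prompt` from the example above (the output filenames are illustrative):

```python
# Minimal sketch, reusing `pipe`, `init_image`, and `prompt` from above.
# Image-to-image pipelines run roughly int(num_inference_steps * strength)
# denoising steps, so very low strength values leave few steps for the LCM.
for strength in [0.3, 0.5, 0.7]:
    generator = torch.manual_seed(0)
    image = pipe(
        prompt,
        image=init_image,
        num_inference_steps=4,
        guidance_scale=7.5,
        strength=strength,
        generator=generator,
    ).images[0]
    image.save(f"lcm_img2img_strength_{strength}.png")
```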
## Combined with style LoRAs
LCMs can be used with other style LoRAs to generate styled images in very few steps (4-8). In the following example we'll use the [papercut LoRA](https://huggingface.co/TheLastBen/Papercut_SDXL).
```python
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler
import torch

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut")

prompt = "papercut, a cute fox"

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
image
```

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdx_lora_mix.png)
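If the style comes out too strong or too weak, you can scale the LoRA's contribution at inference time. A minimal sketch, assuming the `pipe` above with the papercut LoRA already loaded (the 0.8 value is just an example):

```python
# Minimal sketch, assuming `pipe` from above with the papercut LoRA loaded.
# Pass a LoRA scale via cross_attention_kwargs to weaken or strengthen the style.
generator = torch.manual_seed(0)
image = pipe(
    prompt="papercut, a cute fox",
    num_inference_steps=4,
    generator=generator,
    guidance_scale=8.0,
    cross_attention_kwargs={"scale": 0.8},  # 1.0 applies the LoRA at full strength
).images[0]
```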
## ControlNet/T2I-Adapter
| LCM can be used with controlnet/t2iadapter. Let's look at how we can perform inference with controlnet/t2iadapter. | ||
patil-suraj marked this conversation as resolved.
Outdated
Show resolved
Hide resolved
|
||
|
|
||
### ControlNet with SD-v1-5 LCM
For this example we'll use the [LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) model with a canny ControlNet, but the same steps can be applied to other LCM models as well.
```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import cv2
import numpy as np

image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((512, 512))

image = np.array(image)

low_threshold = 100
high_threshold = 200

# extract the canny edge map used as the ControlNet conditioning image
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
canny_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

generator = torch.manual_seed(0)
image = pipe(
    "the mona lisa",
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=1,
    controlnet_conditioning_scale=0.75,
    generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdv1-5_controlnet.png)
<Tip>

The inference parameters in this example might not work for all inputs, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best ones, as in the sketch below.

</Tip>
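As a starting point for that search, a minimal sketch that sweeps `controlnet_conditioning_scale`, assuming the `pipe` and `canny_image` from the example above:

```python
# Minimal sketch, reusing `pipe` and `canny_image` from the example above.
# Sweep the conditioning scale to trade off prompt fidelity against edge adherence.
images = []
for cond_scale in [0.5, 0.75, 1.0]:
    generator = torch.manual_seed(0)
    images.append(
        pipe(
            "the mona lisa",
            image=canny_image,
            num_inference_steps=4,
            guidance_scale=1,
            controlnet_conditioning_scale=cond_scale,
            generator=generator,
        ).images[0]
    )
make_image_grid(images, rows=1, cols=3)
```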
### T2IAdapter with SDXL LCM
This example shows how to use `lcm-sdxl` with the [Canny T2IAdapter](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0).
Before running this example, you need to install the `controlnet_aux` package.

```bash
pip install controlnet_aux
```
```python
from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler
from diffusers.utils import load_image, make_image_grid
from controlnet_aux.canny import CannyDetector
import torch

# load adapter
adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    adapter=adapter,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

canny_detector = CannyDetector()

url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg"
image = load_image(url)

# Detect the canny map in low resolution to avoid high-frequency details
canny_image = canny_detector(image, detect_resolution=384, image_resolution=1024)

prompt = "Mystical fairy in real, magic, 4k picture, high quality"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=1.5,
    adapter_conditioning_scale=0.8,
    adapter_conditioning_factor=1,
    generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdxl_t2iadapter.png)