
Commit 7f12b4d

estelleafl, Aflalo, patrickvonplaten, and yiyixuxu authored and committed
[ldm3d] Ldm3d upscaler to community pipeline (huggingface#5870)
--------- Co-authored-by: Aflalo <[email protected]> Co-authored-by: Patrick von Platen <[email protected]> Co-authored-by: YiYi Xu <[email protected]>
1 parent 4f8e47d commit 7f12b4d

File tree

8 files changed: +964 -5 lines


docs/source/en/_toctree.yml

Lines changed: 1 addition & 1 deletion
@@ -334,7 +334,7 @@
   - local: api/pipelines/stable_diffusion/upscale
     title: Super-resolution
   - local: api/pipelines/stable_diffusion/ldm3d_diffusion
-    title: LDM3D Text-to-(RGB, Depth)
+    title: LDM3D Text-to-(RGB, Depth), Text-to-(RGB-pano, Depth-pano), LDM3D Upscaler
   - local: api/pipelines/stable_diffusion/adapter
     title: Stable Diffusion T2I-Adapter
   - local: api/pipelines/stable_diffusion/gligen

docs/source/en/api/pipelines/overview.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ The table below lists all the pipelines currently available in 🤗 Diffusers an
 | [Kandinsky 2.2](kandinsky_v22) | text2image, image2image, inpainting |
 | [Latent Consistency Models](latent_consistency_models) | text2image |
 | [Latent Diffusion](latent_diffusion) | text2image, super-resolution |
-| [LDM3D](stable_diffusion/ldm3d_diffusion) | text2image, text-to-3D |
+| [LDM3D](stable_diffusion/ldm3d_diffusion) | text2image, text-to-3D, text-to-pano, upscaling |
 | [MultiDiffusion](panorama) | text2image |
 | [MusicLDM](musicldm) | text2audio |
 | [Paint by Example](paint_by_example) | inpainting |

docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md

Lines changed: 19 additions & 1 deletion
@@ -14,6 +14,11 @@ specific language governing permissions and limitations under the License.

 LDM3D was proposed in [LDM3D: Latent Diffusion Model for 3D](https://huggingface.co/papers/2305.10853) by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. Unlike existing text-to-image diffusion models such as [Stable Diffusion](./overview), which generate only an image, LDM3D generates both an image and a depth map from a given text prompt. With almost the same number of parameters, LDM3D creates a latent space that can compress both the RGB images and the depth maps.

+Two checkpoints are available for use:
+- [ldm3d-original](https://huggingface.co/Intel/ldm3d): the original checkpoint used in the [paper](https://arxiv.org/pdf/2305.10853.pdf).
+- [ldm3d-4c](https://huggingface.co/Intel/ldm3d-4c): a newer version of LDM3D that uses 4-channel inputs instead of 6-channel inputs and is fine-tuned on higher-resolution images.
+
+
 The abstract from the paper is:

 *This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at [this url](https://t.ly/tdi2).*
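A minimal sketch of loading the checkpoints listed above (the `StableDiffusionLDM3DPipeline` API matches the community example later on this page; the prompt and output file names are illustrative):

```py
from diffusers import StableDiffusionLDM3DPipeline

# Either checkpoint id works here; "Intel/ldm3d-4c" is the newer 4-channel variant.
pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
pipe.to("cuda")

output = pipe("A picture of some lemons on a table")
output.rgb[0].save("lemons_rgb.jpg")      # RGB image
output.depth[0].save("lemons_depth.png")  # aligned depth map
```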
@@ -26,12 +31,25 @@ Make sure to check out the Stable Diffusion [Tips](overview#tips) section to lea

 ## StableDiffusionLDM3DPipeline

-[[autodoc]] StableDiffusionLDM3DPipeline
+[[autodoc]] pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.StableDiffusionLDM3DPipeline
 	- all
 	- __call__

+
 ## LDM3DPipelineOutput

 [[autodoc]] pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput
 	- all
 	- __call__
+
+## Upscaler
+
+[LDM3D-VR](https://arxiv.org/pdf/2311.03226.pdf) is an extended version of LDM3D.
+
+The abstract from the paper is:
+
+*Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods.*
+
+Two checkpoints are available for use:
+- [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano): enables the generation of panoramic RGB and depth images; use it with the `StableDiffusionLDM3DPipeline`.
+- [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr): enables the upscaling of RGB and depth images; it can be run in cascade after the original LDM3D pipeline, using the `StableDiffusionUpscaleLDM3DPipeline` community pipeline.
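A rough sketch of the pano path (a sketch, not part of this commit: the checkpoint name comes from the list above, and the 1024×512 resolution is an assumption; the checkpoint card documents the intended values):

```py
from diffusers import StableDiffusionLDM3DPipeline

# ldm3d-pano reuses the same text-to-(rgb, depth) pipeline class.
pipe_pano = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-pano")
pipe_pano.to("cuda")

# The wide aspect ratio below is an assumed value for panoramas.
output = pipe_pano("360 view of a large bedroom", width=1024, height=512)
output.rgb[0].save("bedroom_pano_rgb.jpg")
output.depth[0].save("bedroom_pano_depth.png")
```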

docs/source/en/api/pipelines/stable_diffusion/overview.md

Lines changed: 7 additions & 1 deletion
@@ -121,10 +121,16 @@ The table below summarizes the available Stable Diffusion pipelines, their suppo
 	<td class="px-4 py-2 text-gray-700">
 		<a href="./ldm3d_diffusion">StableDiffusionLDM3D</a>
 	</td>
-	<td class="px-4 py-2 text-gray-700">text-to-rgb, text-to-depth</td>
+	<td class="px-4 py-2 text-gray-700">text-to-rgb, text-to-depth, text-to-pano</td>
 	<td class="px-4 py-2"><a href="https://huggingface.co/spaces/r23/ldm3d-space"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
 	</td>
 </tr>
+<tr>
+	<td class="px-4 py-2 text-gray-700">
+		<a href="./ldm3d_diffusion">StableDiffusionUpscaleLDM3D</a>
+	</td>
+	<td class="px-4 py-2 text-gray-700">ldm3d super-resolution</td>
+</tr>
 </tbody>
 </table>
 </div>

docs/source/ja/index.md

Lines changed: 1 addition & 0 deletions
@@ -96,3 +96,4 @@ specific language governing permissions and limitations under the License.
 | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
 | [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
 | [stable_diffusion_ldm3d](./api/pipelines/stable_diffusion/ldm3d_diffusion) | [LDM3D: Latent Diffusion Model for 3D](https://arxiv.org/abs/2305.10853) | Text to Image and Depth Generation |
+| [stable_diffusion_upscaler_ldm3d](./api/pipelines/stable_diffusion/ldm3d_diffusion) | [LDM3D-VR: Latent Diffusion Model for 3D VR](https://arxiv.org/pdf/2311.03226) | Image and Depth Upscaling |

examples/community/README.md

Lines changed: 43 additions & 1 deletion
@@ -48,7 +48,8 @@ prompt-to-prompt | change parts of a prompt and retain image structure (see [pap
 | Latent Consistency Pipeline | Implementation of [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) | [Latent Consistency Pipeline](#latent-consistency-pipeline) | - | [Simian Luo](https://github.com/luosiallen) |
 | Latent Consistency Img2img Pipeline | Img2img pipeline for Latent Consistency Models | [Latent Consistency Img2Img Pipeline](#latent-consistency-img2img-pipeline) | - | [Logan Zoellner](https://github.com/nagolinc) |
 | Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
-
+| LDM3D-sr (LDM3D upscaler) | Upscale low-resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |

 To load a custom pipeline, just pass the `custom_pipeline` argument to `DiffusionPipeline`, naming one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
 ```py
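from diffusers import DiffusionPipeline

# Representative sketch (assumed repo id and file name): any file in
# diffusers/examples/community can be loaded by passing its name as `custom_pipeline`.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder"
)
```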
@@ -2344,6 +2345,47 @@ images = pipe(
 assert len(images) == (len(prompts) - 1) * num_interpolation_steps
 ```

+### StableDiffusionUpscaleLDM3D Pipeline
+
+[LDM3D-VR](https://arxiv.org/pdf/2311.03226.pdf) is an extended version of LDM3D.
+
+The abstract from the paper is:
+
+*Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods.*
+
+Two checkpoints are available for use:
+- [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano): enables the generation of panoramic RGB and depth images; use it with the `StableDiffusionLDM3DPipeline`.
+- [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr): enables the upscaling of RGB and depth images; it can be run in cascade after the original LDM3D pipeline, using this `StableDiffusionUpscaleLDM3DPipeline`.
+
+```py
+from PIL import Image
+from diffusers import StableDiffusionLDM3DPipeline, DiffusionPipeline
+
+# Generate an rgb/depth output from LDM3D
+pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
+pipe_ldm3d.to("cuda")
+
+prompt = "A picture of some lemons on a table"
+output = pipe_ldm3d(prompt)
+rgb_image, depth_image = output.rgb, output.depth
+rgb_image[0].save("lemons_ldm3d_rgb.jpg")
+depth_image[0].save("lemons_ldm3d_depth.png")
+
+# Upscale the previous output to a resolution of (1024, 1024)
+pipe_ldm3d_upscale = DiffusionPipeline.from_pretrained(
+    "Intel/ldm3d-sr", custom_pipeline="pipeline_stable_diffusion_upscale_ldm3d"
+)
+pipe_ldm3d_upscale.to("cuda")
+
+low_res_img = Image.open("lemons_ldm3d_rgb.jpg").convert("RGB")
+low_res_depth = Image.open("lemons_ldm3d_depth.png").convert("L")
+outputs = pipe_ldm3d_upscale(
+    prompt="high quality high resolution uhd 4k image",
+    rgb=low_res_img,
+    depth=low_res_depth,
+    num_inference_steps=50,
+    target_res=[1024, 1024],
+)
+
+upscaled_rgb, upscaled_depth = outputs.rgb[0], outputs.depth[0]
+upscaled_rgb.save("upscaled_lemons_rgb.png")
+upscaled_depth.save("upscaled_lemons_depth.png")
+```
+
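The example above runs the two pipelines in cascade: the upscaler consumes both the low-resolution RGB and depth outputs of the base LDM3D pipeline, conditions the super-resolution step on a generic quality prompt, and `target_res` sets the output resolution (1024×1024 here).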
 ### ControlNet + T2I Adapter Pipeline
 This pipeline combines both ControlNet and T2IAdapter into a single pipeline, where the forward pass is executed once.
 It receives `control_image` and `adapter_image`, as well as `controlnet_conditioning_scale` and `adapter_conditioning_scale`, for the ControlNet and Adapter modules, respectively. Whenever `adapter_conditioning_scale = 0` or `controlnet_conditioning_scale = 0`, it will act as a full ControlNet module or as a full T2IAdapter module, respectively.
