Merged
Changes from all commits (275 commits)
672bd49
Use `t` instead of `timestep` in `_apply_perturbed_attention_guidance…
hlky Dec 16, 2024
a7d5052
Add `dynamic_shifting` to SD3 (#10236)
hlky Dec 16, 2024
3f421fe
Fix `use_flow_sigmas` (#10242)
hlky Dec 16, 2024
87e8157
Fix ControlNetUnion _callback_tensor_inputs (#10218)
hlky Dec 16, 2024
438bd60
Use non-human subject in StableDiffusion3ControlNetPipeline example (…
hlky Dec 16, 2024
7186bb4
Add enable_vae_tiling to AllegroPipeline, fix example (#10212)
hlky Dec 16, 2024
e9a3911
Fix checkpoint in CogView3PlusPipeline example (#10211)
hlky Dec 16, 2024
2f023d7
Fix RePaint Scheduler (#10185)
hlky Dec 16, 2024
5ed761a
Add ControlNetUnion to AutoPipeline from_pretrained (#10219)
hlky Dec 16, 2024
aafed3f
fix downsample bug in MidResTemporalBlock1D (#10250)
holmosaint Dec 16, 2024
9f00c61
[core] TorchAO Quantizer (#10009)
a-r-r-o-w Dec 16, 2024
7667cfc
[docs] Add missing AttnProcessors (#10246)
stevhliu Dec 16, 2024
6fb94d5
[chore] add contribution note for lawrence. (#10253)
sayakpaul Dec 17, 2024
0d96a89
Fix copied from comment in Mochi lora loader (#10255)
a-r-r-o-w Dec 17, 2024
ac86393
[LoRA] Support LTX Video (#10228)
a-r-r-o-w Dec 17, 2024
f9d5a93
[docs] Clarify dtypes for Sana (#10248)
a-r-r-o-w Dec 17, 2024
e24941b
[Single File] Add GGUF support (#9964)
DN6 Dec 17, 2024
128b96f
Fix Mochi Quality Issues (#10033)
DN6 Dec 17, 2024
1524781
[tests] Remove/rename unsupported quantization torchao type (#10263)
a-r-r-o-w Dec 17, 2024
2739241
[docs] delete_adapters() (#10245)
stevhliu Dec 17, 2024
9c68c94
[Community Pipeline] Fix typo that cause error on regional prompting …
cjkangme Dec 17, 2024
ec1c7a7
Add `set_shift` to FlowMatchEulerDiscreteScheduler (#10269)
hlky Dec 17, 2024
9408aa2
[LoRA] feat: lora support for SANA. (#10234)
sayakpaul Dec 18, 2024
ba6fd6e
[chore] fix: licensing headers in mochi and ltx (#10275)
sayakpaul Dec 18, 2024
0ac52d6
Use `torch` in `get_2d_rotary_pos_embed` (#10155)
hlky Dec 18, 2024
63cdf9c
[chore] fix: reamde -> readme (#10276)
sayakpaul Dec 18, 2024
88b015d
Make `time_embed_dim` of `UNet2DModel` changeable (#10262)
Bichidian Dec 18, 2024
8eb73c8
Support pass kwargs to sd3 custom attention processor (#9818)
Matrix53 Dec 18, 2024
83709d5
Flux Control(Depth/Canny) + Inpaint (#10192)
affromero Dec 18, 2024
e222246
Fix sigma_last with use_flow_sigmas (#10267)
hlky Dec 18, 2024
b389f33
Fix Doc links in GGUF and Quantization overview docs (#10279)
DN6 Dec 18, 2024
8304adc
Make zeroing prompt embeds for Mochi Pipeline configurable (#10284)
DN6 Dec 18, 2024
862a7d5
[Single File] Add single file support for Flux Canny, Depth and Fill …
DN6 Dec 18, 2024
c4c99c3
[tests] Fix broken cuda, nightly and lora tests on main for CogVideoX…
a-r-r-o-w Dec 18, 2024
f66bd32
Rename Mochi integration test correctly (#10220)
a-r-r-o-w Dec 18, 2024
f35a387
[tests] remove nullop import checks from lora tests (#10273)
a-r-r-o-w Dec 18, 2024
9c0e20d
[chore] Update README_sana.md to update the default model (#10285)
sayakpaul Dec 19, 2024
f781b8c
Hunyuan VAE tiling fixes and transformer docs (#10295)
a-r-r-o-w Dec 19, 2024
4450d26
Add Flux Control to AutoPipeline (#10292)
hlky Dec 19, 2024
2f7a417
Update lora_conversion_utils.py (#9980)
zhaowendao30 Dec 19, 2024
0ed09a1
Check correct model type is passed to `from_pretrained` (#10189)
hlky Dec 19, 2024
1826a1e
[LoRA] Support HunyuanVideo (#10254)
SHYuanBest Dec 19, 2024
9764f22
[Single File] Add single file support for Mochi Transformer (#10268)
DN6 Dec 19, 2024
3ee9669
Allow Mochi Transformer to be split across multiple GPUs (#10300)
DN6 Dec 19, 2024
074798b
Fix `local_files_only` for checkpoints with shards (#10294)
hlky Dec 19, 2024
d8825e7
Fix failing lora tests after HunyuanVideo lora (#10307)
a-r-r-o-w Dec 19, 2024
b756ec6
unet's `sample_size` attribute is to accept tuple(h, w) in `StableDif…
Foundsheep Dec 19, 2024
648d968
Enable Gradient Checkpointing for UNet2DModel (New) (#7201)
dg845 Dec 20, 2024
3191248
[WIP] SD3.5 IP-Adapter Pipeline Integration (#9987)
guiyrt Dec 20, 2024
41ba8c0
Add support for sharded models when TorchAO quantization is enabled (…
a-r-r-o-w Dec 20, 2024
151b74c
Make tensors in ResNet contiguous for Hunyuan VAE (#10309)
a-r-r-o-w Dec 20, 2024
dbc1d50
[Single File] Add GGUF support for LTX (#10298)
DN6 Dec 20, 2024
17128c4
[LoRA] feat: support loading regular Flux LoRAs into Flux Control, an…
sayakpaul Dec 20, 2024
bf6eaa8
[Tests] add integration tests for lora expansion stuff in Flux. (#10318)
sayakpaul Dec 20, 2024
e12d610
Mochi docs (#9934)
DN6 Dec 20, 2024
b64ca6c
[Docs] Update ltx_video.md to remove generator from `from_pretrained(…
sayakpaul Dec 20, 2024
c8ee4af
docs: fix a mistake in docstring (#10319)
Leojc Dec 20, 2024
9020086
[BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() Ty…
syntaxticsugr Dec 20, 2024
7d4db57
[docs] Fix quantization links (#10323)
stevhliu Dec 20, 2024
a6288a5
[Sana]add 2K related model for Sana (#10322)
lawrence-cj Dec 20, 2024
d413881
[Docs] Update gguf.md to remove generator from the pipeline from_pret…
sayakpaul Dec 21, 2024
a756694
Fix push_tests_mps.yml (#10326)
hlky Dec 21, 2024
bf9a641
Fix EMAModel test_from_pretrained (#10325)
hlky Dec 21, 2024
be20709
Support Flux IP Adapter (#10261)
hlky Dec 21, 2024
233dffd
flux controlnet inpaint config bug (#10291)
yigitozgenc Dec 21, 2024
6aaa051
Community hosted weights for diffusers format HunyuanVideo weights (#…
a-r-r-o-w Dec 23, 2024
f615f00
Fix enable_sequential_cpu_offload in test_kandinsky_combined (#10324)
hlky Dec 23, 2024
7c2f0af
update `get_parameter_dtype` (#10342)
yiyixuxu Dec 23, 2024
da21d59
[Single File] Add Single File support for HunYuan video (#10320)
DN6 Dec 23, 2024
b58868e
[Sana bug] bug fix for 2K model config (#10340)
lawrence-cj Dec 23, 2024
3c2e2aa
`.from_single_file()` - Add missing `.shape` (#10332)
gau-nernst Dec 23, 2024
ffc0eaa
Bump minimum TorchAO version to 0.7.0 (#10293)
a-r-r-o-w Dec 23, 2024
6a970a4
[docs] fix: torchao example. (#10278)
sayakpaul Dec 23, 2024
02c777c
[tests] Refactor TorchAO serialization fast tests (#10271)
a-r-r-o-w Dec 23, 2024
76e2727
[SANA LoRA] sana lora training tests and misc. (#10296)
sayakpaul Dec 23, 2024
5fcee4a
[Single File] Fix loading (#10349)
DN6 Dec 23, 2024
c34fc34
[Tests] QoL improvements to the LoRA test suite (#10304)
sayakpaul Dec 23, 2024
71cc201
Fix FluxIPAdapterTesterMixin (#10354)
hlky Dec 23, 2024
055d955
Fix failing CogVideoX LoRA fuse test (#10352)
a-r-r-o-w Dec 23, 2024
9d27df8
Rename LTX blocks and docs title (#10213)
a-r-r-o-w Dec 23, 2024
ea1ba0b
[LoRA] test fix (#10351)
sayakpaul Dec 23, 2024
851dfa3
[Tests] Fix more tests sayak (#10359)
sayakpaul Dec 23, 2024
4b55713
[core] LTX Video 0.9.1 (#10330)
a-r-r-o-w Dec 23, 2024
92933ec
[chore] post release 0.32.0 (#10361)
sayakpaul Dec 23, 2024
9d2c8d8
fix test pypi installation in the release workflow (#10360)
sayakpaul Dec 24, 2024
c1e7fd5
[Docs] Added `model search` to community_projects.md (#10358)
suzukimain Dec 24, 2024
6dfaec3
make style for https://github.com/huggingface/diffusers/pull/10368 (#…
yiyixuxu Dec 24, 2024
c0c1168
Make passing the IP Adapter mask to the attention mechanism optional …
elismasilva Dec 24, 2024
023b0e0
[tests] fix `AssertionError: Torch not compiled with CUDA enabled` (#…
faaany Dec 24, 2024
825979d
[training] fix: registration of out_channels in the control flux scri…
sayakpaul Dec 24, 2024
cd991d1
Fix TorchAO related bugs; revert device_map changes (#10371)
a-r-r-o-w Dec 25, 2024
1b202c5
[LoRA] feat: support `unload_lora_weights()` for Flux Control. (#10206)
sayakpaul Dec 25, 2024
f430a0c
Add torch_xla support to pipeline_aura_flow.py (#10365)
AlanPonnachan Dec 27, 2024
83da817
[Add] torch_xla support to pipeline_sana.py (#10364)
SahilCarterr Dec 27, 2024
55ac1db
Default values in SD3 pipelines when submodules are not loaded (#10393)
hlky Dec 27, 2024
01780c3
[Fix] Broken links in hunyuan docs (#10402)
SahilCarterr Dec 28, 2024
5f72473
[training] add ds support to lora sd3. (#10378)
sayakpaul Dec 30, 2024
3f591ef
[Typo] Update md files (#10404)
luchaoqi Dec 31, 2024
0744378
[docs] Quantization tip (#10249)
stevhliu Dec 31, 2024
91008aa
[docs] Video generation update (#10272)
stevhliu Dec 31, 2024
4b9f1c7
Add correct number of channels when resuming from checkpoint for Flux…
thesantatitan Jan 2, 2025
44640c8
Fix Flux multiple Lora loading bug (#10388)
maxs-kan Jan 2, 2025
7ab7c12
[Sana] 1k PE bug fixed (#10431)
lawrence-cj Jan 2, 2025
f4fdb3a
fix bug for ascend npu (#10429)
gameofdimension Jan 2, 2025
68bd693
IP-Adapter support for `StableDiffusion3ControlNetPipeline` (#10363)
guiyrt Jan 2, 2025
3cb6686
[LTX-Video] fix attribute adjustment for ltx. (#10426)
sayakpaul Jan 2, 2025
476795c
Update Flux docstrings (#10423)
a-r-r-o-w Jan 2, 2025
d81cc6f
[docs] Fix internal links (#10418)
stevhliu Jan 2, 2025
f7822ae
Update train_text_to_image_sdxl.py (#8830)
jiagaoxiang Jan 2, 2025
c28db0a
Fix AutoPipeline `from_pipe` where source pipeline is missing target …
hlky Jan 2, 2025
a17832b
add pythor_xla support for render a video (#10443)
chaowenguo0 Jan 3, 2025
4e44534
Update rerender_a_video.py fix dtype error (#10451)
chaowenguo0 Jan 4, 2025
fdcbbdf
Add torch_xla and from_single_file support to TextToVideoZeroPipeline…
hlky Jan 5, 2025
b572635
[Tests] add slow and nightly markers to sd3 lora integation. (#10458)
sayakpaul Jan 6, 2025
1896b1f
`lora_bias` PEFT version check in `unet.load_attn_procs` (#10474)
hlky Jan 6, 2025
04e783c
Update variable names correctly in docs (#10435)
a-r-r-o-w Jan 6, 2025
6da6406
[Fix] broken links in docs (#10434)
SahilCarterr Jan 6, 2025
2f25156
LEditsPP - examples, check height/width, add tiling/slicing (#10471)
hlky Jan 6, 2025
d9d94e1
[LoRA] fix: lora unloading when using expanded Flux LoRAs. (#10397)
sayakpaul Jan 6, 2025
7747b58
Fix hunyuan video attention mask dim (#10454)
a-r-r-o-w Jan 6, 2025
8f2253c
Add torch_xla and from_single_file to instruct-pix2pix (#10444)
hlky Jan 6, 2025
4f5e3e3
Regarding the RunwayML path for V1.5 did change to stable-diffusion-v…
AMEERAZAM08 Jan 6, 2025
661bde0
Fix style (#10478)
a-r-r-o-w Jan 7, 2025
b94cfd7
[Training] QoL improvements in the Flux Control training scripts (#10…
sayakpaul Jan 7, 2025
f1e0c7c
Refactor instructpix2pix lora to support peft (#10205)
Aiden-Frost Jan 7, 2025
811560b
[LoRA] Support original format loras for HunyuanVideo (#10376)
a-r-r-o-w Jan 7, 2025
628f2c5
Use Pipelines without scheduler (#10439)
hlky Jan 7, 2025
854a046
[CI] Add minimal testing for legacy Torch versions (#10479)
DN6 Jan 7, 2025
e0b96ba
Bump jinja2 from 3.1.4 to 3.1.5 in /examples/research_projects/realfi…
dependabot[bot] Jan 7, 2025
03bcf5a
RFInversionFluxPipeline, small fix for enable_model_cpu_offload & ena…
Teriks Jan 7, 2025
01bd796
Fix HunyuanVideo produces NaN on PyTorch<2.5 (#10482)
hlky Jan 7, 2025
ee7e141
Use pipelines without vae (#10441)
hlky Jan 7, 2025
71ad16b
Add `_no_split_modules` to some models (#10308)
a-r-r-o-w Jan 8, 2025
80fd926
[Sana][bug fix]change clean_caption from True to False. (#10481)
lawrence-cj Jan 8, 2025
cb342b7
Add AuraFlow GGUF support (#10463)
AstraliteHeart Jan 8, 2025
1288c85
Update tokenizers in `pr_test_peft_backend` (#10132)
hlky Jan 8, 2025
e2deb82
Fix compatibility with pipeline when loading model with device_map on…
SunMarc Jan 8, 2025
9731773
[CI] Torch Min Version Test Fix (#10491)
DN6 Jan 8, 2025
4df9d49
Fix tokenizers install from main in LoRA tests (#10494)
hlky Jan 8, 2025
5655b22
Notebooks for Community Scripts-5 (#10499)
ParagEkbote Jan 8, 2025
a0acbdc
fix for #7365, prevent pipelines from overriding provided prompt embe…
bghira Jan 8, 2025
b13cdbb
UNet2DModel mid_block_type (#10469)
hlky Jan 8, 2025
c096457
[Sana 4K] (#10493)
lawrence-cj Jan 8, 2025
95c5ce4
PyTorch/XLA support (#10498)
hlky Jan 8, 2025
daf9d0f
[chore] remove prints from tests. (#10505)
sayakpaul Jan 9, 2025
a26d570
AutoModel instead of AutoModelForCausalLM (#10507)
geronimi73 Jan 9, 2025
d006f07
[docs] Fix missing parameters in docstrings (#10419)
stevhliu Jan 9, 2025
f0c6d97
flux: make scheduler config params optional (#10384)
vladmandic Jan 9, 2025
7bc8b92
add callable object to convert frame into control_frame to reduce cpu…
chaowenguo0 Jan 9, 2025
553b138
[LoRA] clean up `load_lora_into_text_encoder()` and `fuse_lora()` cop…
sayakpaul Jan 9, 2025
7116fd2
Support pass kwargs to cogvideox custom attention processor (#10456)
huanngzh Jan 9, 2025
83ba01a
small readme changes for advanced training examples (#10473)
linoytsaban Jan 10, 2025
12fbe3f
Use Pipelines without unet (#10440)
hlky Jan 10, 2025
a6f043a
[LoRA] allow big CUDA tests to run properly for LoRA (and others) (#9…
sayakpaul Jan 10, 2025
52c05bd
Add a `disable_mmap` option to the `from_single_file` loader to impro…
danhipke Jan 10, 2025
9f06a0d
[CI] Match remaining assertions from big runner (#10521)
sayakpaul Jan 10, 2025
d6c030f
add the xm.mark_step for the first denosing loop (#10530)
chaowenguo0 Jan 10, 2025
1b0fe63
Typo fix in the table number of a referenced paper (#10528)
andreabosisio Jan 11, 2025
e7db062
[DC-AE] support tiling for DC-AE (#10510)
chenjy2003 Jan 11, 2025
36acdd7
[Tests] skip tests properly with `unittest.skip()` (#10527)
sayakpaul Jan 11, 2025
5cda8ea
Use `randn_tensor` to replace `torch.randn` (#10535)
lmxyy Jan 12, 2025
0785dba
[Docs] Add negative prompt docs to FluxPipeline (#10531)
sayakpaul Jan 12, 2025
edb8c1b
[Flux] Improve true cfg condition (#10539)
sayakpaul Jan 12, 2025
e1c7269
Fix Latte output_type (#10558)
a-r-r-o-w Jan 13, 2025
50c81df
Fix StableDiffusionInstructPix2PixPipelineSingleFileSlowTests (#10557)
hlky Jan 13, 2025
980736b
Fix train_dreambooth_lora_sd3_miniature (#10554)
hlky Jan 13, 2025
c3478a4
Fix Nightly AudioLDM2PipelineFastTests (#10556)
hlky Jan 13, 2025
f7cb595
[Single File] Fix loading Flux Dev finetunes with Comfy Prefix (#10545)
DN6 Jan 13, 2025
329771e
[LoRA] improve failure handling for peft. (#10551)
sayakpaul Jan 13, 2025
ae019da
[Sana] add Sana to auto-text2image-pipeline; (#10538)
lawrence-cj Jan 13, 2025
df355ea
Fix documentation for FluxPipeline (#10563)
ohm314 Jan 13, 2025
9fc9c6d
Added IP-Adapter for `StableDiffusion3ControlNetInpaintingPipeline` (…
guiyrt Jan 13, 2025
794f7e4
Implement framewise encoding/decoding in LTX Video VAE (#10488)
rootonchair Jan 13, 2025
74b6752
[Docs] Update hunyuan_video.md to rectify the checkpoint id (#10524)
sayakpaul Jan 13, 2025
aa79d7d
Test sequential cpu offload for torchao quantization (#10506)
a-r-r-o-w Jan 14, 2025
4a4afd5
Fix batch > 1 in HunyuanVideo (#10548)
hlky Jan 14, 2025
3279751
[CI] Update HF Token in Fast GPU Tests (#10568)
DN6 Jan 14, 2025
fbff43a
[FEAT] DDUF format (#10037)
SunMarc Jan 14, 2025
be62c85
[CI] Update HF Token on Fast GPU Model Tests (#10570)
DN6 Jan 14, 2025
6b72784
allow passing hf_token to load_textual_inversion (#10546)
Teriks Jan 14, 2025
3d70777
[Sana-4K] (#10537)
lawrence-cj Jan 14, 2025
4dec63c
IP-Adapter for `StableDiffusion3InpaintPipeline` (#10581)
guiyrt Jan 15, 2025
f9e957f
Fix offload tests for CogVideoX and CogView3 (#10547)
a-r-r-o-w Jan 15, 2025
2432f80
[LoRA] feat: support loading loras into 4bit quantized Flux models. (…
sayakpaul Jan 15, 2025
bba59fb
[Tests] add: test to check 8bit bnb quantized models work with lora l…
sayakpaul Jan 15, 2025
c944f06
[Chore] fix vae annotation in mochi pipeline (#10585)
sayakpaul Jan 15, 2025
b0c8973
[Sana 4K] Add vae tiling option to avoid OOM (#10583)
leisuzz Jan 15, 2025
e8114bd
IP-Adapter for `StableDiffusion3Img2ImgPipeline` (#10589)
guiyrt Jan 16, 2025
b785ddb
[DC-AE, SANA] fix SanaMultiscaleLinearAttention apply_quadratic_atten…
chenjy2003 Jan 16, 2025
0b065c0
Move buffers to device (#10523)
hlky Jan 16, 2025
9e1b8a0
[Docs] Update SD3 ip_adapter model_id to diffusers checkpoint (#10597)
guiyrt Jan 16, 2025
08e62fe
Scheduling fixes on MPS (#10549)
hlky Jan 16, 2025
17d99c4
[Docs] Add documentation about using ParaAttention to optimize FLUX a…
chengzeyi Jan 16, 2025
cecada5
NPU adaption for RMSNorm (#10534)
leisuzz Jan 16, 2025
aeac0a0
implementing flux on TPUs with ptxla (#10515)
entrpn Jan 16, 2025
23b467c
[core] ConsisID (#10140)
SHYuanBest Jan 19, 2025
328e0d2
[training] set rest of the blocks with `requires_grad` False. (#10607)
sayakpaul Jan 19, 2025
4842f5d
chore: remove redundant words (#10609)
sunxunle Jan 20, 2025
75a636d
bugfix for npu not support float64 (#10123)
baymax591 Jan 20, 2025
4ace7d0
[chore] change licensing to 2025 from 2024. (#10615)
sayakpaul Jan 21, 2025
012d08b
Enable dreambooth lora finetune example on other devices (#10602)
jiqing-feng Jan 21, 2025
158a5a8
Remove the FP32 Wrapper when evaluating (#10617)
lmxyy Jan 21, 2025
ec37e20
[tests] make tests device-agnostic (part 3) (#10437)
faaany Jan 21, 2025
a1f9a71
fix offload gpu tests etc (#10366)
yiyixuxu Jan 21, 2025
a647682
Remove cache migration script (#10619)
Wauplin Jan 21, 2025
beacaa5
[core] Layerwise Upcasting (#10347)
a-r-r-o-w Jan 22, 2025
ca60ad8
Improve TorchAO error message (#10627)
a-r-r-o-w Jan 22, 2025
8d6f6d6
[CI] Update HF_TOKEN in all workflows (#10613)
DN6 Jan 22, 2025
04d4092
add onnxruntime-migraphx as part of check for onnxruntime in import_u…
kahmed10 Jan 23, 2025
78bc824
[Tests] modify the test slices for the failing flax test (#10630)
sayakpaul Jan 23, 2025
d77c53b
[docs] fix image path in para attention docs (#10632)
sayakpaul Jan 23, 2025
5483162
[docs] uv installation (#10622)
stevhliu Jan 23, 2025
9684c52
width and height are mixed-up (#10629)
raulc0399 Jan 23, 2025
37c9697
Add IP-Adapter example to Flux docs (#10633)
hlky Jan 23, 2025
a451c0e
removing redundant requires_grad = False (#10628)
YanivDorGalron Jan 23, 2025
5897137
[chore] add a script to extract loras from full fine-tuned models (#1…
sayakpaul Jan 24, 2025
87252d8
Add pipeline_stable_diffusion_xl_attentive_eraser (#10579)
Anonym0u3 Jan 24, 2025
07860f9
NPU Adaption for Sanna (#10409)
leisuzz Jan 24, 2025
4f3ec53
Add sigmoid scheduler in `scheduling_ddpm.py` docs (#10648)
JacobHelwig Jan 26, 2025
4fa2459
create a script to train autoencoderkl (#10605)
lavinal712 Jan 27, 2025
f7f36c7
Add community pipeline for semantic guidance for FLUX (#10610)
Marlon154 Jan 27, 2025
18f7d1d
ControlNet Union controlnet_conditioning_scale for multiple control i…
hlky Jan 27, 2025
4157177
[training] Convert to ImageFolder script (#10664)
hlky Jan 27, 2025
158c5c4
Add provider_options to OnnxRuntimeModel (#10661)
hlky Jan 27, 2025
8ceec90
fix check_inputs func in LuminaText2ImgPipeline (#10651)
victolee0 Jan 27, 2025
e89ab5b
SDXL ControlNet Union pipelines, make control_image argument immutibl…
Teriks Jan 27, 2025
fb42066
Revert RePaint scheduler 'fix' (#10644)
GiusCat Jan 27, 2025
658e24e
[core] Pyramid Attention Broadcast (#9562)
a-r-r-o-w Jan 27, 2025
f295e2e
[fix] refer use_framewise_encoding on AutoencoderKLHunyuanVideo._enco…
hanchchch Jan 28, 2025
c4d4ac2
Refactor gradient checkpointing (#10611)
a-r-r-o-w Jan 28, 2025
7b100ce
[Tests] conditionally check `fp8_e4m3_bf16_max_memory < fp8_e4m3_fp32…
sayakpaul Jan 28, 2025
196aef5
Fix pipeline dtype unexpected change when using SDXL reference commun…
dimitribarbot Jan 28, 2025
e6037e8
[tests] update llamatokenizer in hunyuanvideo tests (#10681)
sayakpaul Jan 29, 2025
33f9361
support StableDiffusionAdapterPipeline.from_single_file (#10552)
Teriks Jan 29, 2025
ea76880
fix(hunyuan-video): typo in height and width input check (#10684)
badayvedat Jan 29, 2025
aad69ac
[FIX] check_inputs function in Auraflow Pipeline (#10678)
SahilCarterr Jan 29, 2025
1ae9b05
Fix enable memory efficient attention on ROCm (#10564)
tenpercent Jan 31, 2025
5d2d239
Fix inconsistent random transform in instruct pix2pix (#10698)
Luvata Jan 31, 2025
9f28f1a
feat(training-utils): support device and dtype params in compute_dens…
badayvedat Feb 1, 2025
537891e
Fixed grammar in "write_own_pipeline" readme (#10706)
N0-Flux-given Feb 3, 2025
3e35f56
Fix Documentation about Image-to-Image Pipeline (#10704)
ParagEkbote Feb 3, 2025
5e8e6cb
[bitsandbytes] Simplify bnb int8 dequant (#10401)
sayakpaul Feb 4, 2025
f63d322
Fix train_text_to_image.py --help (#10711)
nkthiebaut Feb 4, 2025
dbe0094
Notebooks for Community Scripts-6 (#10713)
ParagEkbote Feb 4, 2025
5b1dcd1
[Fix] Type Hint in from_pretrained() to Ensure Correct Type Inference…
SahilCarterr Feb 4, 2025
23bc56a
add provider_options in from_pretrained (#10719)
xieofxie Feb 5, 2025
145522c
[Community] Enhanced `Model Search` (#10417)
suzukimain Feb 6, 2025
cd0a4a8
[bugfix] NPU Adaption for Sana (#10724)
leisuzz Feb 6, 2025
d43ce14
Quantized Flux with IP-Adapter (#10728)
hlky Feb 6, 2025
464374f
EDMEulerScheduler accept sigmas, add final_sigmas_type (#10734)
hlky Feb 7, 2025
3 changes: 2 additions & 1 deletion .github/workflows/build_docker_images.yml
@@ -34,7 +34,7 @@ jobs:
id: file_changes
uses: jitterbit/get-changed-files@v1
with:
-        format: 'space-delimited'
+        format: "space-delimited"
token: ${{ secrets.GITHUB_TOKEN }}

- name: Build Changed Docker Images
@@ -67,6 +67,7 @@ jobs:
- diffusers-pytorch-cuda
- diffusers-pytorch-compile-cuda
- diffusers-pytorch-xformers-cuda
+          - diffusers-pytorch-minimum-cuda
- diffusers-flax-cpu
- diffusers-flax-tpu
- diffusers-onnxruntime-cpu
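The `space-delimited` format in the hunk above means `jitterbit/get-changed-files` hands downstream steps the changed file paths as one space-separated string. A minimal sketch of how such a list could be parsed to decide which images need rebuilding (the helper and the `docker/<image-name>/Dockerfile` layout assumption are illustrative, not taken from the workflow itself):

```python
# Hypothetical sketch: parse a space-delimited changed-file list and pick
# out image names whose Dockerfiles changed. Assumes Dockerfiles live at
# docker/<image-name>/Dockerfile, as in the diffusers repository layout.
changed = (
    ".github/workflows/nightly_tests.yml "
    "docker/diffusers-pytorch-minimum-cuda/Dockerfile "
    "README.md"
)

def images_to_rebuild(space_delimited: str) -> list[str]:
    """Return image names whose Dockerfiles appear in the changed-file list."""
    images = []
    for path in space_delimited.split():
        parts = path.split("/")
        if len(parts) == 3 and parts[0] == "docker" and parts[2] == "Dockerfile":
            images.append(parts[1])
    return images

print(images_to_rebuild(changed))  # ['diffusers-pytorch-minimum-cuda']
```

This is why the quoting style of the `format` value is cosmetic: either way the consumer receives a single whitespace-separated string.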
72 changes: 67 additions & 5 deletions .github/workflows/nightly_tests.yml
@@ -235,15 +235,73 @@ jobs:
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

torch_minimum_version_cuda_tests:
name: Torch Minimum Version CUDA Tests
runs-on:
group: aws-g4dn-2xlarge
container:
image: diffusers/diffusers-pytorch-minimum-cuda
options: --shm-size "16gb" --ipc host --gpus 0
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Install dependencies
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
python -m uv pip install peft@git+https://github.com/huggingface/peft.git
pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git

- name: Environment
run: |
python utils/print_env.py

- name: Run PyTorch CUDA tests
env:
HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_torch_minimum_version_cuda \
tests/models/test_modeling_common.py \
tests/pipelines/test_pipelines_common.py \
tests/pipelines/test_pipeline_utils.py \
tests/pipelines/test_pipelines.py \
tests/pipelines/test_pipelines_auto.py \
tests/schedulers/test_schedulers.py \
tests/others

- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_torch_minimum_version_cuda_stats.txt
cat reports/tests_torch_minimum_version_cuda_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v4
with:
name: torch_minimum_version_cuda_test_reports
path: reports

run_flax_tpu_tests:
name: Nightly Flax TPU Tests
-  runs-on: docker-tpu
+  runs-on:
+    group: gcp-ct5lp-hightpu-8t
if: github.event_name == 'schedule'

container:
image: diffusers/diffusers-flax-tpu
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
+      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
defaults:
run:
shell: bash
@@ -356,6 +414,10 @@ jobs:
config:
- backend: "bitsandbytes"
test_location: "bnb"
+          - backend: "gguf"
+            test_location: "gguf"
+          - backend: "torchao"
+            test_location: "torchao"
runs-on:
group: aws-g6e-xlarge-plus
container:
@@ -443,7 +505,7 @@
# shell: arch -arch arm64 bash {0}
# env:
# HF_HOME: /System/Volumes/Data/mnt/cache
-          # HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          # HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# run: |
# ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
# --report-log=tests_torch_mps.log \
@@ -499,7 +561,7 @@
# shell: arch -arch arm64 bash {0}
# env:
# HF_HOME: /System/Volumes/Data/mnt/cache
# HF_TOKEN: ${{ secrets.HF_TOKEN }}
# HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# run: |
# ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
# --report-log=tests_torch_mps.log \
@@ -519,4 +581,4 @@
# if: always()
# run: |
# pip install slack_sdk tabulate
-# python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
\ No newline at end of file
+# python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
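The nightly jobs above rely on a report-naming convention: a run step passes `--make-reports=<name>` to pytest, and the matching failure step cats `reports/<name>_stats.txt` and `reports/<name>_failures_short.txt`. A small sketch of that pairing (the helper is illustrative, not part of the diffusers test utilities):

```python
# Sketch of the report-naming convention the workflow depends on: the name
# given to --make-reports determines which files the failure step reads.
def failure_report_paths(name: str) -> tuple[str, str]:
    """Paths the 'Failure short reports' step cats for a given report name."""
    return (f"reports/{name}_stats.txt", f"reports/{name}_failures_short.txt")

stats, short = failure_report_paths("tests_torch_minimum_version_cuda")
print(short)  # reports/tests_torch_minimum_version_cuda_failures_short.txt
```

Keeping the `--make-reports` value and the catted filenames in sync is what makes the failure step useful; a mismatch silently prints nothing on failure.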
134 changes: 0 additions & 134 deletions .github/workflows/pr_test_peft_backend.yml

This file was deleted.

65 changes: 65 additions & 0 deletions .github/workflows/pr_tests.yml
@@ -234,3 +234,68 @@ jobs:
with:
name: pr_${{ matrix.config.report }}_test_reports
path: reports

run_lora_tests:
needs: [check_code_quality, check_repository_consistency]
strategy:
fail-fast: false

name: LoRA tests with PEFT main

runs-on:
group: aws-general-8-plus

container:
image: diffusers/diffusers-pytorch-cpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

defaults:
run:
shell: bash

steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Install dependencies
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
# TODO (sayakpaul, DN6): revisit `--no-deps`
python -m pip install -U peft@git+https://github.com/huggingface/peft.git --no-deps
python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
python -m uv pip install -U tokenizers
pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps

- name: Environment
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python utils/print_env.py

- name: Run fast PyTorch LoRA tests with PEFT
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \
-s -v \
--make-reports=tests_peft_main \
tests/lora/
python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \
-s -v \
--make-reports=tests_models_lora_peft_main \
tests/models/ -k "lora"

- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_lora_failures_short.txt
cat reports/tests_models_lora_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v4
with:
name: pr_main_test_reports
path: reports
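The second pytest invocation in the job above narrows `tests/models/` with `-k "lora"`, which selects tests whose names match the keyword expression. A simplified sketch of that selection behavior (real `-k` supports full boolean expressions, not just substring matching):

```python
# Illustrative, simplified model of pytest's -k keyword selection as used
# by `tests/models/ -k "lora"` above: keep tests whose name contains the
# keyword. (pytest's actual matcher also handles and/or/not expressions.)
def select_by_keyword(test_names: list[str], keyword: str) -> list[str]:
    return [name for name in test_names if keyword in name]

names = ["test_lora_save_load", "test_forward_pass", "test_fuse_lora"]
print(select_by_keyword(names, "lora"))  # ['test_lora_save_load', 'test_fuse_lora']
```

This lets the PR job exercise only LoRA-related model tests without maintaining an explicit file list.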
