Labels: bug
Describe the bug
Trying to load an xlabs LoRA (one with single blocks, i.e. one that executes this line upon conversion) now breaks since PR #9535, raising the error ValueError: Adapter name(s) {'<my_adapter_name>'} not in the list of present adapters: set() here.
The same run with a non-xlabs LoRA works.
Reproduction
- use diffusers @ 3159e60
- run the below snippet
from diffusers import FluxPipeline
import torch

ckpt_id = "black-forest-labs/FLUX.1-schnell"  # or FLUX.1-dev
pipe: FluxPipeline = FluxPipeline.from_pretrained(
    ckpt_id,
    torch_dtype=torch.bfloat16,
)
# optional memory optims
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()
pipe.enable_sequential_cpu_offload()
pipe.load_lora_weights("<path_to_xlab_lora_folder>", adapter_name="<some_name>")
pipe.set_adapters("<some_name>", [1])
FYI: I used torch==2.4.1, transformers==4.44.2
Reason
This happens because this condition evaluates to falsy for my xlabs LoRA, so nothing gets put into the set_adapters dictionary; the set of present adapters therefore ends up empty, and the requested adapter name is reported as missing.
System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.12.0
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.0
- Transformers version: 4.44.2
- Accelerate version: 0.34.2
- PEFT version: 0.12.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A10G, 23028 MiB
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Who can help?
Following up as discussed on my PR #9581 (comment).