
CombinedTimestepGuidanceTextProjEmbeddings.forward() missing 1 required positional argument: 'pooled_projection' #9370

@KILOY00

Description

Describe the bug

Hello, I am using the latest version of diffusers to train a LoRA slider for Flux. I pass 'pooled_projections' in the transformer's forward call, but the following error is reported at runtime:

File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 450, in forward
self.time_text_embed(timestep, pooled_projections)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
TypeError: CombinedTimestepGuidanceTextProjEmbeddings.forward() missing 1 required positional argument: 'pooled_projection'
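
For context, guidance-distilled Flux checkpoints (guidance_embeds=True in the transformer config, as with FLUX.1-dev) build time_text_embed as a CombinedTimestepGuidanceTextProjEmbeddings, whose forward takes three positional arguments rather than two, so the two-argument call in the traceback leaves the pooled_projection slot unfilled. A minimal sketch reproducing this, with illustrative dimensions (not read from the failing checkpoint):

import torch
from diffusers.models.embeddings import CombinedTimestepGuidanceTextProjEmbeddings

# Dimensions chosen to match Flux (3072 inner dim, 768-d CLIP pooled output);
# they are illustrative here.
embed = CombinedTimestepGuidanceTextProjEmbeddings(embedding_dim=3072, pooled_projection_dim=768)

timestep = torch.tensor([999.0])
guidance = torch.tensor([3.5])
pooled = torch.randn(1, 768)

temb = embed(timestep, guidance, pooled)  # OK: (timestep, guidance, pooled_projection)
# embed(timestep, pooled)  # -> the TypeError above: missing 'pooled_projection'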

Strangely, after I changed the code at line 450 from "self.time_text_embed(timestep, pooled_projections)" to "self.time_text_embed(timestep, timestep, pooled_projections)", the training code runs normally. However, the trained LoRA has no effect, and the generated image is exactly the same as the original image. I have tried many methods but failed to solve this problem. I hope to get an answer! Thank you!
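
A plausible explanation: passing the timestep twice satisfies the three-argument signature, but it feeds the timestep in as the guidance embedding, a meaningless conditioning signal, which would be consistent with the LoRA having no visible effect. Below is a minimal sketch of the call that a guidance-distilled checkpoint appears to expect, assuming the script drives FluxTransformer2DModel directly; latents, timesteps, prompt_embeds, pooled_prompt_embeds, text_ids, and latent_image_ids stand in for whatever train_lora_flux.py already computes, and 3.5 is an illustrative guidance scale, not a recommended value.

import torch

# Hypothetical variables from the training loop; only `guidance` is new here.
guidance = torch.full((latents.shape[0],), 3.5, device=latents.device)

model_pred = transformer(
    hidden_states=latents,
    timestep=timesteps / 1000,               # diffusers rescales the timestep by 1000 internally
    guidance=guidance,                       # required when config.guidance_embeds is True
    pooled_projections=pooled_prompt_embeds,
    encoder_hidden_states=prompt_embeds,
    txt_ids=text_ids,
    img_ids=latent_image_ids,
    return_dict=False,
)[0]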

Reproduction

python /tmp/code/sliders/conceptmod/textsliders/train_lora_flux.py --name 'light' --rank 16 --alpha 1 --config_file '/tmp/code/sliders/conceptmod/textsliders/data/config.yaml' --peft_type lora

Logs

python /tmp/code/sliders/conceptmod/textsliders/train_lora_flux.py --name 'light' --rank 16 --alpha 1 --config_file '/tmp/code/sliders/conceptmod/textsliders/data/config.yaml' --peft_type lora
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 450, in forward
    self.time_text_embed(timestep, pooled_projections)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: CombinedTimestepGuidanceTextProjEmbeddings.forward() missing 1 required positional argument: 'pooled_projection'

System Info

  • 🤗 Diffusers version: 0.31.0.dev0
  • Platform: Linux-4.15.0-189-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.13
  • PyTorch version (GPU?): 2.2.1+cu121 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.24.6
  • Transformers version: 4.44.1
  • Accelerate version: 0.26.1
  • PEFT version: 0.7.1
  • Bitsandbytes version: 0.42.0
  • Safetensors version: 0.4.1
  • xFormers version: 0.0.25
  • Accelerator: NVIDIA A100-PCIE-40GB, 40960 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

No response

Labels

bug (Something isn't working)