
Conversation

@camenduru
Contributor

Everything changed and I am confused 😐. This probably doesn't work. @patrickvonplaten please help me

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Jan 3, 2023

The documentation is not available anymore as the PR was closed or merged.

@camenduru
Contributor Author

It works like this:

!git clone https://huggingface.co/camenduru/plushies
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("/content/plushies", safety_checker=None, from_flax=True).to("cpu")

But if I do this:

from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("camenduru/plushies", safety_checker=None, from_flax=True).to("cpu")

it downloads the safetensors weights 😐

@camenduru
Contributor Author

If the repo contains safetensors, it downloads the safetensors weights. Maybe this is not a bug; maybe safetensors is not implemented in Flax yet. I will try a different model.


@camenduru
Contributor Author

If only the Flax model is in the repo, it skips it 😐

@camenduru camenduru changed the title From flax [WIP] From Flax Jan 3, 2023
@pcuenca
Member

pcuenca commented Jan 3, 2023

Hi @camenduru!

This is a great effort! However, downloading over the network is a bit more complicated. When you clone the repo, all the files are available locally and you select the Flax ones, but downloading attempts to retrieve only the files that will be needed. I haven't had enough time to study your code in depth, but I believe you should manipulate allow_patterns and ignore_patterns here when loading from Flax weights. Otherwise, the function snapshot_download would not download them.

For easier debugging, I would concentrate on a repo that only has flax weights and not safetensors. When that works, I'd suggest ignoring all the safetensors logic when loading from flax, to make it simpler.

I hope that helps. I'll try to test it better tomorrow. Thanks a lot for working on this!
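The pattern filtering pcuenca describes can be sketched as follows. This is an illustrative mock of how snapshot_download-style allow_patterns/ignore_patterns selection behaves; the helper filter_repo_files and the file names are hypothetical, not diffusers or huggingface_hub code.

```python
from fnmatch import fnmatch

def filter_repo_files(files, allow_patterns=None, ignore_patterns=None):
    """Keep files matching any allow pattern and no ignore pattern (illustrative)."""
    selected = []
    for f in files:
        if allow_patterns and not any(fnmatch(f, p) for p in allow_patterns):
            continue
        if ignore_patterns and any(fnmatch(f, p) for p in ignore_patterns):
            continue
        selected.append(f)
    return selected

# A repo mixing Flax (.msgpack) and safetensors weights:
repo_files = [
    "model_index.json",
    "unet/diffusion_flax_model.msgpack",
    "unet/diffusion_pytorch_model.safetensors",
]

# Loading from Flax should allow .msgpack and skip safetensors:
print(filter_repo_files(repo_files,
                        allow_patterns=["*.json", "*.msgpack"],
                        ignore_patterns=["*.safetensors"]))
# -> ['model_index.json', 'unet/diffusion_flax_model.msgpack']
```

If the patterns are not adjusted this way when from_flax is set, the downloader never fetches the .msgpack files, which matches the "it is skipping" behaviour observed above.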

@camenduru
Contributor Author

@pcuenca thanks ❤ I have a question. I found this: https://setuptools.pypa.io/en/latest/userguide/development_mode.html. @patil-suraj taught me pip install -e ".[dev]". With this -e ".[dev]", is it possible to test my code changes without uninstalling and reinstalling diffusers?

@pcuenca
Member

pcuenca commented Jan 4, 2023

@pcuenca thanks ❤ I have a question. I found this: https://setuptools.pypa.io/en/latest/userguide/development_mode.html. @patil-suraj taught me pip install -e ".[dev]". With this -e ".[dev]", is it possible to test my code changes without uninstalling and reinstalling diffusers?

Absolutely! If you install with -e then any changes will become live, but you'll only see them in a new Python process. This means that if you are developing in a notebook you have to restart the kernel. If you are using a script or a debugging session in VS Code, you have to kill it and start again.
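A minimal sketch of why that restart is needed (the module name demo_mod below is made up for illustration): Python caches imported modules in sys.modules, so re-importing after an edit returns the stale object; only a fresh process, a kernel restart, or an explicit importlib.reload re-executes the source.

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the demo free of stale .pyc files

# Create a throwaway "package" on sys.path (hypothetical module for the demo).
workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
mod_path = pathlib.Path(workdir) / "demo_mod.py"

mod_path.write_text('VERSION = "before"\n')
import demo_mod
print(demo_mod.VERSION)                 # "before"

# Edit the source on disk, like editing an editable install in place.
mod_path.write_text('VERSION = "after"\n')
import demo_mod                         # cached in sys.modules: still "before"
print(demo_mod.VERSION)

importlib.invalidate_caches()
importlib.reload(demo_mod)              # re-executes the file, like a restart
print(demo_mod.VERSION)                 # "after"
```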

@camenduru
Contributor Author

thanks @pcuenca, restarting the kernel worked 🎉 also

from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("camenduru/plushies", safety_checker=None, from_flax=True).to("cpu")

is working now 🎉🎊✨

@camenduru
Contributor Author

How can I pass this test? Should I edit the test? 🤔

=========================== short test summary info ============================
FAILED tests/test_pipelines.py::DownloadTests::test_download_only_pytorch - assert not True
 +  where True = any(<generator object DownloadTests.test_download_only_pytorch.<locals>.<genexpr> at 0x7f270e6e6d60>)
===== 1 failed, 670 passed, 257 skipped, 192 warnings in 163.49s (0:02:43) =====
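A hedged sketch of the assertion pattern behind that failure (not the actual diffusers test; the helper and file names below are illustrative): test_download_only_pytorch checks that a default PyTorch download pulls no Flax ".msgpack" weight files, so "assert not True" means a .msgpack file slipped into the snapshot.

```python
def has_flax_weights(downloaded_files):
    """Illustrative check: does the download contain Flax .msgpack weights?"""
    return any(f.endswith(".msgpack") for f in downloaded_files)

# A PyTorch-only download should contain no Flax weights:
pt_files = ["model_index.json", "unet/diffusion_pytorch_model.bin"]
print(has_flax_weights(pt_files))       # False -> the test's assert passes

# If a .msgpack file is downloaded too, the assert fires like in the log:
mixed = pt_files + ["unet/diffusion_flax_model.msgpack"]
print(has_flax_weights(mixed))          # True -> "assert not True"
```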

@camenduru camenduru marked this pull request as ready for review January 4, 2023 19:35
@camenduru
Contributor Author

camenduru commented Jan 5, 2023

from diffusers import StableDiffusionPipeline
from IPython.display import display  # a builtin in notebooks; imported for scripts

pipe = StableDiffusionPipeline.from_pretrained("camenduru/plushies", safety_checker=None, from_flax=True).to("cpu")
pipe.save_pretrained("pt")
pipe = StableDiffusionPipeline.from_pretrained("/content/pt", safety_checker=None).to("cuda")
image = pipe("duck", num_inference_steps=50).images[0]
display(image)


import random

import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
from IPython.display import display  # a builtin in notebooks; imported for scripts

pipe, params = FlaxStableDiffusionPipeline.from_pretrained("camenduru/plushies-pt", from_pt=True)
pipe.save_pretrained("flax", params=params)
pipe, params = FlaxStableDiffusionPipeline.from_pretrained("/workspaces/flax", dtype=jax.numpy.bfloat16, safety_checker=None)
params = replicate(params)

real_seed = random.randint(0, 2147483647)
prng_seed = jax.random.PRNGKey(real_seed)
prng_seed = jax.random.split(prng_seed, jax.device_count())
num_samples = jax.device_count()
prompt = "duck"
prompt_n = num_samples * [prompt]
prompt_ids = pipe.prepare_inputs(prompt_n)
prompt_ids = shard(prompt_ids)
images = pipe(prompt_ids, params, prng_seed, jit=True).images
images = pipe.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
display(images[0])


@camenduru camenduru requested review from patrickvonplaten and removed request for patil-suraj and pcuenca January 5, 2023 20:42
@camenduru
Contributor Author

Oh no 😐 I did something wrong: it now says "camenduru requested review from patrickvonplaten and removed request for pcuenca and patil-suraj"

@camenduru
Contributor Author

How can I add the reviewers back? Oops, sorry.

Contributor

@patrickvonplaten patrickvonplaten left a comment


I haven't tried it out, but the changes to modeling_utils.py look good to me! Also all tests are passing 😍

Great job @camenduru - this was one of the harder PRs!

@pcuenca @patil-suraj mind taking a look here as well? I'd like to have two more 👀 on this one, as it touches core functionality.

Member

@pcuenca pcuenca left a comment


Looks great, thanks a lot @camenduru! I just left a few minor comments. I'll try to test it tomorrow, but I agree with Patrick that it'd be really cool if we could add a simple test that shows how this works.

Great job!

Comment on lines 154 to 155
else:
    logger.warning(f"All Flax model weights were used when initializing {pt_model.__class__.__name__}.\n")
Member


This is the ok case, right? I wonder if we should just skip this warning.

Contributor Author


ok

Comment on lines 162 to 167
else:
    logger.warning(
        f"All the weights of {pt_model.__class__.__name__} were initialized from the Flax model.\n"
        "If your task is similar to the task the model of the checkpoint was trained on, "
        f"you can already use {pt_model.__class__.__name__} for predictions without further training."
    )
Member


Same comment here, this means everything went well, doesn't it?

Contributor Author


Yes, deleted.

@camenduru
Contributor Author

thanks @pcuenca

if we could add a simple test that shows how this works.

I have a question about the test: should I write a test other than this one? #1900 (comment)

@patrickvonplaten patrickvonplaten changed the title [WIP] From Flax From Flax Jan 12, 2023
@patrickvonplaten patrickvonplaten changed the title From Flax Allow converting Flax to PyTorch by adding a "from_flax" keyword Jan 12, 2023
@patrickvonplaten
Contributor

patrickvonplaten commented Jan 12, 2023

Added a test, conversion seems to be working perfectly! Think we can merge this one :-)

Amazing job @camenduru !

@patrickvonplaten patrickvonplaten merged commit f73ed17 into huggingface:main Jan 12, 2023
@camenduru
Contributor Author

woohoo 🥳 thanks @patrickvonplaten @pcuenca @patil-suraj ❤ 🥳 🎉 🎊

yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
…gingface#1900)

* from_flax

* oops

* oops

* make style with pip install -e ".[dev]"

* oops

* now code quality happy 😋

* allow_patterns += FLAX_WEIGHTS_NAME

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <[email protected]>

* for test

* bye bye is_flax_available()

* oops

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <[email protected]>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <[email protected]>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <[email protected]>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <[email protected]>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <[email protected]>

* make style

* add test

* finihs

Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>