
Conversation

@pcuenca
Member

@pcuenca pcuenca commented Nov 7, 2022

No description provided.

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Nov 7, 2022

The documentation is not available anymore as the PR was closed or merged.

Comment on lines +358 to +359
self.scheduler.set_timesteps(num_inference_steps, device=self.device)
timesteps_tensor = self.scheduler.timesteps
Member Author

This could be simplified if we returned the timesteps tensor from the scheduler.
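The suggestion above can be illustrated with a toy scheduler (a stand-in, not the diffusers API; the class and behavior here are hypothetical):

```python
class TinyScheduler:
    """Toy stand-in for a scheduler, illustrating the suggestion above:
    set_timesteps() both stores and returns the computed timesteps."""

    def set_timesteps(self, num_inference_steps):
        # descending integer timesteps, as schedulers typically produce
        self.timesteps = list(range(num_inference_steps - 1, -1, -1))
        return self.timesteps


# the two-line pattern in the diff then collapses to one line:
timesteps = TinyScheduler().set_timesteps(4)
```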

@pcuenca pcuenca requested review from anton-l, patil-suraj and patrickvonplaten and removed request for patil-suraj and patrickvonplaten November 7, 2022 12:32
sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
self.sigmas = torch.from_numpy(sigmas).to(device=device)
self.timesteps = torch.from_numpy(timesteps).to(device=device)
if str(device).startswith("mps"):
Member

@anton-l anton-l Nov 7, 2022

Can it be mps:1? Is that why there's a startswith? 🤯

Member Author

Exactly, I found mps:0 when testing on my computer :)
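The point of the check being discussed: an mps device can stringify with an index (e.g. `str(torch.device("mps", 0))` is `"mps:0"`), so an exact equality test would miss it. A minimal sketch of the check from the diff, using plain strings so no torch import is needed:

```python
def is_mps(device) -> bool:
    # device may be a torch.device or a string; stringified mps devices
    # can carry an index ("mps:0"), so an exact == "mps" test would miss them
    return str(device).startswith("mps")
```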

Member Author

In other places we use device.type == "mps", but the method signature allows strings too.
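`torch.device.type` strips any `:index` suffix, which is why `device.type == "mps"` works where a `torch.device` is guaranteed. A sketch of the equivalent handling for plain strings (pure Python; `device_type` is a hypothetical helper mimicking what `torch.device(...).type` returns):

```python
def device_type(device: str) -> str:
    # mimics torch.device(...).type: drop an optional ":index" suffix
    return device.split(":", 1)[0]


# both spellings normalize to the same device type
assert device_type("mps") == device_type("mps:0") == "mps"
```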

Contributor

@patrickvonplaten patrickvonplaten left a comment

Thanks a lot for fixing!

Just curious, did you try out moving everything to float32? The weights of the models are always in float32, so I'd assume everything is downcast to float32 during the forward pass anyway.

Let's maybe leave it as is for now, and once we have finished our whole test migration to high-precision few-step pipelines we could revisit this PR and possibly just always use float32?

@pcuenca
Member Author

pcuenca commented Nov 8, 2022

Just curious, did you try out moving everything to float32? The weights of the models are always in float32, so I'd assume everything is downcast to float32 during the forward pass anyway.

I thought about it but didn't want to change anything more than necessary. I agree we should verify it :)

Let's maybe leave it as is for now, and once we have finished our whole test migration to high-precision few-step pipelines we could revisit this PR and possibly just always use float32?

Agreed!
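The "always float32" idea under discussion would amount to casting the scheduler's numpy arrays unconditionally before converting them to tensors, rather than special-casing mps. A minimal sketch, assuming numpy (the values here are illustrative, not the scheduler's real sigmas):

```python
import numpy as np

# numpy produces float64 by default; casting to float32 unconditionally
# is safe on every backend, including mps, which lacks float64 support
sigmas = np.linspace(1.0, 0.0, 10)      # dtype is float64 here
sigmas = sigmas.astype(np.float32)      # the "always float32" option
```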

Contributor

@patil-suraj patil-suraj left a comment

Looks good, thanks a lot for fixing this!

@pcuenca pcuenca merged commit 813744e into main Nov 8, 2022
@pcuenca pcuenca deleted the fix-scheduler-mps branch November 8, 2022 12:11
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* Schedulers: don't use float64 on mps

* Test set_timesteps() on device (float schedulers).

* SD pipeline: use device in set_timesteps.

* SD in-painting pipeline: use device in set_timesteps.

* Tests: fix mps crashes.

* Skip test_load_pipeline_from_git on mps.

Not compatible with float16.

* Use device.type instead of str in Euler schedulers.
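The first commit bullet ("don't use float64 on mps") corresponds to a conditional downcast before tensor creation. A minimal sketch with plain numpy; `prepare_for_device` is a hypothetical helper, and the real diffusers code also accepts `torch.device` objects:

```python
import numpy as np

def prepare_for_device(arr: np.ndarray, device: str) -> np.ndarray:
    # mps has no float64 support, so downcast to float32 before
    # moving the array to an mps device; other devices keep the dtype
    if str(device).startswith("mps") and arr.dtype == np.float64:
        arr = arr.astype(np.float32)
    return arr


timesteps = prepare_for_device(np.arange(10, dtype=np.float64), "mps:0")
```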