
Commit 1a79969

Initial ONNX doc (TODO: Installation) (#426)
1 parent f55190b · commit 1a79969

File tree: 1 file changed (+21 −10 lines)

docs/source/optimization/onnx.mdx

Lines changed: 21 additions & 10 deletions
@@ -11,22 +11,33 @@ specific language governing permissions and limitations under the License.
 -->
 
 
+# How to use the ONNX Runtime for inference
 
-# Quicktour
+🤗 Diffusers provides a Stable Diffusion pipeline compatible with the ONNX Runtime. This allows you to run Stable Diffusion on any hardware that supports ONNX (including CPUs), and where an accelerated version of PyTorch is not available.
 
-Start using Diffusers🧨 quickly!
-To start, use the [`DiffusionPipeline`] for quick inference and sample generations!
+## Installation
 
-```
-pip install diffusers
-```
+- TODO
 
-## Main classes
+## Stable Diffusion Inference
 
-### Models
+The snippet below demonstrates how to use the ONNX runtime. You need to use `StableDiffusionOnnxPipeline` instead of `StableDiffusionPipeline`. You also need to download the weights from the `onnx` branch of the repository, and indicate the runtime provider you want to use.
 
-### Schedulers
+```python
+# make sure you're logged in with `huggingface-cli login`
+from diffusers import StableDiffusionOnnxPipeline
 
-### Pipeliens
+pipe = StableDiffusionOnnxPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-4",
+    revision="onnx",
+    provider="CUDAExecutionProvider",
+    use_auth_token=True,
+)
+
+prompt = "a photo of an astronaut riding a horse on mars"
+image = pipe(prompt).images[0]
+```
 
+## Known Issues
 
+- Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.
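
As a follow-up to the "Known Issues" bullet added above, here is a minimal sketch of the suggested workaround: iterating over prompts one at a time instead of passing them as a single batch. The pipeline setup mirrors the snippet in the diff; the extra prompts and output filenames are illustrative only.

```python
# Workaround sketch for the batching memory issue: run prompts one at a time
# instead of passing them to the pipeline as a single batch.
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CUDAExecutionProvider",  # or "CPUExecutionProvider" to run on CPU
    use_auth_token=True,
)

# Example prompts (illustrative only)
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a photo of an astronaut riding a bicycle on the moon",
]

# Iterate instead of batching: one forward pass per prompt.
images = [pipe(p).images[0] for p in prompts]

# Output filenames are illustrative
for i, image in enumerate(images):
    image.save(f"output_{i}.png")
```

Each call runs a full diffusion loop for a single prompt, so this trades throughput for a smaller peak memory footprint.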
