docs/content/getting_started/_index.en.md
3 additions, 0 deletions
@@ -155,6 +155,8 @@ You can run `local-ai` directly with a model name, and it will download the mode
| Embeddings | bert-cpp |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core bert-cpp```|
| Embeddings | all-minilm-l6-v2 |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg all-minilm-l6-v2```|
| Audio to Text | whisper-base |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core whisper-base```|
+| Text to Audio | rhasspy-voice-en-us-amy |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy```|
+
{{% /tab %}}

{{% tab name="GPU (CUDA 11)" %}}
@@ -169,6 +171,7 @@ You can run `local-ai` directly with a model name, and it will download the mode
| Embeddings | bert-cpp |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core bert-cpp```|
| Embeddings | all-minilm-l6-v2 |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 all-minilm-l6-v2```|
| Audio to Text | whisper-base |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core whisper-base```|
+| Text to Audio | rhasspy-voice-en-us-amy |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core rhasspy-voice-en-us-amy```|
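For context on the rows this change adds: once one of the `docker run` commands above has the API listening on port 8080, speech can be requested from LocalAI's `/tts` endpoint. A minimal sketch follows; the voice file name `en-us-amy-low.onnx` is an assumption taken from typical rhasspy voice packaging and may differ in your gallery version.

```shell
#!/bin/sh
# Assumptions (not part of the diff above): the container from the table is
# running and publishing port 8080, and the rhasspy voice file is named
# en-us-amy-low.onnx -- check your model gallery for the exact name.
BASE_URL="http://localhost:8080"
MODEL="en-us-amy-low.onnx"

# Print the TTS request to run against the live container (pipe to `sh` to
# execute it for real); -o saves the synthesized audio to hello.wav.
cat <<EOF
curl -s ${BASE_URL}/tts \\
  -H "Content-Type: application/json" \\
  -d '{"model": "${MODEL}", "input": "Hello from LocalAI"}' \\
  -o hello.wav
EOF
```

The script only prints the command rather than sending it, so it can be inspected (or adapted for the GPU image) before a container is actually running.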