Commit e285810 (parent b0fda36)

docs: updates

File tree

1 file changed: +3 −0 lines

docs/content/getting_started/_index.en.md

Lines changed: 3 additions & 0 deletions
@@ -155,6 +155,8 @@ You can run `local-ai` directly with a model name, and it will download the mode
 | Embeddings | bert-cpp | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core bert-cpp``` |
 | Embeddings | all-minilm-l6-v2 | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg all-minilm-l6-v2``` |
 | Audio to Text | whisper-base | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core whisper-base``` |
+| Text to Audio | rhasspy-voice-en-us-amy | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy``` |
+
 
 {{% /tab %}}
 {{% tab name="GPU (CUDA 11)" %}}
@@ -169,6 +171,7 @@ You can run `local-ai` directly with a model name, and it will download the mode
 | Embeddings | bert-cpp | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core bert-cpp``` |
 | Embeddings | all-minilm-l6-v2 | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 all-minilm-l6-v2``` |
 | Audio to Text | whisper-base | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core whisper-base``` |
+| Text to Audio | rhasspy-voice-en-us-amy | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core rhasspy-voice-en-us-amy``` |
 
 {{% /tab %}}
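Each row added by this commit pairs a model name with a one-line Docker invocation; LocalAI downloads the model on first run. A minimal usage sketch for the new Text to Audio row follows. The concrete image tag (`v2.9.0`) stands in for the `{{< version >}}` Hugo shortcode and is an assumption, as are the TTS request endpoint and payload shape, which are not part of this diff:

```shell
# Sketch, not part of the commit: start the newly added Text to Audio model.
# A concrete tag (v2.9.0 assumed here) replaces the {{< version >}} shortcode.
docker run -ti -p 8080:8080 localai/localai:v2.9.0-ffmpeg-core rhasspy-voice-en-us-amy

# Request speech synthesis once the server is up. The /tts endpoint, the JSON
# fields, and the exact model identifier are assumptions for illustration.
curl http://localhost:8080/tts \
  -H "Content-Type: application/json" \
  -d '{"model": "rhasspy-voice-en-us-amy", "input": "Hello from LocalAI"}' \
  -o output.wav
```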
