Commit d89fa27

docs: updates
1 parent b0fda36 commit d89fa27

2 files changed, +10 −0 lines changed

docs/content/getting_started/_index.en.md

Lines changed: 3 additions & 0 deletions

@@ -155,6 +155,8 @@ You can run `local-ai` directly with a model name, and it will download the mode
 | Embeddings | bert-cpp | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core bert-cpp``` |
 | Embeddings | all-minilm-l6-v2 | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg all-minilm-l6-v2``` |
 | Audio to Text | whisper-base | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core whisper-base``` |
+| Text to Audio | rhasspy-voice-en-us-amy | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy``` |
+
 
 {{% /tab %}}
 {{% tab name="GPU (CUDA 11)" %}}
@@ -169,6 +171,7 @@ You can run `local-ai` directly with a model name, and it will download the mode
 | Embeddings | bert-cpp | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core bert-cpp``` |
 | Embeddings | all-minilm-l6-v2 | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 all-minilm-l6-v2``` |
 | Audio to Text | whisper-base | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core whisper-base``` |
+| Text to Audio | rhasspy-voice-en-us-amy | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core rhasspy-voice-en-us-amy``` |
 
 {{% /tab %}}

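For a quick smoke test of the new Text to Audio entry, the running container can be exercised through LocalAI's /tts endpoint. A minimal sketch, assuming the server is listening on localhost:8080 and that the voice resolves to the file name en-us-amy-low.onnx (the exact model string depends on the voice config shipped with rhasspy-voice-en-us-amy):

    # Request speech synthesis and save the returned audio to a file.
    # The model name below is an assumption; check the voice config if it differs.
    curl http://localhost:8080/tts \
      -H "Content-Type: application/json" \
      -d '{"model": "en-us-amy-low.onnx", "input": "Hi, this is a test."}' \
      --output out.wav
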
embedded/models/bert-cpp.yaml

Lines changed: 7 additions & 0 deletions

@@ -14,3 +14,10 @@ download_files:
   sha256: "a5a174d8772c8a569faf9f3136c441f2c3855b5bf35ed32274294219533feaad"
   uri: "https://huggingface.co/mudler/all-MiniLM-L6-v2/resolve/main/ggml-model-q4_0.bin"
 
+usage: |
+    You can test this model with curl like this:
+
+    curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json" -d '{
+      "input": "Your text string goes here",
+      "model": "bert-cpp-minilm-v6"
+    }'
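For reference, the curl call in the usage block above should come back as an OpenAI-compatible embeddings payload. A sketch of the expected response shape, with illustrative (not actual) values:

    {
      "object": "list",
      "data": [
        {
          "object": "embedding",
          "index": 0,
          "embedding": [0.0123, -0.0456, 0.0789]
        }
      ],
      "model": "bert-cpp-minilm-v6"
    }
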
