
Commit 19639d4

Update path to sentencetransformers backend for local execution
Signed-off-by: Marcus Köhler <[email protected]>
1 parent b1a20ef · commit 19639d4

File tree

1 file changed: +2, -2 lines


docs/content/features/embeddings.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -61,7 +61,7 @@ curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json
 
 ## Huggingface embeddings
 
-To use `sentence-formers` and models in `huggingface` you can use the `huggingface` embedding backend.
+To use `sentence-transformers` and models in `huggingface` you can use the `huggingface` embedding backend.
 
 ```yaml
 name: text-embedding-ada-002
@@ -75,7 +75,7 @@ The `huggingface` backend uses Python [sentence-transformers](https://github.com
 
 {{% notice note %}}
 
-- The `huggingface` backend is an optional backend of LocalAI and uses Python. If you are running `LocalAI` from the containers you are good to go and should be already configured for use. If you are running `LocalAI` manually you must install the python dependencies (`pip install -r /path/to/LocalAI/extra/requirements`) and specify the extra backend in the `EXTERNAL_GRPC_BACKENDS` environment variable ( `EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/path/to/LocalAI/extra/grpc/huggingface/huggingface.py"` ) .
+- The `huggingface` backend is an optional backend of LocalAI and uses Python. If you are running `LocalAI` from the containers you are good to go and should be already configured for use. If you are running `LocalAI` manually you must install the python dependencies (`pip install -r /path/to/LocalAI/extra/requirements`) and specify the extra backend in the `EXTERNAL_GRPC_BACKENDS` environment variable ( `EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/path/to/LocalAI/backend/python/sentencetransformers/sentencetransformers.py"` ) .
 - The `huggingface` backend does support only embeddings of text, and not of tokens. If you need to embed tokens you can use the `bert` backend or `llama.cpp`.
 - No models are required to be downloaded before using the `huggingface` backend. The models will be downloaded automatically the first time the API is used.
 
````
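
For reference, a minimal model definition exercising this backend might look like the sketch below. It is not part of this commit: the field values (the `huggingface-embeddings` backend name, the `all-MiniLM-L6-v2` model) are assumptions taken from the surrounding documentation style, not from the diff.

```yaml
# Sketch of a model config using the huggingface embedding backend.
# Backend name and model below are assumptions for illustration.
name: text-embedding-ada-002
backend: huggingface-embeddings
embeddings: true
parameters:
  model: all-MiniLM-L6-v2
```

When running LocalAI manually (outside the containers), the backend would then be registered with the updated path from this commit, e.g. `EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/path/to/LocalAI/backend/python/sentencetransformers/sentencetransformers.py"`.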

0 commit comments
