* feat: allow passing models via args
* expose it also as an env/arg
* docs: enhancements to build/requirements
* do not always display status
* print download status
* not all messages are debug
`docs/content/advanced/_index.en.md` — 8 lines changed: 0 additions & 8 deletions
````diff
@@ -359,15 +359,7 @@ docker run --env REBUILD=true localai
 docker run --env-file .env localai
 ```
-
-### Build only a single backend
-
-You can control the backends that are built by setting the `GRPC_BACKENDS` environment variable. For instance, to build only the `llama-cpp` backend:
-
-```bash
-make GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
-```
````
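For the `docker run --env-file .env localai` invocation above, a minimal `.env` file might look like the following sketch. The variable names (`THREADS`, `MODELS_PATH`, `DEBUG`) appear in the LocalAI docs, but the values here are illustrative assumptions, not recommended settings:

```shell
# .env — illustrative values only; adjust for your own setup
THREADS=4
MODELS_PATH=/models
DEBUG=true
```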
Building on Mac (M1 or M2) works, but you may need to install some prerequisites using `brew`.
````diff
@@ -188,6 +225,16 @@ make BUILD_TYPE=metal build
 # Note: only models quantized with q4_0 are supported!
 ```
+
+### Build only a single backend
+
+You can control the backends that are built by setting the `GRPC_BACKENDS` environment variable. For instance, to build only the `llama-cpp` backend:
+
+```bash
+make GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
+```
+
+By default, all the backends are built.
 
 ### Windows compatibility
 
 Make sure to give enough resources to the running container. See https://github.com/go-skynet/LocalAI/issues/2
````
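As a sketch of how the `GRPC_BACKENDS` variable might be used beyond the single-backend case: assuming the Makefile treats it as a space-separated list of build targets (which the `backend-assets/grpc/...` path convention suggests, but which is an assumption here), several backends could be selected in one pass. The `whisper` target name below is likewise illustrative:

```bash
# Assumption: GRPC_BACKENDS accepts a space-separated list of targets
make GRPC_BACKENDS="backend-assets/grpc/llama-cpp backend-assets/grpc/whisper" build
```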