Audio File not created when using /tts with bark backend and custom model #1173

@mutschler

Description

LocalAI version:
Docker Image: quay.io/go-skynet/local-ai:v1.30.0-ffmpeg

Environment, CPU architecture, OS, and Version:
Linux Debian-1201-bookworm-amd64-base 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux

Describe the bug
Calling /tts with the bark backend and a custom model doesn't produce any output file.

Request:

curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
     "backend": "bark",
     "input":"Hello!",
     "model": "v2/en_speaker_4"
   }'

Response:

{"error":{"code":404,"message":"sendfile: file /tmp/generated/audio/piper.wav not found","type":""}}

Removing the model field from the request works.
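
For reference, a sketch of the same request with the model field dropped, which does return audio (curl's --output flag just saves the response body; hello.wav is an arbitrary filename):

curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
     "backend": "bark",
     "input":"Hello!"
   }' --output hello.wav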

To Reproduce
docker-compose.yml

services:
  api:
    image: quay.io/go-skynet/local-ai:v1.30.0-ffmpeg
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
      - ./images/:/tmp/generated/images/
      - ./audio/:/tmp/generated/audio/
    command: ["/usr/bin/local-ai" ]

.env

THREADS=7
MODELS_PATH=/models
DEBUG=true

Run docker-compose up and send the request from above.

Expected behavior
The request should return a WAV file, as it does when no model is specified.

Logs

api_1  | 7:54PM DBG Loading model bark from v2/en_speaker_4
api_1  | 7:54PM DBG Loading model in memory from file: /models/v2/en_speaker_4
api_1  | 7:54PM DBG Loading GRPC Model bark: {backendString:bark model:v2/en_speaker_4 threads:0 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc000102820 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py exllama:/build/extra/grpc/exllama/exllama.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py vall-e-x:/build/extra/grpc/vall-e-x/ttsvalle.py vllm:/build/extra/grpc/vllm/backend_vllm.py] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
api_1  | 7:54PM DBG Loading external backend: /build/extra/grpc/bark/ttsbark.py
api_1  | 7:54PM DBG Loading GRPC Process: /build/extra/grpc/bark/ttsbark.py
api_1  | 7:54PM DBG GRPC Service for v2/en_speaker_4 will be running at: '127.0.0.1:35093'
api_1  | 7:54PM DBG GRPC Service state dir: /tmp/go-processmanager4190610619
api_1  | 7:54PM DBG GRPC Service Started
api_1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:35093: connect: connection refused"
api_1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:35093: connect: connection refused"
api_1  | 7:54PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr Server started. Listening on: 127.0.0.1:35093
api_1  | 7:54PM DBG GRPC Service Ready
api_1  | 7:54PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:v2/en_speaker_4 ContextSize:0 Seed:0 NBatch:0 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:0 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/v2/en_speaker_4 Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
api_1  | 7:54PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr Preparing models, please wait
api_1  | 7:54PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr No GPU being used. Careful, inference might be very slow!
api_1  | [127.0.0.1]:53960 200 - GET /readyz
Downloading text_2.pt: 100%|██████████| 5.35G/5.35G [00:46<00:00, 115MB/s]
Downloading (…)solve/main/vocab.txt: 100%|██████████| 996k/996k [00:00<00:00, 2.73MB/s]
Downloading (…)okenizer_config.json: 100%|██████████| 29.0/29.0 [00:00<00:00, 17.3kB/s]
Downloading (…)lve/main/config.json: 100%|██████████| 625/625 [00:00<00:00, 347kB/s]
api_1  | [127.0.0.1]:44484 200 - GET /readyz
Downloading coarse_2.pt: 100%|██████████| 3.93G/3.93G [00:35<00:00, 112MB/s]
api_1  | [127.0.0.1]:59300 200 - GET /readyz
Downloading fine_2.pt: 100%|██████████| 3.74G/3.74G [00:32<00:00, 114MB/s]
api_1  | 7:56PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr Downloading: "https://dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th" to /root/.cache/torch/hub/checkpoints/encodec_24khz-d7cc33bc.th
100%|██████████| 88.9M/88.9M [00:00<00:00, 113MB/s]
api_1  | 7:56PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr text: "Hello!"
api_1  | 7:56PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr model: "/models/v2/en_speaker_4"
api_1  | 7:56PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr dst: "/tmp/generated/audio/piper.wav"
api_1  | 7:56PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35093): stderr
api_1  | [172.30.0.1]:47090 404 - POST /tts
api_1  | [127.0.0.1]:40430 200 - GET /readyz

Additional context
It seems auto_gptq is missing from the image as well. The first time I tried it, I got an error that could be fixed by manually running pip install auto_gptq inside the container:

api_1  | 7:49PM DBG Loading model bark from v2/en_speaker_4
api_1  | 7:49PM DBG Loading model in memory from file: /models/v2/en_speaker_4
api_1  | 7:49PM DBG Loading GRPC Model bark: {backendString:bark model:v2/en_speaker_4 threads:0 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc000434340 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py exllama:/build/extra/grpc/exllama/exllama.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py vall-e-x:/build/extra/grpc/vall-e-x/ttsvalle.py vllm:/build/extra/grpc/vllm/backend_vllm.py] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
api_1  | 7:49PM DBG Loading external backend: /build/extra/grpc/bark/ttsbark.py
api_1  | 7:49PM DBG Loading GRPC Process: /build/extra/grpc/bark/ttsbark.py
api_1  | 7:49PM DBG GRPC Service for v2/en_speaker_4 will be running at: '127.0.0.1:35829'
api_1  | 7:49PM DBG GRPC Service state dir: /tmp/go-processmanager3515958530
api_1  | 7:49PM DBG GRPC Service Started
api_1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:35829: connect: connection refused"
api_1  | 7:49PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35829): stderr Traceback (most recent call last):
api_1  | 7:49PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35829): stderr   File "/build/extra/grpc/bark/ttsbark.py", line 11, in <module>
api_1  | 7:49PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35829): stderr     from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
api_1  | 7:49PM DBG GRPC(v2/en_speaker_4-127.0.0.1:35829): stderr ModuleNotFoundError: No module named 'auto_gptq' 
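
For reference, a sketch of the manual workaround mentioned above, applied to the running container (the service name api is taken from the compose file; assumes the Compose v1 CLI used above):

docker-compose exec api pip install auto_gptq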
