Description
I have this issue when I try to run the vicuna-13b model:
```
Attempting to Load...
System Info: AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
llama.cpp: loading model from G:\AI_and_data\LLAMA_models\LLama_for_windows\ggml-vicuna-13b-4bit.bin
llama_model_load_internal: format = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
Legacy LLAMA GGJT compatability changes triggered.
llama_model_load_internal: ggml ctx size = 90.75 KB
llama_model_load_internal: mem required = 9807.49 MB (+ 1608.00 MB per state)
Traceback (most recent call last):
File "koboldcpp.py", line 648, in
File "koboldcpp.py", line 578, in main
File "koboldcpp.py", line 161, in load_model
OSError: [WinError -1073741795] Windows Error 0xc000001d
[340] Failed to execute script 'koboldcpp' due to unhandled exception!
```
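For context, the negative `WinError -1073741795` is just the 32-bit NTSTATUS code `0xC000001D` (STATUS_ILLEGAL_INSTRUCTION) interpreted as a signed integer, i.e. the process executed a CPU instruction the machine doesn't support. A minimal sketch of that conversion:

```python
# The OSError reports the NTSTATUS code as a signed 32-bit integer.
# Masking with 0xFFFFFFFF recovers the unsigned hex value Windows shows.
winerror = -1073741795
ntstatus = winerror & 0xFFFFFFFF

print(hex(ntstatus))  # → 0xc000001d

# 0xC000001D is STATUS_ILLEGAL_INSTRUCTION: the binary attempted an
# instruction the CPU cannot execute.
assert ntstatus == 0xC000001D
```

Given the `System Info` line shows `AVX2 = 1` but `AVX512 = 0`, a build compiled for an instruction set this CPU lacks would be consistent with this error.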