Status: Closed
Labels: SYCL (GPU programming language), bug-unconfirmed, medium severity (malfunctioning feature but still usable), stale
Description
What happened?
Currently on Fedora 40 with Intel Arc A750.
Running the following:
ZES_ENABLE_SYSMAN=1 ./build/bin/llama-server \
-t 10 \
-ngl 20 \
-b 512 \
--ctx-size 16384 \
-m ~/llama-models/llama-2-7b.Q4_0.gguf \
--color -c 3400 \
--seed 42 \
--temp 0.8 \
--top_k 5 \
--repeat_penalty 1.1 \
--host :: \
--port 8080 \
-n -1 \
-sm none -mg 0
gives the following output:
INFO [ main] build info | tid="140466031364096" timestamp=1721277895 build=3411 commit="e02b597b"
INFO [ main] system info | tid="140466031364096" timestamp=1721277895 n_threads=10 n_threads_batch=-1 total_threads=28 system_info="AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | "
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /home/jacob/llama-models/llama-2-7b.Q4_0.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
ggml_sycl_init: GGML_SYCL_FORCE_MMQ: no
ggml_sycl_init: SYCL_USE_XMX: yes
ggml_sycl_init: found 5 SYCL devices:
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 20 repeating layers to GPU
llm_load_tensors: offloaded 20/33 layers to GPU
llm_load_tensors: SYCL0 buffer size = 2171.88 MiB
llm_load_tensors: CPU buffer size = 3647.87 MiB
.................................................................................................
llama_new_context_with_model: n_ctx = 3424
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
[SYCL] call ggml_check_sycl
ggml_check_sycl: GGML_SYCL_DEBUG: 0
ggml_check_sycl: GGML_SYCL_F16: no
found 5 SYCL devices:
| | | | |Max | |Max |Global | |
| | | | |compute|Max work|sub |mem | |
|ID| Device Type| Name|Version|units |group |group|size | Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]| Intel Arc A750 Graphics| 1.3| 448| 1024| 32| 8096M| 1.3.28717|
| 1| [level_zero:gpu:1]| Intel UHD Graphics 770| 1.3| 32| 512| 32| 62662M| 1.3.28717|
| 2| [opencl:gpu:0]| Intel Arc A750 Graphics| 3.0| 448| 1024| 32| 8096M| 24.09.28717.17|
| 3| [opencl:gpu:1]| Intel UHD Graphics 770| 3.0| 32| 512| 32| 62662M| 24.09.28717.17|
| 4| [opencl:cpu:0]| Intel Core i7-14700K| 3.0| 28| 8192| 64| 67164M|2024.18.6.0.02_160000|
llama_kv_cache_init: SYCL0 KV buffer size = 1070.00 MiB
llama_kv_cache_init: SYCL_Host KV buffer size = 642.00 MiB
llama_new_context_with_model: KV self size = 1712.00 MiB, K (f16): 856.00 MiB, V (f16): 856.00 MiB
llama_new_context_with_model: SYCL_Host output buffer size = 0.24 MiB
llama_new_context_with_model: SYCL0 compute buffer size = 300.50 MiB
llama_new_context_with_model: SYCL_Host compute buffer size = 22.69 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 136
InvalidModule: Invalid SPIR-V module: input SPIR-V module uses extension 'SPV_INTEL_memory_access_aliasing' which were disabled by --spirv-ext option
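As an aside, the memory figures in the log look self-consistent, so the failure doesn't appear to be an allocation problem. A quick sanity check (values taken from the log above; f16 KV cache, no GQA on this model so the K/V width equals n_embd):

```python
# Recompute the KV-cache sizes printed by llama_new_context_with_model.
n_layer = 32        # llama.block_count
n_ctx = 3424        # n_ctx after padding (requested 3400)
n_embd_kv = 4096    # n_embd_k_gqa == n_embd_v_gqa
bytes_f16 = 2

# One K cache and one V cache per layer, both f16
kv_bytes = 2 * n_layer * n_ctx * n_embd_kv * bytes_f16
kv_mib = kv_bytes / 1024**2
print(f"KV self size = {kv_mib:.2f} MiB")          # 1712.00, matches the log

# 20 of 32 layers are offloaded, so the KV split is proportional
print(f"SYCL0     KV = {kv_mib * 20 / 32:.2f} MiB")  # 1070.00
print(f"SYCL_Host KV = {kv_mib * 12 / 32:.2f} MiB")  # 642.00
```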
I'm not sure where the --spirv-ext option is set, but it seems to be a compiler flag. What can I do to fix this?
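Since five SYCL devices are found across both the Level Zero and OpenCL backends, one possible workaround (untested on my side; ONEAPI_DEVICE_SELECTOR is the standard oneAPI runtime device filter, and whether the OpenCL path is actually the one rejecting the extension is an assumption) might be to pin the run to the Level Zero Arc device so the OpenCL SPIR-V consumer is never exercised:

```shell
# Assumption: the SPIR-V rejection comes from an OpenCL backend; restrict
# the SYCL runtime to Level Zero device 0 (the Arc A750 in the table above).
export ONEAPI_DEVICE_SELECTOR="level_zero:0"
echo "selector=$ONEAPI_DEVICE_SELECTOR"
# Then rerun the same command:
# ZES_ENABLE_SYSMAN=1 ./build/bin/llama-server ... -sm none -mg 0
```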
I ran the following to set this up, and the build did not fail:
# Export relevant ENV variables
source /opt/intel/oneapi/setvars.sh
# Build llama.cpp with MKL BLAS acceleration for Intel GPU
# Option 1: Use FP32 (recommended for better performance in most cases)
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
# Option 2: Use FP16
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL_F16=ON
# Build all binaries
cmake --build build --config Release -j -v
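One pitfall with these steps is that setvars.sh must be sourced in the same shell that builds and runs the binaries. A quick check that the oneAPI environment is actually active (ONEAPI_ROOT is set by setvars.sh; the path shown is the default install location):

```shell
# Verify the oneAPI environment before building or running
if [ -n "$ONEAPI_ROOT" ]; then
  echo "oneAPI environment active: $ONEAPI_ROOT"
else
  echo "oneAPI environment NOT active - run: source /opt/intel/oneapi/setvars.sh"
fi
```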
This is my result for clinfo:
Platform #0: Intel(R) OpenCL
`-- Device #0: Intel(R) Core(TM) i7-14700K
Platform #1: Intel(R) OpenCL Graphics
`-- Device #0: Intel(R) Arc(TM) A750 Graphics
Platform #2: Intel(R) OpenCL Graphics
`-- Device #0: Intel(R) UHD Graphics 770
This is my result for ./build/bin/llama-ls-sycl-device:
found 5 SYCL devices:
| | | | |Max | |Max |Global | |
| | | | |compute|Max work|sub |mem | |
|ID| Device Type| Name|Version|units |group |group|size | Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]| Intel Arc A750 Graphics| 1.3| 448| 1024| 32| 8096M| 1.3.28717|
| 1| [level_zero:gpu:1]| Intel UHD Graphics 770| 1.3| 32| 512| 32| 62662M| 1.3.28717|
| 2| [opencl:gpu:0]| Intel Arc A750 Graphics| 3.0| 448| 1024| 32| 8096M| 24.09.28717.17|
| 3| [opencl:gpu:1]| Intel UHD Graphics 770| 3.0| 32| 512| 32| 62662M| 24.09.28717.17|
| 4| [opencl:cpu:0]| Intel Core i7-14700K| 3.0| 28| 8192| 64| 67164M|2024.18.6.0.02_160000|
Name and Version
~/llama.cpp$ ./build/bin/llama-server --version
version: 3411 (e02b597)
built with Intel(R) oneAPI DPC++/C++ Compiler 2024.2.0 (2024.2.0.20240602) for x86_64-unknown-linux-gnu
What operating system are you seeing the problem on?
Linux
Relevant log output
No response