Building llama-cpp-python with CUDA support fails due to GLIBC version incompatibility #2032
-
Hello,

**Issue Summary**
Building llama-cpp-python with CUDA support fails due to a GLIBC version incompatibility.

**Environment Details**
OS: Ubuntu Linux

**Root Cause**
The CUDA library `libcublasLt.so.12` requires GLIBC symbols such as `log2f@GLIBC_2.27` (i.e., GLIBC 2.27+).

**Build Command Used**
`CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" pip install -e . --verbose`

**What's Been Tried**
Using system CUDA instead of conda CUDA: same error.

The build progresses successfully until the final linking stage for the vision tools. How do I resolve GLIBC version conflicts when building llama-cpp-python with CUDA?

Regards,
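For anyone hitting the same error, a quick way to confirm the mismatch is to compare the GLIBC your system ships against the versions the CUDA library actually references. A minimal sketch; the library path below is an assumption, so locate `libcublasLt.so.12` on your own system first:

```bash
# System GLIBC version (first line of the ldd banner):
ldd --version | head -n 1

# GLIBC versions referenced by the CUDA library; the path here is an
# assumption -- find yours with: find / -name 'libcublasLt.so.12' 2>/dev/null
objdump -T /usr/local/cuda/lib64/libcublasLt.so.12 \
  | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n 5
```

If the highest version printed (e.g. `GLIBC_2.27`) is newer than what `ldd --version` reports, the link failure described in this thread is expected.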
-
I made a post with a guide on compiling/installing for (Fedora) Linux; it might be helpful.
-
Ugh, I actually ran into this exact issue a while back while playing around with llama-cpp-python and trying to enable CUDA support. I didn't realize how tricky the GLIBC version stuff can get on some distros. Did anyone here manage to work around it without upgrading the whole system? I'd love to hear what worked for others.
-
(tf) oba@mail:~/llama-cpp$ git clone --recursive https://github.com/JamePeng/llama-cpp-python.git
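The reply above clones a fork; presumably the next step is to build it from source. As a sketch of the follow-on commands, assuming the same `CMAKE_ARGS` as the original post (the fork's recommended flags may differ):

```bash
cd llama-cpp-python
# Same flags as the original post; adjust CUDA architectures for your GPU.
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" \
  pip install -e . --verbose
```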
Try my solution; it should work for any distro, since I'm using a Conda env with the right tools: #2043
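The actual steps live in #2043; as a rough sketch of what a Conda-based toolchain setup typically looks like (the channel and package names below are assumptions, not quoted from that post):

```bash
# Hypothetical sketch: build inside a Conda env that carries its own
# compiler toolchain and CUDA, reducing dependence on the host's setup.
conda create -n llama-build -c conda-forge python=3.11 \
    gcc_linux-64 gxx_linux-64 cmake ninja
conda activate llama-build
# CUDA toolkit from NVIDIA's conda channel:
conda install -c nvidia cuda-toolkit
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" \
  pip install llama-cpp-python --verbose
```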