chore(deps): update dependency llama-cpp-python to v0.2.87 #44
Annotations
4 errors
Build image (./chat/base/Containerfile, ai-lab-playground-chat, amd64, arm64)
Error: buildah exited with code 1
Trying to pull registry.access.redhat.com/ubi9-minimal:9.4-1194...
Getting image source signatures
Copying blob sha256:247c2d03e9487628cb6754ff5385a670df160f7bba36af8fc1f2066e461dc36e
Copying blob sha256:247c2d03e9487628cb6754ff5385a670df160f7bba36af8fc1f2066e461dc36e
Copying config sha256:9e74a726167b225b2c3d295684801426149133510ce1a75cee354b408f8f4973
Writing manifest to image destination
Storing signatures
(microdnf:2): librhsm-WARNING **: 00:40:41.668: Found 0 entitlement certificates
(microdnf:2): librhsm-WARNING **: 00:40:41.671: Found 0 entitlement certificates
Created symlink /etc/systemd/system/sockets.target.wants/dbus.socket → /usr/lib/systemd/system/dbus.socket.
Created symlink /etc/systemd/user/sockets.target.wants/dbus.socket → /usr/lib/systemd/user/dbus.socket.
Created symlink /etc/systemd/system/dbus.service → /usr/lib/systemd/system/dbus-broker.service.
Created symlink /etc/systemd/user/dbus.service → /usr/lib/systemd/user/dbus-broker.service.
(microdnf:1): librhsm-WARNING **: 00:40:46.901: Found 0 entitlement certificates
(microdnf:1): librhsm-WARNING **: 00:40:46.904: Found 0 entitlement certificates
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 /usr/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp1zm1pbcw
cwd: /tmp/pip-install-wvkso1q9/llama-cpp-python_8bd71cf518d04f7b98f73e3b630a4153
Complete output (85 lines):
*** scikit-build-core 0.10.0 using CMake 3.30.2 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpuvls29hd/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.1
-- The CXX compiler identification is GNU 11.4.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Could NOT find Git (missing: GIT_EXECUTABLE)
CMake Warning at vendor/llama.cpp/cmake/build-info.cmake:14 (message):
Git not found. Build info will not be accurate.
Call Stack (most recent call first):
vendor/llama.cpp/CMakeLists.txt:74 (include)
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:17 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:74 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:17 (install):
Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:74 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev
[remainder of this build log truncated by the annotation view]
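One thing distinguishes this amd64 base build from the other three: CMake reports "Could NOT find Git", so llama.cpp's build-info falls back to inaccurate values. That is only a warning — the actual compile error is cut off above — but if missing build tooling in the ubi9-minimal stage is suspected, a sketch of the relevant Containerfile lines might look like the following. This is an assumption-level illustration: the base image tag is copied from the log, but the package list and the contents of the real `./chat/base/Containerfile` are not.

```dockerfile
# Hedged sketch only — not the project's actual Containerfile.
# Installs the tools the log shows CMake probing for (gcc, g++, git);
# package names are assumptions about what ubi9-minimal's repos provide.
FROM registry.access.redhat.com/ubi9-minimal:9.4-1194
RUN microdnf install -y git gcc gcc-c++ cmake python3-devel && \
    microdnf clean all
```

Installing git would only clear the build-info warning; whether it affects the truncated failure cannot be told from this log.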
Build image (./chat/vulkan/amd64/Containerfile, ai-lab-playground-chat-vulkan, amd64)
Error: buildah exited with code 1
Trying to pull registry.access.redhat.com/ubi9/python-311:1-72.1722518949...
Getting image source signatures
Copying blob sha256:c4660e853dd713329dc8ce01d6d31e9bf08d8773b0bb31c21ccfae5fcab3dc5d
Copying blob sha256:cc296d75b61273dcb0db7527435a4c3bd03f7723d89a94d446d3d52849970460
Copying blob sha256:db22e630b1c7cf081461536c489254a8d1b39ceda32f8f3025314f032860d984
Copying blob sha256:9f2fbf79f82326e925352def0dbdccc800cb9da2fe0125c4d1c33a9cbfc1b629
Copying blob sha256:cc296d75b61273dcb0db7527435a4c3bd03f7723d89a94d446d3d52849970460
Copying blob sha256:9f2fbf79f82326e925352def0dbdccc800cb9da2fe0125c4d1c33a9cbfc1b629
Copying blob sha256:c4660e853dd713329dc8ce01d6d31e9bf08d8773b0bb31c21ccfae5fcab3dc5d
Copying blob sha256:db22e630b1c7cf081461536c489254a8d1b39ceda32f8f3025314f032860d984
Copying config sha256:4147737e222a93ee41758c40abfb2ded77ecd626a9a94604fa5ccb2640cc0d2a
Writing manifest to image destination
Storing signatures
Enabling a Copr repository. Please note that this repository is not part
of the main distribution, and quality may vary.
The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
<https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>,
and packages are not held to any quality or security level.
Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.
Importing GPG key 0xFA8FEACD:
Userid : "ligenix_enterprise-sandbox (None) <ligenix#[email protected]>"
Fingerprint: 4845 01A6 F5CF C114 350F ED13 8DA2 CDEA FA8F EACD
From : https://download.copr.fedorainfracloud.org/results/ligenix/enterprise-sandbox/pubkey.gpg
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [80 lines of output]
*** scikit-build-core 0.10.0 using CMake 3.26.5 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpwb57prkj/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.1
-- The CXX compiler identification is GNU 11.4.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.43.5")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:17 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack
[remainder of this build log truncated by the annotation view]
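The ccache warning in this block names its own off switch, `GGML_CCACHE=OFF`. llama-cpp-python's documented way to pass CMake options through a pip build is the `CMAKE_ARGS` environment variable, so a hedged Containerfile fragment to silence that warning could look like this. Note this only quiets the warning — it is not a fix for the truncated build error, and the surrounding Containerfile content is assumed, not taken from `./chat/vulkan/amd64/Containerfile`:

```dockerfile
# Hedged sketch: GGML_CCACHE=OFF comes from the log's own warning text;
# routing it via CMAKE_ARGS is llama-cpp-python's documented mechanism.
ENV CMAKE_ARGS="-DGGML_CCACHE=OFF"
RUN pip install --no-cache-dir llama-cpp-python==0.2.87
```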
Build image (./chat/vulkan/arm64/Containerfile, ai-lab-playground-chat-vulkan, arm64)
Error: buildah exited with code 1
Trying to pull registry.access.redhat.com/ubi9/python-311:1-72.1722518949...
Getting image source signatures
Copying blob sha256:f37989a2a8585202cf1fc1f51d1629a4c4695a808ab938dd08627a9a8d291f10
Copying blob sha256:c6f5847a7d5ebb6d4fe76423af992ea0acefca3cbf204f5e1004116397e1f65e
Copying blob sha256:7594a545ac0ef3eb3c8367a508f6976ea1490cb83c32b11c01a87fef6723c8f7
Copying blob sha256:679da2159f24d706632e5ced43625e0acf98deaea69b0a3d52848943764c39a1
Copying blob sha256:7594a545ac0ef3eb3c8367a508f6976ea1490cb83c32b11c01a87fef6723c8f7
Copying blob sha256:c6f5847a7d5ebb6d4fe76423af992ea0acefca3cbf204f5e1004116397e1f65e
Copying blob sha256:f37989a2a8585202cf1fc1f51d1629a4c4695a808ab938dd08627a9a8d291f10
Copying blob sha256:679da2159f24d706632e5ced43625e0acf98deaea69b0a3d52848943764c39a1
Copying config sha256:97dd5978fad88a4a9ff4da8ad54cc23235a426688860afa79ed73fb6610f744b
Writing manifest to image destination
Storing signatures
Enabling a Copr repository. Please note that this repository is not part
of the main distribution, and quality may vary.
The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
<https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>,
and packages are not held to any quality or security level.
Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.
Importing GPG key 0x7BA6947F:
Userid : "slp_mesa-krunkit (None) <slp#[email protected]>"
Fingerprint: C962 C887 AE35 6588 B601 6773 6E54 C94F 7BA6 947F
From : https://download.copr.fedorainfracloud.org/results/slp/mesa-krunkit/pubkey.gpg
Importing GPG key 0x3228467C:
Userid : "Fedora (epel9) <[email protected]>"
Fingerprint: FF8A D134 4597 106E CE81 3B91 8A38 72BF 3228 467C
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [82 lines of output]
*** scikit-build-core 0.10.0 using CMake 3.26.5 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpoo43y31o/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.1
-- The CXX compiler identification is GNU 11.4.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.43.5")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:17 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first
[remainder of this build log truncated by the annotation view]
Build image (./chat/cuda/amd64/Containerfile, ai-lab-playground-chat-cuda, amd64)
Error: buildah exited with code 1
Trying to pull quay.io/opendatahub/workbench-images:cuda-ubi9-python-3.9-20231206...
Getting image source signatures
Copying blob sha256:bee86f8257632eeaa07cebb2436ccab03b967017e1ef485a4525ae5991f0ee33
Copying blob sha256:7cb554c593ec96d7901a6f99e4c4d3b45976d92e0aa4f24db8f876ba68903fcb
Copying blob sha256:b824f4b30c465e487e640bdc22e46bafd6983e4e0eabf30085cacf945c261160
Copying blob sha256:5f7d6ade1ce7871adb550730033e6696e928aaafea518b98a7f2c7cb89eda124
Copying blob sha256:2c6c6493f94e4d1f481a0976dec432e3b7c95f1ba764f7a0033b995670112ad7
Copying blob sha256:a64827a24ae8ee62038a21834d13766a06c1526b54c86ac6b260c699820220bc
Copying blob sha256:bee86f8257632eeaa07cebb2436ccab03b967017e1ef485a4525ae5991f0ee33
Copying blob sha256:b824f4b30c465e487e640bdc22e46bafd6983e4e0eabf30085cacf945c261160
Copying blob sha256:a64827a24ae8ee62038a21834d13766a06c1526b54c86ac6b260c699820220bc
Copying blob sha256:2c6c6493f94e4d1f481a0976dec432e3b7c95f1ba764f7a0033b995670112ad7
Copying blob sha256:7cb554c593ec96d7901a6f99e4c4d3b45976d92e0aa4f24db8f876ba68903fcb
Copying blob sha256:5f7d6ade1ce7871adb550730033e6696e928aaafea518b98a7f2c7cb89eda124
Copying config sha256:1c79c1fc89c6ffbe76e2b62ee55f965fc87fdca64203dda12444b33b9bb1e147
Writing manifest to image destination
Storing signatures
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [194 lines of output]
*** scikit-build-core 0.10.0 using CMake 3.30.2 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpzpycgt7s/build/CMakeInit.txt
-- The C compiler identification is GNU 13.3.1
-- The CXX compiler identification is GNU 13.3.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/rh/gcc-toolset-13/root/usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/rh/gcc-toolset-13/root/usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.39.3")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- Found CUDAToolkit: /usr/local/cuda/targets/x86_64-linux/include (found version "11.8.89")
-- CUDA found
-- Using CUDA architectures: 52;61;70;75
-- The CUDA compiler identification is NVIDIA 11.8.89
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- CUDA host compiler is GNU 11.4.1
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:17 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:
[remainder of this build log truncated by the annotation view]
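All four failures share the same shape: the annotation view cuts each log off while it is still printing benign `-Wno-dev` CMake warnings, before the actual compiler or linker error appears. When rerunning the build locally with full output, a small filter helps surface the real error lines. This is a hypothetical helper, not part of the build; the patterns are heuristics chosen from the error banners visible in this log (`ERROR:`, `error: subprocess-exited-with-error`), not an exhaustive list.

```python
# Hedged sketch: pull likely hard-error lines out of a full pip/CMake build
# log, skipping the CMake dev warnings that dominate the truncated output above.
import re

ERROR_PATTERNS = [
    re.compile(r"^\s*ERROR:"),             # old-style pip error banner
    re.compile(r"\berror:", re.IGNORECASE),  # compiler / pip subprocess errors
    re.compile(r"CMake Error"),            # CMake hard errors
]

def extract_errors(log_text: str) -> list[str]:
    """Return log lines matching any error pattern; warning lines are skipped."""
    hits = []
    for line in log_text.splitlines():
        if "Warning" in line:
            continue  # e.g. "CMake Warning (dev) ..." — benign in this log
        if any(p.search(line) for p in ERROR_PATTERNS):
            hits.append(line.rstrip())
    return hits
```

Running this over a complete local log (e.g. from `pip wheel --verbose llama-cpp-python==0.2.87`) should leave only the handful of lines the annotation view truncated away.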