chore: ⬆️ Update ggml-org/llama.cpp to 4ccea213bc629c4eef7b520f7f6c59ce9bbdaca0 #6685
Triggered via pull request April 7, 2025 20:07
Status Failure
Total duration 1h 9m 21s
Artifacts 1

image-pr.yml

on: pull_request
Matrix: extras-image-build

Annotations

1 error
extras-image-build (cublas, 12, 0, linux/amd64, false, -cublas-cuda12-ffmpeg, true, extras, arc-r... / reusable_image-build
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${BUILD_TYPE}\" = \"cublas\" ] || [ \"${BUILD_TYPE}\" = \"hipblas\" ]; then SKIP_GRPC_BACKEND=\"backend-assets/grpc/llama-cpp-avx512 backend-assets/grpc/llama-cpp-avx backend-assets/grpc/llama-cpp-avx2\" make build; else make build; fi" did not complete successfully: exit code: 2
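The failing RUN step is the shell conditional embedded in the error above. A reformatted, standalone sketch of that conditional is shown below; `make` is stubbed with a shell function here so the snippet runs on its own (the real step invokes the repository's Makefile, and `BUILD_TYPE` is a Docker build argument):

```shell
#!/bin/sh
# Sketch of the RUN step from the buildx error above. "make" is stubbed so
# the snippet is self-contained; the real build runs the repo's Makefile.
make() { echo "make $*"; }

BUILD_TYPE="cublas"   # Docker build arg; "hipblas" takes the same branch

if [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "hipblas" ]; then
  # GPU images skip building the CPU-only llama.cpp gRPC backend variants
  SKIP_GRPC_BACKEND="backend-assets/grpc/llama-cpp-avx512 backend-assets/grpc/llama-cpp-avx backend-assets/grpc/llama-cpp-avx2" \
    make build
else
  make build
fi
```

The exit code 2 reported by buildx comes from whichever `make build` branch was taken, so the root cause is inside the build itself (here, compiling the updated llama.cpp revision), not in this conditional.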

Artifacts

Produced during runtime

Name: mudler~LocalAI~JKN5MK.dockerbuild
Size: 188 KB
Digest: sha256:453db50e14ee3101cfad49047e6db09195a3114c95e40507549ba2748e0a7804