
chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6 #6672


Triggered via pull request April 5, 2025 20:06
Status Failure
Total duration 1h 10m 21s
Artifacts 1

image-pr.yml

on: pull_request
Matrix: extras-image-build

Annotations

1 error
extras-image-build (cublas, 12, 0, linux/amd64, false, -cublas-cuda12-ffmpeg, true, extras, arc-r... / reusable_image-build
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${BUILD_TYPE}\" = \"cublas\" ] || [ \"${BUILD_TYPE}\" = \"hipblas\" ]; then SKIP_GRPC_BACKEND=\"backend-assets/grpc/llama-cpp-avx512 backend-assets/grpc/llama-cpp-avx backend-assets/grpc/llama-cpp-avx2\" make build; else make build; fi" did not complete successfully: exit code: 2
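The failing `RUN` step, reconstructed from the buildx error above, branches on `BUILD_TYPE`: GPU images (`cublas`/`hipblas`) set `SKIP_GRPC_BACKEND` to omit the CPU-only llama.cpp gRPC backends before building. A minimal standalone sketch of that branch logic (with `make build` replaced by `echo` so it runs outside the Dockerfile; the real step invokes LocalAI's Makefile):

```shell
# BUILD_TYPE comes from the image-build matrix (cublas, hipblas, ...);
# "cublas" matches the failing matrix entry above.
BUILD_TYPE="cublas"

if [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "hipblas" ]; then
  # GPU images skip building the CPU-only llama.cpp variants.
  SKIP_GRPC_BACKEND="backend-assets/grpc/llama-cpp-avx512 backend-assets/grpc/llama-cpp-avx backend-assets/grpc/llama-cpp-avx2"
  echo "GPU build, skipping: ${SKIP_GRPC_BACKEND}"
else
  echo "CPU build, building all llama.cpp backends"
fi
```

Exit code 2 here is the conventional `make` failure status, so the build broke inside `make build` rather than in the shell conditional itself.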

Artifacts

Produced during runtime
Name: mudler~LocalAI~H72M4Y.dockerbuild
Size: 189 KB
Digest: sha256:9e84a506e1a3108be791c377f03df80e68e40e4ced8c02909e57a4a1083fcb2d