chore: ⬆️ Update ggml-org/llama.cpp to 4ccea213bc629c4eef7b520f7f6c59ce9bbdaca0
#6685
Annotations
1 error
extras-image-build (cublas, 12, 0, linux/amd64, false, -cublas-cuda12-ffmpeg, true, extras, arc-r... / reusable_image-build
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${BUILD_TYPE}\" = \"cublas\" ] || [ \"${BUILD_TYPE}\" = \"hipblas\" ]; then SKIP_GRPC_BACKEND=\"backend-assets/grpc/llama-cpp-avx512 backend-assets/grpc/llama-cpp-avx backend-assets/grpc/llama-cpp-avx2\" make build; else make build; fi" did not complete successfully: exit code: 2
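For readability, here is a minimal sketch of the conditional that the failing Dockerfile RUN step executes, reconstructed from the command quoted in the log above. `BUILD_TYPE`, `SKIP_GRPC_BACKEND`, and `make build` are taken verbatim from the error; the surrounding Dockerfile context is not shown here, and exit code 2 comes from whatever `make build` does inside this branch.

```sh
# Reconstructed from the /bin/sh -c command in the buildx error above (not the full Dockerfile).
# cublas/hipblas images set SKIP_GRPC_BACKEND so the AVX-only llama.cpp gRPC backends are not built.
if [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "hipblas" ]; then
    SKIP_GRPC_BACKEND="backend-assets/grpc/llama-cpp-avx512 backend-assets/grpc/llama-cpp-avx backend-assets/grpc/llama-cpp-avx2" make build
else
    make build
fi
```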
Artifacts
Produced during runtime
Name | Size | Digest
---|---|---
mudler~LocalAI~JKN5MK.dockerbuild | 188 KB | sha256:453db50e14ee3101cfad49047e6db09195a3114c95e40507549ba2748e0a7804