chore: ⬆️ Update ggml-org/llama.cpp to 8e186ef0e764c7a620e402d1f76ebad60bf31c49
#11542
release.yaml
on: pull_request
| Job | Duration |
|---|---|
| build-linux-arm | 30m 6s |
| build-linux | 2h 59m |
| build-macOS-x86_64 | 47m 21s |
| build-macOS-arm64 | 22m 37s |
Artifacts
Produced during runtime
| Name | Size | Digest |
|---|---|---|
| LocalAI-MacOS-arm64 | 77.4 MB | sha256:8a15da93db053350e3a377051fa94a8c61c2ff207966d4537a1f3d97eb2e2f0e |
| LocalAI-MacOS-x86_64 | 76.1 MB | sha256:fbf78d6ed9a087fde3f98d8515a817c52e5ad1cfafb7a0c79be557d768ea6994 |
| LocalAI-linux | 467 MB | sha256:22feabe6dd99067626edd73de57ee36ace7659645b714a0796f602ce2a695021 |
| LocalAI-linux-arm64 | 52 MB | sha256:c93368bbf4a4ba9ce6141a2b36b05b9e67594fa6ab32b174d92078c432c75024 |
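To sanity-check a downloaded artifact against the digests listed above, a streaming SHA-256 comparison is enough. The sketch below is a minimal example, not part of the LocalAI tooling: the script name, the local file path passed on the command line, and the assumption that the published digest applies to the downloaded file as-is are all assumptions.

```python
import hashlib
import sys

# Expected digests copied from the artifacts table above (hex part only).
EXPECTED = {
    "LocalAI-MacOS-arm64": "8a15da93db053350e3a377051fa94a8c61c2ff207966d4537a1f3d97eb2e2f0e",
    "LocalAI-MacOS-x86_64": "fbf78d6ed9a087fde3f98d8515a817c52e5ad1cfafb7a0c79be557d768ea6994",
    "LocalAI-linux": "22feabe6dd99067626edd73de57ee36ace7659645b714a0796f602ce2a695021",
    "LocalAI-linux-arm64": "c93368bbf4a4ba9ce6141a2b36b05b9e67594fa6ab32b174d92078c432c75024",
}


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large builds (e.g. the 467 MB Linux artifact) never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # Hypothetical usage: python verify_artifact.py LocalAI-linux ./LocalAI-linux
    name, path = sys.argv[1], sys.argv[2]
    actual = sha256_of(path)
    if actual == EXPECTED[name]:
        print(f"{name}: OK")
    else:
        print(f"{name}: MISMATCH (got sha256:{actual})")
        sys.exit(1)
```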