cmake : use LLAMA_BUILD_NUMBER when defining LLAMA_INSTALL_VERSION #14362


Merged

Conversation

mbaudier
Contributor

Use the CMake variable `LLAMA_BUILD_NUMBER` (which can be set externally) instead of `BUILD_NUMBER` when defining `LLAMA_INSTALL_VERSION`.
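A minimal sketch of what this change amounts to (the variable names `LLAMA_BUILD_NUMBER` and `LLAMA_INSTALL_VERSION` come from the PR; the surrounding version string and logic are illustrative, not the exact llama.cpp code):

```cmake
# LLAMA_BUILD_NUMBER can be supplied externally by packagers, e.g.:
#   cmake -DLLAMA_BUILD_NUMBER=1234 ..
# Before: the install version was derived from the internal BUILD_NUMBER
# variable, so an externally provided LLAMA_BUILD_NUMBER was ignored.
# After: derive it from LLAMA_BUILD_NUMBER so external overrides take effect.
set(LLAMA_INSTALL_VERSION 0.0.${LLAMA_BUILD_NUMBER})
```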

@github-actions github-actions bot added the build Compilation issues label Jun 24, 2025
@ckastner ckastner self-requested a review June 24, 2025 11:52
Collaborator

@ckastner ckastner left a comment


This is correct. I overlooked this in #14167.

@ckastner ckastner merged commit c148cf1 into ggml-org:master Jun 24, 2025
48 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jun 27, 2025
* mamba2-sync: (22 commits)
recurrent : call balloc split_reset() in init_batch() (ggml-org#14414)
ggml : add ggml_set_rows (ggml-org#14274)
convert : fix broken sentencepiece vocab (ggml-org#14416)
mamba : fix mismatched new and delete size for llm_build_mamba
model : gemma3n text-only (ggml-org#14400)
cmake: regen vulkan shaders when shaders-gen sources change (ggml-org#14398)
llama : return mistral-v7-tekken as default template only (ggml-org#14390)
metal : add special-case mat-vec mul for ne00 == 4 (ggml-org#14385)
metal : batch rows copy in a single threadgroup (ggml-org#14384)
docs: update s390x documentation + add faq (ggml-org#14389)
musa: enable fp16 mma (all) and cublas on qy2 (ggml-org#13842)
ggml-cpu: enable IBM NNPA Vector Intrinsics (ggml-org#14317)
ggml : do not output unprintable characters on GGUF load failure (ggml-org#14381)
sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (ggml-org#13973)
opencl: ref count `ggml_backend_opencl_context` and refactor profiling (ggml-org#14254)
batch : fix check for empty sequences in memory (ggml-org#14364)
cmake : use LLAMA_BUILD_NUMBER when defining LLAMA_INSTALL_VERSION (ggml-org#14362)
server : move no API key doc to /health (ggml-org#14352)
main : honor --verbose-prompt on interactive prompts (ggml-org#14350)
jinja : Add Mistral-Small-3.2-24B-Instruct-2506.jinja (ggml-org#14349)
...
Labels
build Compilation issues