cmake: regen vulkan shaders when shaders-gen sources change #14398


Merged: 2 commits, Jun 26, 2025

Conversation

@bandoti (Collaborator) commented Jun 26, 2025

Because CMake's external projects do not provide granular source-level dependency tracking, the C++ sources of vulkan-shaders-gen must be listed explicitly as dependencies so that changes to them cascade and force shader regeneration.

This is because external targets provide only a generic, higher-level means of building an external project. The vulkan-shaders-gen project is conceptually an "in-source" build, but it is kept external so that it still works when cross-compiling.
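A minimal sketch of the idea (the variable names, paths, and command line below are illustrative placeholders, not the exact code in the tree): the generator's own C++ sources are listed in the `DEPENDS` of the custom command that produces the shader sources, so touching the generator re-triggers shader generation.

```cmake
# Illustrative sketch only — names and flags are assumptions, not the
# actual llama.cpp build code. The key point is that the generator's C++
# sources appear in DEPENDS, so editing vulkan-shaders-gen.cpp makes the
# custom command (and therefore shader generation) re-run.
set(VULKAN_SHADER_GEN_SOURCES
    ${CMAKE_CURRENT_SOURCE_DIR}/vulkan-shaders/vulkan-shaders-gen.cpp)

add_custom_command(
    OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/ggml-vulkan-shaders.cpp
    COMMAND ${VULKAN_SHADER_GEN_EXECUTABLE}          # hypothetical variable
    DEPENDS ${VULKAN_SHADER_GEN_SOURCES}             # generator sources (the fix)
            ${VULKAN_SHADER_INPUT_FILES}             # the .comp shader inputs
    COMMENT "Generating Vulkan shaders")
```

Without the generator sources in `DEPENDS`, CMake considers the generated output up to date even after the generator itself changes, which is exactly the stale-shader problem this PR addresses.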

@bandoti bandoti requested a review from jeffbolznv June 26, 2025 15:40
@bandoti (Collaborator, Author) commented Jun 26, 2025

Note that I originally tried to add the step-targets back, but after some experimentation found that they are still not necessary. My original thinking was more in line with a "regular" CMake target. Hopefully this fix does the trick!
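For context, the step-targets mentioned here refer to CMake's `ExternalProject` step targets, which expose an external project's individual steps (configure, build, etc.) as named targets that other targets can depend on. A rough sketch, with assumed target and directory names:

```cmake
# Hypothetical sketch — names are assumptions. ExternalProject_Add_StepTargets
# creates per-step targets (e.g. vulkan-shaders-gen-build) that downstream
# targets could depend on; per the comment above, they turned out to be
# unnecessary once the generator sources were tracked directly.
include(ExternalProject)

ExternalProject_Add(vulkan-shaders-gen
    SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/vulkan-shaders
    INSTALL_COMMAND "")

# Exposes vulkan-shaders-gen-configure and vulkan-shaders-gen-build as targets.
ExternalProject_Add_StepTargets(vulkan-shaders-gen configure build)
```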

@jeffbolznv (Collaborator) commented:

I tested touching vulkan-shaders-gen.cpp and a shader (separately) and each did the correct rebuild, so this works for me.

@github-actions bot added labels: Vulkan (issues specific to the Vulkan backend), ggml (changes relating to the ggml tensor library for machine learning) — Jun 26, 2025
@bandoti bandoti merged commit a01047b into ggml-org:master Jun 26, 2025
45 of 48 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jun 27, 2025
* mamba2-sync: (22 commits)
recurrent : call balloc split_reset() in init_batch() (ggml-org#14414)
ggml : add ggml_set_rows (ggml-org#14274)
convert : fix broken sentencepiece vocab (ggml-org#14416)
mamba : fix mismatched new and delete size for llm_build_mamba
model : gemma3n text-only (ggml-org#14400)
cmake: regen vulkan shaders when shaders-gen sources change (ggml-org#14398)
llama : return mistral-v7-tekken as default template only (ggml-org#14390)
metal : add special-case mat-vec mul for ne00 == 4 (ggml-org#14385)
metal : batch rows copy in a single threadgroup (ggml-org#14384)
docs: update s390x documentation + add faq (ggml-org#14389)
musa: enable fp16 mma (all) and cublas on qy2 (ggml-org#13842)
ggml-cpu: enable IBM NNPA Vector Intrinsics (ggml-org#14317)
ggml : do not output unprintable characters on GGUF load failure (ggml-org#14381)
sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (ggml-org#13973)
opencl: ref count `ggml_backend_opencl_context` and refactor profiling (ggml-org#14254)
batch : fix check for empty sequences in memory (ggml-org#14364)
cmake : use LLAMA_BUILD_NUMBER when defining LLAMA_INSTALL_VERSION (ggml-org#14362)
server : move no API key doc to /health (ggml-org#14352)
main : honor --verbose-prompt on interactive prompts (ggml-org#14350)
jinja : Add Mistral-Small-3.2-24B-Instruct-2506.jinja (ggml-org#14349)
...