
vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO #14427


Merged 1 commit into ggml-org:master on Jun 28, 2025

Conversation

jeffbolznv (Collaborator)

This setting needs to be passed through to vulkan-shaders-gen.
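
For context, GGML_VULKAN_SHADER_DEBUG_INFO is the build option that compiles the Vulkan shaders with debug information, and vulkan-shaders-gen is the host tool that invokes glslc at build time, so the option only has an effect if it actually reaches that tool. The sketch below is illustrative only: the function and variable names are hypothetical and not taken from the vulkan-shaders-gen source; it merely shows how such a flag typically gates adding `-g` to a glslc command line.

```cpp
// Hypothetical sketch: how a GGML_VULKAN_SHADER_DEBUG_INFO-style definition
// could gate debug info in a glslc invocation. Names are illustrative, not
// the actual vulkan-shaders-gen implementation.
#include <string>
#include <vector>

std::vector<std::string> build_glslc_cmd(const std::string & input,
                                         const std::string & output) {
    std::vector<std::string> cmd = {
        "glslc", "-fshader-stage=compute", input, "-o", output,
    };
#ifdef GGML_VULKAN_SHADER_DEBUG_INFO
    // Emit debug information so shader debuggers can map SPIR-V back to source.
    cmd.push_back("-g");
#endif
    return cmd;
}
```

If the definition is set only for the main ggml-vulkan target and never forwarded to the vulkan-shaders-gen build, a guard like the one above never fires and the shaders are generated without debug info, which is the situation this PR addresses.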
jeffbolznv requested review from 0cc4m and bandoti on June 27, 2025 at 20:10
github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on Jun 27, 2025
jeffbolznv merged commit ceb1bf5 into ggml-org:master on Jun 28, 2025
117 of 132 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request on Jun 30, 2025:
* origin/master:
metal : disable fast-math for some cpy kernels (ggml-org#14460)
ggml-cpu: sycl: Re-enable exp f16 (ggml-org#14462)
test-backend-ops : disable llama test (ggml-org#14461)
cmake : Remove redundant include path in CMakeLists.txt (ggml-org#14452)
scripts : make the shell scripts cross-platform (ggml-org#14341)
server : support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (ggml-org#13196)
server : fix appearance of the chats list context menu for Safari (ggml-org#14322)
SYCL: disable faulty fp16 exp kernel (ggml-org#14395)
ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (ggml-org#14443)
ggml : implement REGLU/GEGLU/SWIGLU ops (ggml-org#14158)
vulkan: Add fusion support for RMS_NORM+MUL (ggml-org#14366)
CUDA: add bf16 and f32 support to cublas_mul_mat_batched (ggml-org#14361)
vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (ggml-org#14378)
vulkan: lock accesses of pinned_memory vector (ggml-org#14333)
model : add support for ERNIE 4.5 0.3B model (ggml-org#14408)
fix async_mode bug (ggml-org#14432)
ci : fix windows build and release (ggml-org#14431)
vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (ggml-org#14427)
graph : make llm_graph_context destructor virtual (ggml-org#14410)
Labels
ggml — changes relating to the ggml tensor library for machine learning
Vulkan — Issues specific to the Vulkan backend

2 participants