graph : make llm_graph_context destructor virtual #14410


Merged 1 commit into master on Jun 27, 2025

Conversation

ggerganov (Member)

ggerganov requested a review from compilade on June 27, 2025 at 06:42
compilade (Collaborator) left a comment:


I can confirm, this does seem to fix the problem.

ggerganov merged commit 72babea into master on Jun 27, 2025
49 of 56 checks passed
ggerganov deleted the gg/graph-virt-destr branch on June 27, 2025 at 18:42
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jun 30, 2025
* origin/master:
metal : disable fast-math for some cpy kernels (ggml-org#14460)
ggml-cpu: sycl: Re-enable exp f16 (ggml-org#14462)
test-backend-ops : disable llama test (ggml-org#14461)
cmake : Remove redundant include path in CMakeLists.txt (ggml-org#14452)
scripts : make the shell scripts cross-platform (ggml-org#14341)
server : support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (ggml-org#13196)
server : fix appearance of the chats list context menu for Safari (ggml-org#14322)
SYCL: disable faulty fp16 exp kernel (ggml-org#14395)
ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (ggml-org#14443)
ggml : implement REGLU/GEGLU/SWIGLU ops (ggml-org#14158)
vulkan: Add fusion support for RMS_NORM+MUL (ggml-org#14366)
CUDA: add bf16 and f32 support to cublas_mul_mat_batched (ggml-org#14361)
vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (ggml-org#14378)
vulkan: lock accesses of pinned_memory vector (ggml-org#14333)
model : add support for ERNIE 4.5 0.3B model (ggml-org#14408)
fix async_mode bug (ggml-org#14432)
ci : fix windows build and release (ggml-org#14431)
vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (ggml-org#14427)
graph : make llm_graph_context destructor virtual (ggml-org#14410)