Feature Request: Optimize for Nvidia Jetson Series' truly Unified Memory Architecture #13856

Open
@Yangxiaoz

Description

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Currently, the code does not distinguish Nvidia Jetson series edge boards, where the CPU and GPU share the same physical memory. Regardless of whether the GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 flag is enabled, weight tensors are still copied via host-to-device transfers. This appears unnecessary for unified memory (UM) or pinned memory scenarios, as both allow the GPU to access the data directly without explicit duplication.
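To make the point concrete, here is a minimal standalone sketch (not llama.cpp code, the kernel and variable names are illustrative): on an integrated-GPU board such as a Jetson, a pointer returned by cudaMallocManaged is valid on both the CPU and the GPU, so an additional cudaMemcpyHostToDevice only duplicates data the device could already read in place.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *w, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) w[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float *weights = nullptr;
    cudaMallocManaged(&weights, n * sizeof(float));   // visible to CPU and GPU
    for (int i = 0; i < n; ++i) weights[i] = 1.0f;    // host writes directly

    // Redundant on unified-memory hardware: the device can use `weights` as-is.
    // float *d_copy; cudaMalloc(&d_copy, n * sizeof(float));
    // cudaMemcpy(d_copy, weights, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(weights, n, 2.0f); // GPU reads the same pointer
    cudaDeviceSynchronize();
    printf("weights[0] = %f\n", weights[0]);           // host reads result directly

    cudaFree(weights);
    return 0;
}
```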

Motivation

[Figure: CUDA backend code path where cudaMemcpyHostToDevice is still issued for cudaMallocManaged allocations]

As shown in the figure above, even after enabling GGML_CUDA_ENABLE_UNIFIED_MEMORY=1, pointers allocated via cudaMallocManaged remain accessible to the CPU, yet explicit copy operations (cudaMemcpyHostToDevice) still occur in this code path. For Jetson's UMA architecture, could we optimize this behavior to eliminate the redundant transfers and achieve true zero-copy access?
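One possible direction, sketched under the assumption that the host pointer is a managed (or pinned) allocation the GPU can address directly: branch on cudaDeviceProp::integrated, which is nonzero on Jetson-class devices where the GPU shares physical DRAM with the CPU. The helper names below (device_shares_host_memory, stage_weights) are hypothetical and not existing ggml/llama.cpp APIs.

```cpp
#include <cuda_runtime.h>

// True on integrated (Jetson-style UMA) devices that share DRAM with the host.
static bool device_shares_host_memory(int device) {
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, device);
    return prop.integrated != 0;
}

// Sketch of how a weight-upload path could skip the staging copy.
// Assumes host_ptr comes from cudaMallocManaged (or pinned memory the GPU can map).
static void *stage_weights(void *host_ptr, size_t nbytes, int device) {
    if (device_shares_host_memory(device)) {
        // Zero-copy: reuse the existing allocation instead of duplicating it.
        return host_ptr;
    }
    void *dev_ptr = nullptr;
    cudaMalloc(&dev_ptr, nbytes);
    cudaMemcpy(dev_ptr, host_ptr, nbytes, cudaMemcpyHostToDevice);
    return dev_ptr;
}
```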

Possible Implementation

No response
