Make -DLLAMA_HIP_UMA a dynamic setting. #7145

@sebastian-philipp

Feature Description

Please provide a detailed written description of what you were trying to do, and what you expected llama.cpp to do as an enhancement.

Ollama uses a compiled version of llama.cpp. Requiring end-users to recompile Ollama and llama.cpp just to enable integrated GPUs is problematic. I would like LLAMA_HIP_UMA to be a dynamic setting that can be enabled regardless of the compile-time flags.

I think right now there are three ways to get iGPUs working in Ollama:

  1. recompile llama.cpp with LLAMA_HIP_UMA enabled.
  2. use something like https://github.com/segurac/force-host-alloction-APU to force the use of hipHostMalloc, circumventing the already built-in feature.
  3. increase the dedicated video memory in the BIOS.

See also ollama/ollama#2637

Possible Implementation

std::getenv("LLAMA_HIP_UMA")

?
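As a rough sketch of what this could look like: a small helper (the name `hip_uma_enabled` is hypothetical, not an existing llama.cpp function) reads the environment variable once at startup, and the ROCm allocation path branches on it at runtime instead of on the compile-time `-DLLAMA_HIP_UMA` flag. The exact variable name and allocation call would be up to the maintainers.

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical helper: decide at runtime whether UMA-style host
// allocation should be used, mirroring the compile-time
// LLAMA_HIP_UMA flag. Any non-empty value other than "0" enables it.
static bool hip_uma_enabled() {
    const char * env = std::getenv("LLAMA_HIP_UMA");
    return env != nullptr && env[0] != '\0' && std::strcmp(env, "0") != 0;
}

// In the ROCm buffer-allocation path, the existing #ifdef could then
// become a runtime branch, roughly (not compiled here, requires HIP):
//
//   if (hip_uma_enabled()) {
//       err = hipHostMalloc(&ptr, size, hipHostMallocNonCoherent);
//   } else {
//       err = hipMalloc(&ptr, size);
//   }
```

This keeps the default behavior identical for users who never set the variable, while letting Ollama (or any embedder) flip it per-process without a rebuild.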
