feat: unify and propagate CMAKE_ARGS to GGML-based backends #4367
Closed
Conversation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
This was referenced Jan 14, 2025
nareshrajkumar866 approved these changes May 27, 2025
This PR is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 10 days.
This PR was closed because it has been stalled for 10 days with no activity.
Description
This pull request centralizes CMAKE_ARGS composition, since those flags are shared between the GGML-based backends. The llama.cpp cmake args can be reused, for instance, with bark.cpp and stable-diffusion.cpp (ggml-based). The goal is to enable CUDA and hipBLAS support in bark.cpp and stablediffusion.cpp (ggml variant).
For now this does not attempt to share the logic in a smart, common way (e.g. via cmake, or a makefile invoked by both backends to generate the cmake args). The intent of this PR is to surface any changes that might be required by enabling these flags for the respective backends. I'm not sure that the current linking processes of bark and stablediffusion are correct in terms of GPU support.
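To illustrate the kind of centralization the PR describes, here is a minimal shell sketch of composing CMAKE_ARGS once from a build-type variable and reusing the result for each ggml-based backend. The function name `compose_cmake_args` and the exact flag names are hypothetical placeholders, not the actual Makefile logic from this PR:

```shell
#!/bin/sh
# Hypothetical sketch: build a single CMAKE_ARGS string from BUILD_TYPE,
# so every ggml-based backend (llama.cpp, bark.cpp, stable-diffusion.cpp)
# gets the same acceleration flags.
compose_cmake_args() {
  # Start from any user-provided CMAKE_ARGS, then append per-build-type flags.
  args="${CMAKE_ARGS:-}"
  case "${1:-}" in
    cublas)  args="$args -DGGML_CUDA=ON" ;;      # NVIDIA CUDA build
    hipblas) args="$args -DGGML_HIPBLAS=ON" ;;   # AMD ROCm/hipBLAS build
    *)       ;;                                  # CPU-only: leave args as-is
  esac
  echo "$args"
}
```

A backend's build step could then invoke something like `cmake $(compose_cmake_args "$BUILD_TYPE") ..` so the flags stay consistent across backends instead of being duplicated per Makefile target.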
Notes for Reviewers
Signed commits