
update GGML_HIP_UMA #473

Merged
merged 1 commit into from
Jun 20, 2024

Conversation

Contributor

@Djip007 Djip007 commented Jun 14, 2024

Add UMA config for higher speed, as in ggerganov/llama.cpp#7414, but with 2 changes:

  • remove the UMA build option
  • use it in all cases where hipMalloc fails with an out-of-memory error

Another change: look for 'hipcc' on Linux instead of 'amdclang++'.

(1 possible solution for #439 / #468)

Collaborator

@jart jart left a comment


Thank you!

@jart jart merged commit a28250b into Mozilla-Ocho:main Jun 20, 2024
2 checks passed
Contributor Author

Djip007 commented Aug 2, 2024

I think I need to make some updates after e9ee3f9
(the UMA patch was not 100% the same, because in llamafile we can't decide if/when we have to activate "UMA").

=> I'll check that in the next few days.

13/08/2024: This is indeed the case => new patch: #536
