
Conversation

@Chung1045

To conserve the AI company's resources and help reduce global electricity consumption and AI slop (which could hopefully lead to lower RAM prices), this adds a local approach that uses Ollama to run models on the GPU of the user's own computer.

Documentation can be found in the Ollama Python library repository on GitHub.
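As a rough sketch of the local approach described above (assuming a locally pulled model named `llama3.2` and Ollama's default `/api/chat` request shape; neither is specified in this PR), the helper below only builds the JSON payload a chat call would send, so it runs without a server:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    # Shape of the payload Ollama's /api/chat endpoint (and the
    # ollama-python chat() helper) expects; `model` must already be
    # pulled locally, e.g. via `ollama pull llama3.2`.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete response, not a stream
    }

payload = build_chat_request("llama3.2", "Why is the sky blue?")
print(json.dumps(payload, indent=2))

# With the ollama-python package installed and a local server running,
# the equivalent call would be roughly (model name is an assumption):
#   import ollama
#   response = ollama.chat(model="llama3.2", messages=payload["messages"])
#   print(response["message"]["content"])
```

Keeping the payload construction separate from the network call makes the request shape easy to test without a running Ollama instance.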

Chung1045 and others added 2 commits January 7, 2026 13:08
Co-authored-by: Kelvin <cychandt@connect.ust.hk>

