diff --git a/README.md b/README.md
index 58b50ac..39927f4 100644
--- a/README.md
+++ b/README.md
@@ -20,6 +20,16 @@ This assistant can run offline on your local machine, and it respects your priva
 
 ![Screenshot](https://raw.githubusercontent.com/vietanhdev/llama-assistant/refs/heads/main/screenshot.png)
 
+## TODO
+
+- [ ] Support other text models: Llama 3.x.
+- [ ] Support multimodal models: LLaVA, Llama 3.2 + Vision.
+- [ ] Add offline STT support: WhisperCPP.
+- [ ] Add wake word detection: "Hey Llama!".
+- [ ] Knowledge database.
+- [ ] Video interaction support.
+- [ ] Plugin system for extensibility.
+
 ## Features
 
 - 🎙️ Voice recognition for hands-free interaction
@@ -43,6 +53,7 @@ This assistant can run offline on your local machine, and it respects your priva
 
 ```bash
 pip install llama-assistant
+pip install pyaudio
 ```
 
 **Or install from source:**
@@ -118,6 +129,5 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
 
 ## Contact
 
-Viet-Anh Nguyen - [@vietanhdev](https://github.com/vietanhdev)
-
-Project Link: [https://github.com/vietanhdev/llama-assistant](https://github.com/vietanhdev/llama-assistant)
+- Viet-Anh Nguyen - [vietanhdev](https://github.com/vietanhdev), [contact form](https://www.vietanh.dev/contact).
+- Project Link: [https://github.com/vietanhdev/llama-assistant](https://github.com/vietanhdev/llama-assistant)
diff --git a/pyproject.toml b/pyproject.toml
index 53970d4..6e8fbd6 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "llama-assistant"
-version = "0.1.12"
+version = "0.1.14"
 authors = [
     {name = "Viet-Anh Nguyen", email = "vietanh.dev@gmail.com"},
 ]
diff --git a/requirements.txt b/requirements.txt
index a093266..b3345e3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -3,4 +3,5 @@ SpeechRecognition==3.10.4
 markdown==3.7
 pynput==1.7.7
 llama-cpp-python
-huggingface_hub==0.25.1
\ No newline at end of file
+huggingface_hub==0.25.1
+pyaudio==0.2.14
\ No newline at end of file