LangChain-based Llama model with microphone input and voice output.
Ollama is a framework that lets you get up and running with large language models such as Llama 2 locally on your machine. It’s designed to be lightweight, extensible, and user-friendly.
- Homepage: Ollama GitHub Repository
- Model Library: Ollama Model Library
- Installation Guide: Ollama Installation
To install Ollama on Linux & WSL2, you can use the following command:
curl https://ollama.ai/install.sh | sh
For detailed manual installation instructions, please refer to the installation guide.
Here’s how you can interact with Ollama models:
To download or update a model:
ollama pull <model-name>
To run a model:
ollama run <model-name>
To list all available models:
ollama list
To remove a model from your system:
ollama rm <model-name>
For more detailed usage and commands, please visit the Ollama GitHub Repository.
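The commands above can also be driven from a Python script via the standard library’s subprocess module. A minimal sketch — the `ollama_cmd` helper is our own illustration, not part of Ollama:

```python
import subprocess

def ollama_cmd(action: str, model: str = "") -> list[str]:
    """Build an ollama CLI invocation as an argument list."""
    cmd = ["ollama", action]
    if model:
        cmd.append(model)
    return cmd

# e.g. pull a model from Python (requires the ollama binary on PATH):
# subprocess.run(ollama_cmd("pull", "llama2:7b"), check=True)
```

Passing an argument list (rather than a shell string) avoids quoting issues with model names.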
To use LangChain with Ollama, you need to install the LangChain package and set up the desired language model. Here’s a quick guide:
First, install the LangChain package:
pip install langchain
Define your model with the Ollama binding:
from langchain.llms import Ollama
# Set your model, for example, Llama 2 7B
llm = Ollama(model="llama2:7b")
For more detailed information on setting up and using Ollama with LangChain, please refer to the Ollama documentation and the LangChain GitHub repository.
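Once the model is defined, a prompt can be sent to it directly. The snippet below is a hedged sketch: it assumes a local Ollama server is running and that `llama2:7b` has been pulled; the `build_prompt` helper is our own illustration, not part of LangChain.

```python
def build_prompt(question: str) -> str:
    """Prefix the user question with a short instruction."""
    return f"Answer briefly and in plain language.\n\nQuestion: {question}"

# With a local Ollama server running (not executed here):
# from langchain.llms import Ollama
# llm = Ollama(model="llama2:7b")
# print(llm(build_prompt("Why is the sky blue?")))
```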
A virtual environment is an isolated Python environment: the Python interpreter, libraries, and scripts installed into it are kept separate from those installed in other virtual environments and in the system Python.
Prerequisites: pyenv (with the pyenv-virtualenv plugin) must be installed; installation manuals can be found here:
Install Python 3.11.4
With pyenv installed, you can now install Python 3.11.4:
pyenv install 3.11.4
Set up the virtual environment
Next, you’ll want to create a virtual environment using the installed Python version:
pyenv virtualenv 3.11.4 otalk
Activate the virtual environment
To activate the ‘otalk’ virtual environment:
pyenv activate otalk
You should now be using Python 3.11.4 within your ‘otalk’ virtual environment. To verify, you can check the Python version:
python --version
The output should show Python 3.11.4. When you’re done working in the virtual environment, you can deactivate it:
pyenv deactivate
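The version check can also be done from inside the interpreter; a small sketch (the `version_string` helper is our own, not part of pyenv):

```python
import sys

def version_string() -> str:
    """Return the running interpreter's version as 'major.minor.micro'."""
    return ".".join(str(part) for part in sys.version_info[:3])

print(version_string())  # inside the 'otalk' environment this should be 3.11.4
```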
For more detailed instructions and troubleshooting, please refer to the pyenv documentation and the virtualenv documentation.
git clone
pip install -r requirements.txt
Samantha AI
Samantha AI is an artificial intelligence system that has been depicted in various forms of media and technology. It is often personified through a female voice and is designed to interact with users in a natural and intuitive way.
Usage
The concept of Samantha AI is used to inspire the development of advanced AI systems that can assist with a variety of tasks, from providing information to offering emotional support. It represents the aspiration to create AI that can understand and respond to human emotions and behaviors in a meaningful way.
For more information on the development and capabilities of AI systems like Samantha, you can refer to the BBC article on virtual assistants and the Wikipedia page for the film “Her”. Enjoy exploring the fascinating world of artificial intelligence! 😊
You can find more information about Samantha AI at the following link: Meet SAMANTHA AI. This website provides insights into the concept and applications of Samantha AI.
We are using the voice detection from VOSK. The tested model can be found below; feel free to try out different ones from VOSK Models.
| Model | Link |
|---|---|
| vosk-model-de-0.21 | [vosk-model-de-0.21](https://alphacephei.com/vosk/models/vosk-model-de-0.21.zip) |
Unpack it and move it to the `\languagemodels` directory.
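VOSK recognizers return results as JSON strings; the helper below extracts the recognized text, and the commented lines sketch how the German model would be loaded (hedged: it assumes the vosk package is installed and the model is unpacked at the path shown).

```python
import json

def result_text(raw: str) -> str:
    """Extract the 'text' field from a VOSK JSON result string."""
    return json.loads(raw).get("text", "")

# With vosk installed and the model unpacked (sketch, not executed here):
# from vosk import Model, KaldiRecognizer
# model = Model("languagemodels/vosk-model-de-0.21")
# rec = KaldiRecognizer(model, 16000)   # expects 16 kHz mono PCM input
# if rec.AcceptWaveform(chunk):
#     print(result_text(rec.Result()))
```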
The gTTS (Google Text-to-Speech) is a Python library and CLI tool that interfaces with Google Translate’s text-to-speech API. It allows you to convert text into spoken audio, which can be saved as an MP3 file.
Features
- Customizable speech-specific sentence tokenizer.
- Customizable text pre-processors for pronunciation corrections.
- Supports multiple languages.
Installation
To install gTTS, run the following command:
pip install gTTS
(This is already done in requirements.txt.) Here's a simple example of how to use gTTS:
from gtts import gTTS
tts = gTTS('hello')
tts.save('hello.mp3')
For more information and examples, visit the gTTS documentation.
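Since the VOSK model above is German, the gTTS voice should match. gTTS accepts a `lang` parameter; the `tts_lang` helper below is our own sketch, and it assumes the VOSK naming convention `vosk-model-<lang>-<version>`.

```python
def tts_lang(vosk_model: str) -> str:
    """Derive a gTTS language code from a VOSK model name
    (assumed convention: vosk-model-<lang>-<version>)."""
    return vosk_model.split("-")[2]

# Hedged usage (gTTS needs network access to generate audio):
# from gtts import gTTS
# gTTS("Hallo, wie kann ich helfen?", lang=tts_lang("vosk-model-de-0.21")).save("antwort.mp3")
```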
License
The MIT License (MIT)
Don't forget to adapt your models and paths at the marked lines in talkollama.py.
python controlcenter.py
Written with StackEdit.
