This repository contains server code for the Luma-1 drum machine's PDF documentation, letting you chat with it in real time.
- Environment Variables
- Doing Embeddings
- Running the Server With Docker
- Running the Server Without Docker
- Sending Requests
## Environment Variables

All functionality of this server requires:
- An OpenAI API key (model access)
- A Pinecone API key (vector store access)
Start by going to Pinecone and creating a new account. Create a new index named `luma-1-index` with a dimension of 1536. Next, create a new API key in Pinecone, then create a `.env` file at the root of this project with the following environment variable in it:

```
PINECONE_API_KEY="<api key here>"
```
Next, go to OpenAI's developer dashboard and create an API key. Add it to your `.env` file as well:

```
OPENAI_API_KEY="<api key here>"
```
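With both keys in place, a quick sanity check can confirm they're set before you go any further. This is a minimal sketch, not part of the repository's code; it assumes only that the two variables above have been loaded into the process environment (e.g. from your `.env` file):

```python
import os

# The variables the server needs, per the .env setup above.
REQUIRED_VARS = ("PINECONE_API_KEY", "OPENAI_API_KEY")

def missing_vars(env) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Check the real environment; prints [] when everything is set.
print(missing_vars(os.environ))
```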
## Doing Embeddings

Before you can chat with the document, you'll need to embed it in a vector store. Assuming you've completed the Environment Variables section above, run the following command to embed the PDF document data:

```
make embed
```
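The repository's embedding script isn't shown here, but a typical pipeline behind a step like `make embed` splits the PDF text into overlapping chunks, embeds each chunk with a 1536-dimension OpenAI embedding model (matching the index dimension above), and upserts the vectors into the `luma-1-index` Pinecone index. A hypothetical sketch of the chunking step — the function name and parameters are illustrative, not the project's actual code:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list:
    """Split text into overlapping chunks so context isn't lost at boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Each chunk would then be embedded and upserted into Pinecone,
# usually with the source text attached as metadata for retrieval.
chunks = chunk_text("x" * 2500, chunk_size=1000, overlap=200)
print(len(chunks))  # -> 4
```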
## Running the Server With Docker

NOTE: This section requires that you have Docker installed locally. Also, make sure you've completed the Environment Variables section above before continuing on.
First, set an environment variable in your `.env` file for your server host:

```
SERVER_HOST=0.0.0.0
```
Run the following command to build a new Docker image locally:

```
make build
```
Once that's done, run the Docker container with this command:

```
make run-docker
```
You'll now be able to hit `http://localhost:3200/v1/chat` to ask questions against the docs.
## Running the Server Without Docker

Make sure you've completed the Environment Variables section above before continuing on.
First, set an environment variable in your `.env` file for your server host:

```
SERVER_HOST=127.0.0.1
```
Then, create a new virtual environment:

```
python3 -m venv venv
```
Next, activate it:

```
source venv/bin/activate
```
Finally, go ahead and install the required dependencies with this make command:

```
make install
```
Once you've done this, you can run the server locally with this command:

```
make run
```
You'll now be able to hit `http://localhost:3200/v1/chat` to ask questions against the docs.
## Sending Requests

Once you have the server running, you can send requests to `http://localhost:3200/v1/chat` like this:
Request:

```json
{
  "prompt": "How can I add new sounds?"
}
```
Response:

```json
{
  "result": "To add new sounds to the Luma-1, you can use the Load Voice Bank command to load downloaded sounds onto the device via its internal SD card or the Luma-1 Web Application. You can put your sample files into specific folders for different drum sounds, ensuring they are in u-law format and no larger than 32KB. You can also build a custom Voice Bank from EPROM and SysEx sounds and save it to the SD card. To load your new Drum Bank, navigate through the menu and select the folder containing your loaded sounds. The Luma-1 will process the new samples and return to normal drum machine operation."
}
```
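For scripting, the same request can be sent from Python using only the standard library. This is an illustrative client, not part of the repository; it assumes the server from the sections above is running on `localhost:3200`:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:3200/v1/chat"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request carrying the prompt as a JSON body."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("How can I add new sounds?")
# With the server running, send it like so:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["result"])
print(req.get_method(), req.get_full_url())
```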