This project is a VST plugin created using C++ and Projucer.
Follow the instructions below to set up the environment and install the necessary dependencies to get everything working.
PLEASE note that, as of now, I'd say the samples are usable roughly 1/8 of the time; the current models absolutely want to generate a "whole piece of music", which is crap, but whatever.
## Table of Contents

- External Libraries Used
- Installation
- Installing the VST
- Editing the System Prompt
- Usage
- Editing the VST
- TODO List
## External Libraries Used

This project uses the following external libraries to function properly:

- Audiocraft: A library developed by Facebook Research for high-quality audio synthesis and music generation.
- PyTorch: The deep learning framework used to run the models, with CUDA support.
- FFmpeg: A multimedia framework used for handling audio and video processing. Probably one of the best tools ever made.
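For reference, this is roughly how Audiocraft's MusicGen is driven from Python (a generic usage sketch based on Audiocraft's documented API, not a copy of the plugin's `Generate.py`; the checkpoint and prompt are just examples):

```python
# Generic MusicGen usage sketch; the plugin's Generate.py presumably does something similar.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # smallest checkpoint, for illustration
model.set_generation_params(duration=4)                     # seconds of audio to generate

wavs = model.generate(["dry 808-style bass one-shot, no drums"])  # one description -> one waveform
audio_write("sample", wavs[0].cpu(), model.sample_rate, strategy="loudness")
```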
## Installation

To ensure everything works as expected, follow the setup guide for your operating system.
### Windows

1. Install Anaconda
   - Download and install Anaconda if you haven't already.
2. Add the Conda Path to Your Environment Variables
   - Add the following path to your system's environment variables:
     `C:\Users\<your_username>\anaconda3\Scripts`
   - Replace `<your_username>` with your actual username, or replace the whole path with the location of your installation.
3. Create and Activate the Conda Environment
   - Open a command prompt and run the following commands:
     - `conda create -n MusicGen python=3.9`
     - `conda activate MusicGen`
4. Install Dependencies
   - Install `ffmpeg` via conda: `conda install ffmpeg`
   - Install the `audiocraft` library from GitHub: `pip install git+https://github.com/facebookresearch/audiocraft.git`
5. Install PyTorch
   - Uninstall any previous versions of `torch` to avoid conflicts:
     - `pip uninstall torch`
     - `pip cache purge`
   - Install PyTorch 2.1.0 with CUDA support: `conda install pytorch==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia`
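To confirm the CUDA build was picked up, you can run a quick check inside the MusicGen environment (a minimal sanity check, not part of the plugin):

```python
# Run inside the activated MusicGen environment.
import torch

print(torch.__version__)          # should report 2.1.0
print(torch.cuda.is_available())  # should print True if the CUDA build is installed correctly
```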
### macOS

The process for Mac is similar to Windows, but the paths may vary (not verified!!!):
1. Install Anaconda
2. Create and Activate the Conda Environment
   - Open a terminal and run the following commands:
     - `conda create -n MusicGen python=3.9`
     - `conda activate MusicGen`
3. Install Dependencies
   - Install `ffmpeg` via conda: `conda install ffmpeg`
   - Install the `audiocraft` library from GitHub: `pip install git+https://github.com/facebookresearch/audiocraft.git`
4. Install PyTorch
   - Uninstall any previous versions of `torch` to avoid conflicts:
     - `pip uninstall torch`
     - `pip cache purge`
   - Install PyTorch 2.1.0: `conda install pytorch==2.1.0 -c pytorch`
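On either OS, you can sanity-check the installed dependencies before moving on (a minimal check, not part of the plugin):

```python
# Run inside the activated MusicGen environment.
import shutil
import audiocraft  # this import fails if the pip install above did not work

print("audiocraft import OK")
print("ffmpeg on PATH:", shutil.which("ffmpeg") is not None)
```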
## Installing the VST

Once you have followed the setup instructions, you can take the `.vst3` file from the `/VST` directory and install it in your DAW (Digital Audio Workstation) as you would with any VST plugin.

- For Windows Users: Copy the `.vst3` file to your VST plugins folder, which is usually located at `C:\Program Files\Common Files\VST3`.
- For Mac Users: Copy the `.vst3` file to your VST plugins folder, usually at `/Library/Audio/Plug-Ins/VST3`.
## Editing the System Prompt

To edit the system prompt, modify line 40 of the `Generate.py` script. This script is placed in your `Documents` folder AFTER the first generation; if you want to change it BEFORE building the VST, you can edit it directly in the project directory.
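For illustration only: the idea is that this line holds the text that gets prepended to whatever you type in the plugin before it is sent to the model. The variable and function names below are hypothetical; the actual contents of `Generate.py` differ.

```python
# Hypothetical sketch of a "system prompt" prefix; not the actual code in Generate.py.
SYSTEM_PROMPT = "short, dry, loopable one-shot sample, no drums, no full arrangement"

def build_description(user_prompt: str) -> str:
    # The description sent to MusicGen is the system prompt plus the user's text.
    return f"{SYSTEM_PROMPT}, {user_prompt}"
```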
## Usage

After installing the VST plugin, open your DAW and add the plugin as an instrument (generator). It should now be ready to use!
Feel free to explore and tweak the settings to get the desired sound.
## Editing the VST

To edit the VST plugin, open the `AiAudioPlugin.jucer` file with Projucer. Everything should be preconfigured from there, allowing you to make adjustments to the plugin as needed.
## TODO List

Here are some tasks that are left to complete:

- Change the UI (I thrive in the backend, who cares about the UI... uhh ;) )
- Display the root note in another color (~UI thing, yuck!)
- React to the stop message from the DAW (important)
- Add a sample display
- Wait for an actually good open-source AI sample model, rather than these "music"-generating models that want to generate drums every time.
- Sample modification: be able to modify the sample directly in the plugin (but eh, you can do that in any DAW, right?); see the sketch after this list.
  - Add "start sample"
  - Add "end sample"
  - Implement pitch control (?pitch?)
  - Implement time control (?time?)
  - Add fade in
  - Add fade out
  - Implement envelope functionality (?)
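None of these sample-modification features exist yet; here is a rough sketch of what start/end trimming plus fade in/out could look like on the Python side, assuming the generated sample is available as a mono float32 NumPy array (function and parameter names are made up):

```python
# Hypothetical sketch of start/end trimming and linear fades; not implemented in the plugin.
from typing import Optional

import numpy as np


def trim_and_fade(sample: np.ndarray, sample_rate: int,
                  start_s: float = 0.0, end_s: Optional[float] = None,
                  fade_in_s: float = 0.01, fade_out_s: float = 0.05) -> np.ndarray:
    """Cut a mono float32 sample to [start_s, end_s] and apply linear fades."""
    if end_s is None:
        end_s = len(sample) / sample_rate
    out = sample[int(start_s * sample_rate):int(end_s * sample_rate)].copy()

    fade_in = min(int(fade_in_s * sample_rate), len(out))
    fade_out = min(int(fade_out_s * sample_rate), len(out))
    out[:fade_in] *= np.linspace(0.0, 1.0, fade_in)               # fade in
    out[len(out) - fade_out:] *= np.linspace(1.0, 0.0, fade_out)  # fade out
    return out
```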
If you encounter any issues or have suggestions for improvements, please open an issue on the GitHub repository. I certainly will not read it, just like you may not read this note :D !
Don't forget that I DID NOT make these models; I've just set up a way to talk to them directly from within your DAW.
Enjoy making "music" with AI!
Oh, and if you use parts of my code, well, noice! But don't lie to yourself saying you made it; that will mostly hurt you rather than anyone else.
Love.
DBAT