## Installation 🛠️

### 🐳 Docker

If you're able to use Docker, you don't actually need to _install_ anything - there are [images published on Docker Hub](https://hub.docker.com/r/beveradb/audio-separator/tags) for GPU (CUDA) and CPU inferencing, for both `amd64` and `arm64` platforms.

You probably want to volume-mount a folder containing whatever file you want to separate, which can then also be used as the output folder.

For instance, if your current directory has the file `input.wav`, you could execute `audio-separator` as shown below (see the [usage](#usage-) section for more details):

```
docker run -it -v `pwd`:/workdir beveradb/audio-separator input.wav
```

If you're using a machine with a GPU, you'll want to use the GPU-specific image and pass the GPU device through to the container, like this:

```
docker run -it --gpus all -v `pwd`:/workdir beveradb/audio-separator:gpu input.wav
```

If the GPU isn't being detected, make sure your docker runtime environment is passing through the GPU correctly - there are [various guides](https://www.celantur.com/blog/run-cuda-in-docker-on-linux/) online to help with that.
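
As a quick sanity check (the CUDA image tag below is just an arbitrary example), you can confirm the container runtime actually sees your GPU by running `nvidia-smi` inside a CUDA base image:

```
# Should print your GPU details if passthrough is working
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```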
### 🎮 Nvidia GPU with CUDA or 🧪 Google Colab

💬 If successfully configured, you should see this log message when running `audio-separator --env_info`:

You can install the CUDA 11 libraries _alongside_ CUDA 12 like so:
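
One option (exactly which packages you need depends on your torch and onnxruntime versions, so treat this as a sketch) is to pull in the CUDA 11 runtime libraries as pip wheels:

```
# Installs the CUDA 11 cuBLAS and cuDNN runtime libraries alongside a CUDA 12 toolkit
pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
```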
> Note: if anyone knows how to make this cleaner so we can support different platform-specific dependencies for hardware acceleration without a separate installation process for each, please let me know or raise a PR!

## Usage 🚀
### Command Line Interface (CLI)

You can use Audio Separator via the command line, for example:

```
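# Example invocation - the model filename here is illustrative; any model listed by --list_models works
audio-separator audio.wav --model_filename UVR_MDXNET_KARA_2.onnx
```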
This command will download the specified model file, process the `audio.wav` input audio and generate two new files in the current directory, one containing the vocals and one containing the instrumental.

**Note:** You do not need to download any files yourself - audio-separator does that automatically for you!

To see a list of supported models, run `audio-separator --list_models`.

Any file listed in the `--list_models` output can be specified (with its file extension) using the `--model_filename` parameter (e.g. `--model_filename UVR_MDXNET_KARA_2.onnx`), and it will be automatically downloaded to the `--model_file_dir` folder (default: `/tmp/audio-separator-models/`) on first use.
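
For example, to fetch a specific model and keep downloaded models in a folder of your choice (the path below is just a placeholder):

```
audio-separator input.wav --model_filename UVR_MDXNET_KARA_2.onnx --model_file_dir ./models
```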