AudYoFlo: Tutorial I: Step I: Getting Started
At the beginning, we review the files that are provided to get started with the project right away. Typically, the first version of a sub-project contains:
- An implementation of the audio node and
- a simple test application to run the algorithm in real-time.
A sub-project is typically located in the folder `sub-projects`, which is located in the sources folder of the AudYoFlo project:
For this tutorial, we use the sub-project `ayfstarter`. In this case, the `ayfstarter` project is part of the AudYoFlo repository. However, other sub-projects may be checked out to become part of the `sub-projects` folder. The repository comprises the following sub folders and files:
In the folder, we find a `CMakeLists.txt` file as the entry for a sub-project build, as well as the files `.jvxprj.audio` and `.pass.default`. These three files are used by the build system to provide meta-information regarding the build:
- `CMakeLists.txt`: Main entry CMake build file.
- `.jvxprj.audio`: Activates the build of the sub-project. If the file is not present, the sub-project will not be involved.
- `.pass.default`: Hint for the build system to define the build order of the sub-projects. If a sub-project depends on another sub-project, the dependency during the build can be organized by associating the build run (pass) for this sub-project. In the example, the `default` pass is the first build run, which is the best choice for a project without any dependencies.
In addition to the files, there are two folders, of which the `source` folder is currently of greater interest.
In this folder, there are two sub folders:
The sub folder `Applications` contains the source code files for the build of the Qt based applications, whereas the `Components` folder contains the audio node.
When running the CMake based build step, the mentioned projects show up in Visual Studio in sub folders of the overall project:
At the very beginning, there are the two apps `ayfStarterQt` and `ayfStarterWeb` without any specific functionality. We can run these apps and we can also input/output audio. However, in the current version, only a talkthrough is possible.
When starting the `ayfStarterQt` application, a graphical user interface is opened. However, at this moment, it contains only an empty UI:
By choosing the option `Configuration` -> `Open Audio Configuration`, the dialog to open an audio device is started:
Here, we can choose a technology and a device to use for real-time processing. We may choose a wav file source to run in a non-duplex manner for the first tests:
When starting the `ayfStarterWeb` application, a console window will be shown:
Proper functionality should be reviewed by typing
`show(system)`
In both cases, the applications can be used to realize a simple talkthrough. That is, the functionality is to simply output the signal that was received on the input:
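To make the term concrete, a talkthrough simply copies each incoming block of samples to the output unchanged. The following stand-alone C++ sketch illustrates the idea generically; it does not use the AudYoFlo API, and the buffer size and sample type are chosen arbitrarily for illustration:

```cpp
#include <cstddef>
#include <vector>

// Generic talkthrough: copy one frame of input samples to the output unchanged.
// Conceptual illustration only -- not AudYoFlo code.
static void talkthrough(const float* input, float* output, std::size_t numSamples)
{
    for (std::size_t i = 0; i < numSamples; ++i)
    {
        output[i] = input[i];
    }
}

int main()
{
    std::vector<float> in(64, 0.5f);   // one dummy input frame
    std::vector<float> out(64, 0.0f);  // output frame
    talkthrough(in.data(), out.data(), in.size());
    return 0;
}
```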
The audio node is realized in the subfolder `ayfstarter/sources/Components/AudioNodes/ayfAuNStarter`.
The `ayfAuNStarter` audio node realizes the class `CayfAuNStarter`, which is then available in the AudYoFlo system to be selected as a processing component. The class is kept rather simple and can be found in the file `src/CayfAuNStarter.h`:
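The header is not reproduced here; purely as an illustration, a minimal sketch of such a declaration could look as follows, assuming the in-place base class `CjvxBareNode1io` (see below) and assuming that only a constructor and a destructor are declared. The include path and any constructor arguments in the real file may differ:

```cpp
// Minimal sketch only -- see src/CayfAuNStarter.h for the actual declaration.
#include "CjvxBareNode1io.h"   // include path assumed for illustration

class CayfAuNStarter : public CjvxBareNode1io   // base class as named in the tutorial text
{
public:
    CayfAuNStarter();    // only a constructor ...
    ~CayfAuNStarter();   // ... and a destructor are required at this stage
};
```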
It is derived either from the class `CjvxBareNode1ioRearrange` (out-of-place processing) or the class `CjvxBareNode1io` (in-place processing with zero-copy), depending on the desired use-case. In the file `target/componentEntry.cpp`, the library entry is defined based on a few simple defines:
Also, the implementation of our class `CayfAuNStarter` is rather simple, as only a constructor and a destructor are required. The source code can be found in the file `src/CayfAuNStarter.cpp`:
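As with the header, the implementation is only sketched here under the assumption that the constructor and destructor are essentially empty; the real file may forward arguments to the base class:

```cpp
// Minimal sketch only -- see src/CayfAuNStarter.cpp for the actual implementation.
#include "CayfAuNStarter.h"

CayfAuNStarter::CayfAuNStarter()
{
    // Nothing to initialize yet: the node currently passes audio through unchanged.
}

CayfAuNStarter::~CayfAuNStarter()
{
    // Nothing to release yet.
}
```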
Our class `CayfAuNStarter` is a component in the context of a full audio host implementation in AudYoFlo. It is driven by the host such that it receives input audio data on a frame-by-frame basis and forwards the data towards the output. While passing through the processing core, our component `ayfAuNStarter` gets access to the passed data. However, in the current state of the implementation, the input data is not changed.
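To illustrate what frame-by-frame access means, the following generic sketch shows a processing hook that is called once per frame and currently leaves the samples untouched. This is a conceptual illustration only; it does not reflect the actual AudYoFlo processing interface, and the buffer layout is assumed for the example:

```cpp
#include <cstddef>

// Conceptual per-frame processing hook (not the AudYoFlo API).
// The host would call this once per audio frame; the buffer is read and written in place.
void process_frame(float* samples, std::size_t numChannels, std::size_t frameSize)
{
    for (std::size_t c = 0; c < numChannels; ++c)
    {
        for (std::size_t n = 0; n < frameSize; ++n)
        {
            // In the current state of the tutorial the data is not modified;
            // a later modification could manipulate samples[c * frameSize + n] here.
            (void)samples[c * frameSize + n];
        }
    }
}
```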
Once we start the host application, we first need to activate an audio device to prepare processing. This process is often tedious and can be avoided by creating a configuration file. In order to do so, we start the application `ayfstarterQt` from within Visual Studio: we activate the project first and run the application by pressing `F5`. Then, we open the desired audio technology and the desired device via the option `Audio Configuration`.
In this dialog, we choose the option `Restart Selection Procedure` and choose the desired audio technology in the next step.
Then, we select the desired audio device.
In the last step, we select an audio file as the source of audio data.
Once processing is ready, we can save the current configuration by using the `Save` option in the file dialog.
The created configuration file, denoted as `ayfstarterQt.jvx`, can be found in the runtime folder.
The next time we start the application, the current configuration will be loaded automatically. Note that the web application `ayfstarterWeb` involves its own configuration file `ayfstarterWeb.jvx` if it is started from within Visual Studio. We may copy `ayfstarterQt.jvx` to `ayfstarterWeb.jvx` to configure the web application at startup.