diff --git a/README.md b/README.md
index 9ba95fa3..a433a93b 100644
--- a/README.md
+++ b/README.md
@@ -143,15 +143,16 @@ source devel/setup.bash
In addition to the UR5, the following devices have to be connected and configured before launching the project:
1. Logitech StreamCam (packages marimbabot_vision and marimbabot_speech)
2. Scarlett 2i2 USB Audio Interface (package marimbabot_audio)
+3. Wi-Fi connection to the MalletHolder (package marimbabot_hardware)
#### Logitech StreamCam (required for packages marimbabot_vision and marimbabot_speech):
-Change the parameter device of the node audio_capture in the launch file
+Change the `device` parameter of the `audio_capture` node in the [launch file](marimbabot_speech/launch/command_recognition.launch) of the package marimbabot_speech:
```bash
marimbabot_speech/launch/command_recognition.launch
```
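For orientation, the relevant entry in the launch file might look like the following sketch; the node attributes and the device string are illustrative assumptions and depend on your system:

```xml
<!-- Illustrative sketch only: attribute values are placeholders -->
<node name="audio_capture" pkg="audio_capture" type="audio_capture">
  <!-- set to the ALSA/PulseAudio identifier of the StreamCam microphone -->
  <param name="device" value="hw:1,0" />
</node>
```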
-and modify the device_id parameter in the configuration file:
+and modify the `device_id` parameter in the [configuration file](marimbabot_vision/config/cv_camera.yaml) of the package marimbabot_vision:
```bash
marimbabot_vision/config/cv_camera.yaml
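A sketch of the corresponding entry in `cv_camera.yaml`; the value is a placeholder and depends on which video device index the StreamCam receives on your machine:

```yaml
# Illustrative sketch: 0 is a placeholder, not the required value
device_id: 0   # e.g. /dev/video0 -> 0, /dev/video2 -> 2
```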
@@ -159,12 +160,16 @@ marimbabot_vision/config/cv_camera.yaml
#### Scarlett 2i2 USB Audio Interface (required for package marimbabot_audio):
-Adjust the device parameter for the note_audio_capture node in the launch file:
+Adjust the `device` parameter of the `note_audio_capture` node in the [launch file](marimbabot_audio/launch/audio_feedback.launch) of the package marimbabot_audio:
```bash
marimbabot_audio/launch/audio_feedback.launch
```
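Analogously, the entry for the audio interface might look like this sketch (the node attributes and the device string are assumptions, not the package's actual values):

```xml
<!-- Illustrative sketch only: values are placeholders -->
<node name="note_audio_capture" pkg="audio_capture" type="audio_capture">
  <!-- set to the ALSA identifier of the Scarlett 2i2 -->
  <param name="device" value="hw:2,0" />
</node>
```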
+#### Wi-Fi connection to the MalletHolder
+The host computer must be connected to the MalletHolder's Wi-Fi network. See the [README](marimbabot_hardware/README.md) of the package marimbabot_hardware and the [README](marimbabot_bringup/README.md) of the package marimbabot_bringup for further information.
+
+
#### Launch the whole project
In order to run the whole project on the real robot, one has to run two launch files. First, the launch file that sets up the robot and its hardware:
diff --git a/marimbabot_msgs/README.md b/marimbabot_msgs/README.md
index 46ebcb9d..05bdfe81 100644
--- a/marimbabot_msgs/README.md
+++ b/marimbabot_msgs/README.md
@@ -50,6 +50,7 @@ Have a look at [README](../marimbabot_speech/README.md#5-command-examples) for m
### [CQTStamped.msg](msg/CQTStamped.msg)
This message contains information of the Constant-Q Transform(CQT). See [README](../marimbabot_audio/README.md#4-pipeline-of-music-note-detection) for more information.
+
### [HitSequence.msg](msg/HitSequence.msg)
This message contains an array of HitSequenceElement messages.
@@ -58,7 +59,9 @@ This message contains the information single element of a HitSequence message. I
### [NoteOnset.msg](msg/NoteOnset.msg)
Used to publish the detected music notes.
+
### [SequenceMatchResult.msg](msg/SequenceMatchResult.msg)
Publishes the final evaluation of the robot's performance.
+
### [Speech.msg](msg/Speech.msg)
Publishes the transcribed text.
\ No newline at end of file
diff --git a/marimbabot_vision/README.md b/marimbabot_vision/README.md
new file mode 100644
index 00000000..bf0d1384
--- /dev/null
+++ b/marimbabot_vision/README.md
@@ -0,0 +1,13 @@
+# TAMS Master Project 2022/2023 - Vision
+
+## Scripts
+For more information on the usage of the scripts, please refer to the [README](scripts/README.md).
+
+## Src
+### [vision_node.py](src/vision_node.py)
+
+This ROS node is responsible for processing images from a camera source and recognizing notes in the images using a pre-trained model. It converts the image data into a textual representation of recognized musical notes and publishes them as ROS messages.
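The core of such a node can be pictured as a function from one camera frame to a note string. The sketch below only illustrates that shape; the class names and the model interface are assumptions, not the package's actual API, and the real node wires this into a ROS image-topic callback:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class RecognizedNotes:
    """Textual note representation, e.g. "c'4 d'4 e'2"."""
    text: str


def recognize(frame: bytes, model: Callable[[bytes], str]) -> RecognizedNotes:
    """Run the (injected) pre-trained model on one camera frame and wrap
    its predicted note string. In the real node the result would be
    published as a ROS message."""
    return RecognizedNotes(text=model(frame))
```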
+
+
+### [visualization_node.py](src/visualization_node.py)
+This ROS node receives recognized notes from the `vision_node` and generates visual representations of the musical notation. It uses the LilyPond library to create musical staff notation and publishes the resulting images as ROS messages for visualization.
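As an illustration of the LilyPond step, a minimal sketch that wraps a recognized note string into a compilable LilyPond source; the version string and layout are assumptions, and the node's actual template may differ:

```python
def notes_to_lilypond(notes: str) -> str:
    """Embed a space-separated note string (e.g. "c'4 d'4 e'2")
    in a minimal LilyPond document."""
    return (
        '\\version "2.22.0"\n'
        "{\n"
        f"  {notes}\n"
        "}\n"
    )


# Rendering the source to an image would then be delegated to the
# lilypond binary, e.g. subprocess.run(["lilypond", "--png", "score.ly"]).
```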
\ No newline at end of file
diff --git a/marimbabot_vision/scripts/README.md b/marimbabot_vision/scripts/README.md
index 341d5123..57f768bf 100644
--- a/marimbabot_vision/scripts/README.md
+++ b/marimbabot_vision/scripts/README.md
@@ -11,7 +11,7 @@ This script generates a separate dataset including one sample folder for each ca
Executes all data generating scripts (generate_data.py, generate_hw_data.py, generate_augmented_data.py) in order to generate a full dataset. The dataset is saved in the `data`, `data_augmented`, `data_hw` and `data_hw_augmented` folders.
### `generate_data.py`
-Generates a dataset of images of the random note sheets withing given a note-specific duration restriction (e.g. use a 1/16th note as a minimum duration). The dataset is saved in the `data` folder.
+Generates a dataset of images of random note sheets within a given note-specific duration restriction (e.g. using a 1/16th note as the minimum duration). The dataset is saved in the `data` folder. In the current configuration, partly due to limited computational resources during training, the dataset is generated with 3 bars of music.
Arguments:
- num_samples: Number of samples to be generated.
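A hypothetical invocation (the flag spelling is an assumption based on the argument list above):

```bash
python3 generate_data.py --num_samples 1000
```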