Commit

Lots of tutorial updates.

InfantLab committed May 5, 2022
1 parent 12c1ed2 commit 3776944

Showing 13 changed files with 645 additions and 316 deletions.
1 change: 0 additions & 1 deletion Drum.Tutorial.settings.json

This file was deleted.

31 changes: 31 additions & 0 deletions DrumTutorial/Drum.Tutorial.settings.json
@@ -0,0 +1,31 @@
{
"batchName": "VASC Drumming Data tutorial",
"lastUpdate": "2022-03-14T09:23:08.849601",
"flags": {
"anon": false,
"includeHands": true,
"cleaned": true
},
"paths": {
"openpose": "C:\\Users\\cas\\openpose-1.5.0-binaries-win64-gpu-python-flir-3d_recommended\\",
"project": ".\\DrumTutorial",
"videos_in": ".\\DrumTutorial\\videos",
"videos_out": ".\\DrumTutorial\\",
"videos_out_openpose": ".\\DrumTutorial\\openpose",
"videos_out_timeseries": ".\\DrumTutorial\\timeseries",
"videos_out_analyses": ".\\DrumTutorial\\analyses"
},
"filenames": {
"videos_json": "videos.json",
"clean_json": "clean.json",
"alldatanpz": "allframedata.npz",
"lefthandnpz": "lefthandframedata.npz",
"righthandnpz": "righthandframedata.npz",
"cleannpz": "cleandata.npz",
"cleanrightnpz": "cleanrightdata.npz",
"cleanleftnpz": "cleanleftdata.npz",
"cleandataparquet": "cleandata.parquet",
"righthandparquet": "righthand.parquet",
"lefthandparquet": "lefthand.parquet"
}
}
Binary file modified DrumTutorial/Little Drummers. Supplementary Material.docx
Binary file not shown.
94 changes: 90 additions & 4 deletions DrumTutorial/ReadMe.md
@@ -12,26 +12,112 @@ This folder contains
* `videos` - a folder of 6 videos per participant (3 child, 2 adult, used with permission).
* `timeseries` - a folder where we store data arrays containing the generated movement data.


## Step 0 - Installation

In order to run the tutorial, download or clone a local copy of the VASC project including this tutorial. And follow the install instructions on the [main page](https://github.com/InfantLab/VASC).
In order to run the tutorial, download or clone a copy of the VASC project from GitHub to your local machine. You also need a copy of Python, a copy of OpenPose and some supporting software. Follow the install instructions on the [main page](https://github.com/InfantLab/VASC#installation) to install these prerequisites.


## Step 1 - Processing the videos

Open your local copy of the file [Step1.ProcessVideo.ipynb](https://github.com/InfantLab/VASC/blob/master/Step1.ProcessVideo.ipynb) from an instance of Jupyter or JupyterLab running on your local system.

This should then guide you through the process of getting OpenPose to convert each video into a set of frame by frame pose estimates.
That document should then guide you through the process of getting OpenPose to convert each video into a set of frame by frame pose estimates. Here we will show the outputs you should expect and some of the problems to watch out for.

### Step 1.1-3 Settings

To keep track of information that is used across multiple steps, we load and save data to a number of text files, mostly in JSON format. The first of these is `Drum.Tutorial.settings.json`. It lists the files we will save and where they are located. It also includes several settings flags that control how the videos are processed (see below).

For your own projects you can create a copy of this file and modify it to point to your own files.

#### Where is OpenPose

One important line in the settings file tells us where the OpenPose application is located. **You will need to edit this to match the location of OpenPose on your computer. Note that the path needs double backslashes or it will give an error.**

"openpose": "C:\\Users\\cas\\openpose-1.5.0-binaries-win64-gpu-python-flir-3d_recommended\\"

#### Three useful flags

```
"flags": {
"anon": false,
"includeHands": true,
"cleaned": true
},
```
The flags section of the settings file tells the code several useful things; a short sketch of reading these settings in Python follows the list below.
* `anon` - whether or not to display images from the videos in Step 2. (Normally this is set to false; setting it to true lets you process the files anonymously.)
* `includeHands` - tells OpenPose and VASC whether to keep track of the location of the hands. This improves accuracy but increases memory requirements. Set to true for this tutorial.
* `cleaned` - tells us whether Step 2 data cleaning has been completed.
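
As an illustration, the snippet below is a minimal sketch of how a settings file like this can be read with Python's standard `json` module. It is not part of the tutorial notebooks, and the printed messages are just placeholders.

```python
import json
import os

# A rough sketch (not part of the tutorial notebooks) of reading the settings file.
with open("DrumTutorial/Drum.Tutorial.settings.json") as f:
    settings = json.load(f)

# The openpose path must point at your own installation (note the double backslashes).
openpose_dir = settings["paths"]["openpose"]
if not os.path.isdir(openpose_dir):
    print(f"OpenPose folder not found at {openpose_dir} - edit the settings file.")

# The flags control optional behaviour in Steps 1 and 2.
if settings["flags"]["includeHands"]:
    print("Hand keypoints will be requested from OpenPose.")
```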

### Step 1.4 Loading the videos

In this step we create another JSON file called `videos.json` which keeps track of information about the videos we are processing. The code looks to see if this file exists and, if not, creates a new version. Then it searches the `videos` folder and adds all the videos it finds, so they can be processed.
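
For illustration, a search like this could be written with the standard `glob` module. The exact fields VASC records for each video are defined in the Step 1 notebook, so the dictionary layout here is only an assumption.

```python
import glob
import json
import os

# Hypothetical sketch: index every .mp4 in the videos folder and save videos.json.
videos_dir = os.path.join("DrumTutorial", "videos")
videos = {}
for path in glob.glob(os.path.join(videos_dir, "*.mp4")):
    name = os.path.splitext(os.path.basename(path))[0]
    videos[name] = {"fullpath": os.path.abspath(path)}  # field name is illustrative

with open(os.path.join("DrumTutorial", "videos.json"), "w") as f:
    json.dump(videos, f, indent=4)
```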

### Step 1.5 - Processing with OpenPose

This is the main step of Part 1. We loop through all the videos and pass each one to the OpenPose app (OpenPoseDemo.exe / OpenPose.bin). Essentially, we are recreating in code what you would do on a command line for each individual video. So we need to know the location of the video, the location where we want to output the data and what flags to use. See the [OpenPose documentation](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/01_demo.md) for more details.
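
As a rough sketch, the call for a single video might look something like the following. The flag names come from the OpenPose demo documentation, but the binary name and location vary between OpenPose versions and platforms, so treat this as an assumption to check against your installation.

```python
import os
import subprocess

openpose_dir = "C:\\Users\\cas\\openpose-1.5.0-binaries-win64-gpu-python-flir-3d_recommended\\"
video_in = os.path.abspath("DrumTutorial/videos/example.mp4")   # illustrative file name
json_out = os.path.abspath("DrumTutorial/openpose/example")

cmd = [
    os.path.join(openpose_dir, "bin", "OpenPoseDemo.exe"),  # openpose.bin on Mac/Linux
    "--video", video_in,
    "--write_json", json_out,   # one JSON file per frame is written here
    "--hand",                   # only if the includeHands flag is true
    "--display", "0",
    "--render_pose", "0",
]
subprocess.run(cmd, cwd=openpose_dir, check=True)
```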

### Steps 1.6-7 - Organising Data

For each video OpenPose will output a separate JSON file for every individual frame, containing the coordinates of all the people it has found in that frame. This data needs to be combined into a single data structure so that it is more useful. That is the goal of these steps.

We use the standard NumPy array for this. But because these arrays can get large, we save them in a compressed format. The names and locations of the saved files are given in our settings.json file.
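
A minimal sketch of the save/load round trip with `np.savez_compressed`; the array shape shown here is just an assumption for illustration, not the layout VASC actually uses.

```python
import numpy as np

# Dummy array standing in for the combined keypoint data, e.g.
# (videos, frames, people, keypoint values) - the real shape may differ.
all_frames = np.zeros((1, 100, 2, 75))

np.savez_compressed("DrumTutorial/timeseries/allframedata.npz", keypoints=all_frames)

# Step 2 can then reload it like this:
with np.load("DrumTutorial/timeseries/allframedata.npz") as data:
    keypoints = data["keypoints"]
print(keypoints.shape)
```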


<!-- #region -->
## Step 2 - Cleaning the data

Step 2 is the most time-consuming step, as it involves manually checking how OpenPose has labelled the people in the video and relabelling them where necessary. We provide some visual inspection tools and some automated algorithms to speed up this process.

Open your copy of [Step2.OrganiseData.ipynb](https://github.com/InfantLab/VASC/blob/master/Step2.OrganiseData.ipynb) in Jupyter.

### Steps 2.0-1

As with Step 1, we first need to load the Python libraries we use and then open the settings.json and videos.json files created in Step 1.

### Step 2.2 Load data from Step 1

Here we reload into memory the data we saved in Step 1.

### Step 2.4 Cleaning data using the Control Panel

This is the main interface for selecting individual videos and seeing if any relabelling is required. Here we explain its features.

<img src="step2.gui.png">

The first panel of the control panel is a plot of the mean horizontal location of each participant in each frame of the video. This gives you an overview of their movement, how they have been labelled by OpenPose and how many people OpenPose thinks are in the video. There is a vertical line identifying the current frame (chosen by the slider).
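
To give a feel for what that plot shows, here is a hedged sketch using dummy data. OpenPose's BODY_25 format stores each keypoint as an (x, y, confidence) triple, but the array layout below is only illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Dummy data: (frames, people, 25 keypoints * 3 values) - illustrative layout only.
keypoints = np.random.rand(200, 3, 75)
current_frame = 50

mean_x = keypoints[:, :, 0::3].mean(axis=2)   # mean x coordinate per person per frame
for person in range(mean_x.shape[1]):
    plt.plot(mean_x[:, person], label=f"person {person}")
plt.axvline(x=current_frame, color="k", linestyle="--")  # the current-frame marker
plt.xlabel("frame")
plt.ylabel("mean horizontal position")
plt.legend()
plt.show()
```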

Below, you see the current frame overlaid with the OpenPose estimates of participant locations (including hand and finger locations if `includeHands` is true). Each participant has a number corresponding to the label given to them by OpenPose. We will see below that these labels can often be highly inconsistent from frame to frame.

Below the frame image we have several drop-downs and buttons that let us interact with the code. There is also a slider that lets us select a different frame from the video. And finally there is a set of buttons that let us jump +/-1%, +/-10 frames or +/-1 frame.

1. Use the first drop-down to pick which video to process.
2. Sometimes we have multiple cameras. Select the camera with the best view of both participants and swap it to camera 1.
3. You can delete sets for which all the data is of too poor quality to use. (They can also be excluded in **Step 3** by a flag in the data spreadsheet.)
4. Tag the infant and adult in the first frame of interest, so both individuals should be visible in that frame.
5. Try to automatically tag them in subsequent frames.
6. Manually fix anything the automatic process gets wrong.
7. Exclude other detected people (3rd parties and false positives).

### Step 2.4 - Fix by Location example

To get around the quirks of OpenPose we need to fix the labels. In the next image you can see that for video `bb22644a_08-test-trials` the algorithm was highly inconsistent with its labelling of the participants.

<img src="datatoclean.png">

The image below shows an example of using the `Fix by Location` button. The algorithm starts by assuming that individuals are correctly labelled in the first frame and then, for each subsequent frame, it assumes that the person nearest to each labelled location is the same individual. (OpenPose doesn't compare one frame with another, so it treats each one independently.) In many cases `Fix by Location` will correct the assignments, but if the data is noisy some manual intervention will also be required.

<img src="cleandata.png">


### Steps 2.5-8 Saving the data

Once the data is cleaned we save it in a multi-index dataframe to make the analysis steps easier.
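
A hedged sketch of the idea follows: the file name comes from the settings file, but the index levels and columns shown are assumptions, and the structure VASC actually writes may differ.

```python
import numpy as np
import pandas as pd

# Illustrative multi-index dataframe: one row per (video, person, frame).
index = pd.MultiIndex.from_product(
    [["video01"], ["infant", "adult"], range(3)],
    names=["video", "person", "frame"])
df = pd.DataFrame(np.random.rand(6, 3), index=index, columns=["x", "y", "conf"])

# Saving to parquet keeps the multi-index and needs pyarrow installed (see Step 0).
df.to_parquet("DrumTutorial/timeseries/cleandata.parquet")
```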
<!-- #endregion -->

## Step 3 - Extracting Movement

Open your copy of [Step3.ExtractMovement.ipynb](https://github.com/InfantLab/VASC/blob/master/Step3.ExtractMovement.ipynb) in Jupyter.


If you have any comments or questions, either contact [Caspar Addyman (c.addyman@gold.ac.uk)](mailto:c.addyman@gold.ac.uk) or submit an [issue report](https://github.com/InfantLab/VASC/issues).
Binary file added DrumTutorial/cleandata.png
Binary file added DrumTutorial/datatoclean.png
Binary file added DrumTutorial/step2.gui.png
32 changes: 20 additions & 12 deletions Step0.GettingStarted.ipynb
@@ -33,11 +33,11 @@
"source": [
"### 0.1 - OpenPoseDemo application\n",
"\n",
"Next we need to download and install the [OpenPoseDemo](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/demo_overview.md) executable.\n",
"The full OpenPose software comes as source code that your computer can compile into a working application. This is unnecessary for this project. Instead, we need to download and install the [OpenPoseDemo](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/01_demo.md) executable. (`openposedemo.exe` on Windows, `openpose.bin` on Mac/Linux.)\n",
"\n",
"Additionally, you need to download the trained neural-network models that OpenPose uses. To do this go to the `models` subdirectory of OpenPose directory, and double-click / run the `models.bat` script.\n",
"\n",
"The `openposedemo` bin/exe file can be run manually from the command line. It is worth trying this first so you understand what `openposedemo` is. See [this guide](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/demo_overview.md) or open a terminal app or Windows Powershell, navigate to the openpose installation folder and then try this command\n",
"The `openposedemo` bin/exe file can be run manually from the command line. It is worth trying this first so you understand what `openposedemo` is. See [this guide](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/01_demo.md) or open a terminal app or Windows Powershell, navigate to the openpose installation folder and then try this command\n",
"\n",
"```\n",
":: Windows\n",
@@ -53,7 +53,7 @@
"source": [
"### 0.2 - Load python libraries\n",
"\n",
"There are a handful of python libraries that we use for things like image manipulation, file operations, maths and stats. Many are probably already installed by default such as `os, math, numpy, pandas, matplotlib`. Others need adding to our python environment. \n",
"Additionally, there are a handful of python libraries that we use for things like image manipulation, file operations, maths and stats. Many are probably already installed by default such as `os, math, numpy, pandas, matplotlib`. Others need adding to our python environment. \n",
"\n",
"PyArrow is a useful extension for saving Pandas and NumPy data. We need it to move the large array created in Step 2 to Step 3. \n",
"\n",
@@ -62,7 +62,7 @@
"conda install glob2 opencv pyarrow xlrd openpyxl\n",
"```\n",
"#### Troubleshooting\n",
"If when you run the code in Steps 1, 2 & 3 you might see an error like `ModuleNotFoundError: No module named 'glob'` this is because that python module needs to be installed on your computer. If you use Anaconda, the missing module can usually be installed with the command `conda install glob`."
"If when you run the code in any of the next Steps ([Step 1](Step1.ProcessVideo.ipynb), etc) you might see an error like `ModuleNotFoundError: No module named 'glob'` this is because that python module needs to be installed on your computer. If you use Anaconda, the missing module can usually be installed with the command `conda install glob`."
]
},
{
@@ -79,13 +79,7 @@
"To make these work with the newer Jupyter Lab we also need to install the widgets lab extension, like so:\n",
"\n",
"```\n",
"Jupyter 3.0 (current)\n",
"conda install -c conda-forge jupyterlab_widgets\n",
"\n",
"Jupyter 2.0 (older)\n",
"conda install -c conda-forge nodejs\n",
"jupyter labextension install @jupyter-widgets/jupyterlab-manager\n",
"jupyter labextension install @jupyter-widgets/jupyterlab-manager ipycanvas\n",
"```\n",
"\n",
"Documentation:\n",
Expand Down Expand Up @@ -121,6 +115,8 @@
"\n",
"```jupyter nbextensions_configurator enable --user```\n",
"\n",
"Documentation:\n",
"\n",
"* https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/index.html\n",
"* https://moonbooks.org/Articles/How-to-create-a-table-of-contents-in-a-jupyter-notebook-/\n"
]
@@ -129,12 +125,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### 0.6 - Using Jupyter with network drives\n",
"### 0.7 - Using Jupyter with network drives\n",
"\n",
"By default Jupyter launched from Anaconda Navigator will open it in your home directory. It then might not be possible to access files on a network drive you need. To get around this first launch a command window for the correct Jupyter environment. Then use this command to launch Jupyter itself (assuming you want to access the U:/ drive). \n",
"\n",
"```jupyter lab --notebook-dir U:/```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 0.8 Done (hopefully!)\n",
"\n",
"Once everything is installed we can move onto [Step 1](Step1.ProcessVideo.ipynb).\n",
"\n",
"### 0.9 Something didn't work\n",
"If you encounter any problems or have any comments or questions, either contact [Caspar Addyman <c.addyman@gold.ac.uk](mailto:c.addyman@gold.ac.uk) or submit an [issue report](https://github.com/InfantLab/VASC/issues)."
]
}
],
"metadata": {
@@ -156,7 +164,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
"version": "3.8.12"
}
},
"nbformat": 4,