Pipelines for converting video files and performing object tracking, for automated behavioural coding.
The only software needed to run these pipelines is Docker; see Docker's installation instructions for your platform to get started.
To build these pipelines you will also need:
- make
- CMake
- OpenCV 2.4
Wherever possible, you should use the pre-built Docker images, described below.
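If you do need to build the images from source, the outline below is only a sketch: the repository location is a placeholder, and the assumption that a top-level Makefile drives the build is inferred from the prerequisites listed above.

$ git clone <repository-url>        # placeholder for the pipeline repository
$ cd <repository-directory>
$ make                              # assumes a top-level Makefile builds the image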
All of our Docker images are available from the Docker Hub.
To get the latest version of the video processing image run:
$ docker pull idinteraction/video
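To confirm the image is present locally, standard Docker usage applies:

$ docker images idinteraction/video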
Images are tagged periodically to mark stable, production-quality releases. A specific tagged image can be pulled like so:
$ docker pull idinteraction/<image>:<tag>
The easiest way to run these pipelines is to use the tracking tools scripts, which wrap the complexities of the individual pipelines into simple command-line tools.
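As an illustration of what such a wrapper can look like (a hypothetical sketch, not one of the actual tracking tools scripts; the script name is invented, and the mount points are taken from the commands below):

#!/bin/sh
# process-videos.sh -- hypothetical wrapper around the video processing image.
# Usage: ./process-videos.sh <input-directory> <output-directory>
IN="$1"
OUT="$2"
docker run -it --rm \
    -v "$IN":/idinteraction/in:ro \
    -v "$OUT":/idinteraction/out \
    idinteraction/video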
Instructions for running the Docker images directly on Linux are below; they may not account for platform-specific differences.
This pipeline processes the video streams of the participants in our experiments, in preparation for input to the object tracking pipeline.
The raw video streams are quartered: each frame shows a participant from three directions, plus the TV they are watching. This pipeline takes a set of raw experiment videos and splits each one into separate streams for the front, side, and back views of the participant.
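To illustrate the splitting step, a single quadrant of a quartered frame can be extracted with ffmpeg's crop filter. This is a minimal sketch of the idea only, not the pipeline's actual implementation, and the filenames are placeholders:

$ ffmpeg -i raw.mp4 -filter:v "crop=iw/2:ih/2:0:0" top-left.mp4
$ ffmpeg -i raw.mp4 -filter:v "crop=iw/2:ih/2:iw/2:ih/2" bottom-right.mp4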
The directory holding the raw video streams, and the directory to which the processed video streams will be saved, must be specified when running the Docker image. It is advisable to mount the input directory read-only (the :ro suffix in the command below).
The following command will run the video processing pipeline on any videos it finds in the input directory (edit the parts in <angle brackets> to suit your setup):
$ docker run -it --rm --name=<name> \
-v <input-directory>:/idinteraction/in:ro \
-v <output-directory>:/idinteraction/out \
idinteraction/video
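For example, with a purely illustrative container name and host paths:

$ docker run -it --rm --name=idi-video \
    -v ~/experiments/raw:/idinteraction/in:ro \
    -v ~/experiments/processed:/idinteraction/out \
    idinteraction/video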
This pipeline collects metadata about the video streams to be processed and then performs object tracking.
To configure the video starting positions and object bounding boxes (this step opens windows on your display, hence the X11 options), use:
$ docker run -it --rm --name=<name> \
-v <videos-directory>:/idinteraction/videos:ro \
-v <output-directory>:/idinteraction/output \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
idinteraction/object-tracking init
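Because the init step opens windows, the container must be allowed to connect to your X server. On Linux, one common way to grant (and later revoke) that access is with xhost, before and after running the command above; this is general X11-with-Docker practice rather than a documented requirement of the image:

$ xhost +local:     # allow local clients, including the container, to use the display
$ xhost -local:     # revoke the permission when you are done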
To perform object tracking only, use:
$ docker run -it --rm --name=<name> \
-v <videos-directory>:/idinteraction/videos:ro \
-v <output-directory>:/idinteraction/output \
idinteraction/object-tracking track
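Tracking can take a long time on large video sets. As standard Docker usage (nothing specific to this image), you can run the step detached and follow its logs, replacing -it with -d:

$ docker run -d --rm --name=idi-track \
    -v <videos-directory>:/idinteraction/videos:ro \
    -v <output-directory>:/idinteraction/output \
    idinteraction/object-tracking track
$ docker logs -f idi-track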
To create videos with the tracked bounding boxes drawn on them, to help validate the object tracking process, use:
$ docker run -it --rm --name=<name> \
-v <videos-directory>:/idinteraction/videos:ro \
-v <output-directory>:/idinteraction/output \
idinteraction/object-tracking replay
To perform all three steps (init, track, and replay) in one go:
$ docker run -it --rm --name=<name> \
-v <videos-directory>:/idinteraction/videos:ro \
-v <output-directory>:/idinteraction/output \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
idinteraction/object-tracking
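The same three stages can also be scripted explicitly. The sketch below is hypothetical (the script and container names are invented) and simply mirrors the commands above; the X11 options are only needed by the init stage but are harmless for the others:

#!/bin/sh
# run-tracking.sh -- hypothetical end-to-end driver for the object tracking image.
# Usage: ./run-tracking.sh <videos-directory> <output-directory>
VIDEOS="$1"
OUT="$2"
for STAGE in init track replay; do
    docker run -it --rm --name="idi-$STAGE" \
        -v "$VIDEOS":/idinteraction/videos:ro \
        -v "$OUT":/idinteraction/output \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e DISPLAY="unix$DISPLAY" \
        idinteraction/object-tracking "$STAGE"
done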
The IDInteraction Processing Pipelines were developed in the IDInteraction project, funded by the Engineering and Physical Sciences Research Council, UK, through grant agreement number EP/M017133/1.
Copyright (c) 2015, 2016 The University of Manchester, UK.
Licenced under LGPL version 2.1. See LICENCE for details.