This repository contains an OpenNI2-based gesture recognizer for the Kinect [2], made for a multimodal browser-based GIS. It translates hand and body gestures into keyboard and mouse controls. Among other libraries, the software makes use of the Gesture Recognition Toolkit (GRT) by Nick Gillian [3].
A corresponding browser-based GIS application [1] enables users to create and store spatial polygons, lines, and descriptions on a city map for urban planning.
The gesture recognizer was developed at the Institute for Geoinformatics (ifgi), Germany, as part of a study project in the master-level course 'Interacting with Geographic Information' in the winter term 2013/14. The project participants created two browser-based GIS applications that interface with gesture and voice controls.
Although the gesture recognizer was developed specifically for this project, it works as a stand-alone application and would require little additional coding to integrate with other software. Please let me know if you are interested in doing so.
More information about the recognizer is available on the wiki of this repository and in the final project report [4].
## Resources
[1] The source code of the map application is available at https://github.com/MarkusKonk/Geographic-Interaction.
[2] Information about OpenNI is available at http://structure.io/openni.
[3] Information about GRT is available at http://www.nickgillian.com/wiki/pmwiki.php/GRT/GestureRecognitionToolkit.
[4] The final project report is available at https://github.com/MatthiasHinz/Gesture_Recognizer_for_Web_GIS/raw/master/Hinz_final%20report_IWGI_Feb2014.pdf?raw=true