- Demo Link (Note: the demo was recorded during the hackathon. It misses a few points but gives a quick introduction to the system and how it works.)
The application is divided into two parts.
- Image analysis using tactile interface
- Gesture-controlled directional image analysis
- A user interacts with iBuddy using three buttons: Red, Green, and Yellow.
- The screen shows a live preview of the camera view and also displays the output of the analysis.

##### Use cases of buttons
- Yellow Button: Capture a picture of the camera view
- Red Button: Answer No
- Green Button: Answer Yes
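For concreteness, here is a minimal sketch of the button handling on the Pi, assuming the buttons are wired to BCM pins 17, 27, and 22 (hypothetical pin numbers, not the actual wiring) and the official camera module provides the preview and capture:

```python
# Minimal sketch of the three-button interface; pin numbers are assumptions.
import signal

import RPi.GPIO as GPIO
from picamera import PiCamera

YELLOW_PIN, RED_PIN, GREEN_PIN = 17, 27, 22  # hypothetical wiring

camera = PiCamera()
camera.start_preview()  # mirror the live camera view on the attached screen

def on_yellow(channel):
    # Yellow: capture the current camera view to a file for analysis.
    camera.capture('/tmp/capture.jpg')

def on_red(channel):
    print('Answer: No')

def on_green(channel):
    print('Answer: Yes')

GPIO.setmode(GPIO.BCM)
for pin, handler in [(YELLOW_PIN, on_yellow),
                     (RED_PIN, on_red),
                     (GREEN_PIN, on_green)]:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(pin, GPIO.FALLING, callback=handler, bouncetime=300)

signal.pause()  # keep the process alive; callbacks fire on button presses
```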
##### Image analysis using tactile interface
- The user captures a picture of an image containing a human face using the Yellow Button.
- The image is sent to the Azure Face API, and the overall analysis comes back as JSON (a sketch of the call follows the sample output below).
- The user can listen to an overall description of the image and also see it on the screen.
- iBuddy then asks the user whether he/she wants to know the emotion detected in the image.
- The user can answer Yes/No using the Green and Red Buttons.
- iBuddy speaks the response accordingly.
    Description: A man looking at camera
    Emotion: disgusted
    Text:

----------------------
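The Face API call itself can be quite small. Below is a minimal sketch using the `requests` library; the endpoint region and subscription key are placeholders:

```python
# Minimal sketch of the Azure Face API request (placeholder endpoint and key).
import requests

FACE_ENDPOINT = 'https://<your-region>.api.cognitive.microsoft.com/face/v1.0/detect'
SUBSCRIPTION_KEY = '<your-face-api-key>'

def detect_emotion(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    response = requests.post(
        FACE_ENDPOINT,
        params={'returnFaceAttributes': 'emotion'},
        headers={
            'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY,
            'Content-Type': 'application/octet-stream',
        },
        data=image_data,
    )
    response.raise_for_status()
    faces = response.json()  # one entry per detected face
    if not faces:
        return None
    # Each face carries per-emotion confidence scores; report the strongest.
    emotions = faces[0]['faceAttributes']['emotion']
    return max(emotions, key=emotions.get)
```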
- The user captures a picture of an image containing text using the Yellow Button.
- The image is sent to the Azure Computer Vision API, and the overall analysis comes back as JSON (a sketch of the call follows the sample output below).
- The user can listen to an overall description of the image and also see it on the screen.
- iBuddy then asks the user whether he/she wants to know the text recognised in the image.
- The user can answer Yes/No using the Green and Red Buttons.
- iBuddy speaks the response accordingly.
    Description: white board with black text
    Emotion:
    Text: NOTICE NO EATING OR DRINKING IN THIS AREA

----------------------
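The description and the recognised text can both come from the Computer Vision API. A minimal sketch, again with placeholder endpoint and key (the exact API version may differ from what we used):

```python
# Minimal sketch of the description + OCR requests (placeholders throughout).
import requests

VISION_BASE = 'https://<your-region>.api.cognitive.microsoft.com/vision/v3.2'
HEADERS = {
    'Ocp-Apim-Subscription-Key': '<your-vision-api-key>',
    'Content-Type': 'application/octet-stream',
}

def describe_and_read(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()

    # Overall natural-language description of the image.
    desc = requests.post(VISION_BASE + '/describe',
                         headers=HEADERS, data=image_data).json()
    captions = desc['description']['captions']
    caption = captions[0]['text'] if captions else ''

    # Printed-text recognition (synchronous OCR endpoint).
    ocr = requests.post(VISION_BASE + '/ocr',
                        headers=HEADERS, data=image_data).json()
    words = [word['text']
             for region in ocr['regions']
             for line in region['lines']
             for word in line['words']]

    return caption, ' '.join(words)
```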
##### Gesture-controlled directional image analysis
- The user can point to any of the four quadrants of the image and make a small circle with a finger pointed in that direction.
- The Leap Motion detects the gesture, and iBuddy crops the image to that quadrant.
- That part of the image is sent for analysis, and the relevant results are returned.
- Because this is a gesture-controlled system, iBuddy describes exactly the part of the picture that you circle.
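A minimal sketch of the gesture handling, assuming the legacy Leap Motion v2 Python SDK and Pillow for cropping; the quadrant mapping from the fingertip position is a reconstruction of the idea, not the project's exact code:

```python
# Detect the circle gesture, map the pointing fingertip to an image quadrant,
# and crop that quadrant for analysis. File paths are assumptions.
import sys

import Leap
from PIL import Image

def crop_quadrant(path, left_half, top_half):
    # Cut the indicated quadrant out of the captured image.
    img = Image.open(path)
    w, h = img.size
    left = 0 if left_half else w // 2
    top = 0 if top_half else h // 2
    img.crop((left, top, left + w // 2, top + h // 2)).save('/tmp/quadrant.jpg')

class QuadrantListener(Leap.Listener):
    def on_connect(self, controller):
        controller.enable_gesture(Leap.Gesture.TYPE_CIRCLE)

    def on_frame(self, controller):
        frame = controller.frame()
        for gesture in frame.gestures():
            # Act once, when a circle gesture completes.
            if (gesture.type != Leap.Gesture.TYPE_CIRCLE
                    or gesture.state != Leap.Gesture.STATE_STOP):
                continue
            finger = frame.pointables.frontmost
            # Normalize the fingertip position into [0, 1] on each axis.
            point = frame.interaction_box.normalize_point(
                finger.stabilized_tip_position)
            crop_quadrant('/tmp/capture.jpg',
                          left_half=point.x < 0.5,
                          top_half=point.y >= 0.5)

controller = Leap.Controller()
controller.add_listener(QuadrantListener())
sys.stdin.readline()  # keep listening until Enter is pressed
```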
##### Tech stack
- Python
- Leap Motion
- Raspberry Pi
- Azure
- Natural Language Processing
- Computer Vision
- Ngrok
- Socket programming
- Python text-to-speech
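All of the spoken prompts and answers can be generated offline on the Pi. A minimal sketch using pyttsx3 (an assumed engine choice for the "Python text-to-speech" item above):

```python
# Minimal sketch of the spoken prompts; pyttsx3 is an assumed engine choice.
import pyttsx3

engine = pyttsx3.init()

def speak(text):
    engine.say(text)
    engine.runAndWait()  # blocks until the utterance finishes

speak('Description: a man looking at the camera.')
speak('Do you want to know the emotion detected in the image? '
      'Press Green for yes or Red for no.')
```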
##### Challenges
- Setting up the Raspberry Pi
- Integrating the Leap Motion
##### Accomplishments
- Working product
- Successful gesture-controlled directional analysis
- Low latency
- Good accuracy
##### What's next
- 360-degree image analysis
- Live video stream analysis
- Better physical device and integration