This repository contains the core methods and models described in the paper “Represent Code as Action Sequence for Predicting Next Method Call.” It models code editing as a sequence of human-like actions and uses that action sequence, together with developer intent, to predict the next method call in Python code.
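As a rough illustration of the idea only (not the paper's actual model), a next-call predictor can treat the most recent method call as context and return the most frequent follow-up call seen in training sequences. The class and data below are hypothetical, a minimal bigram baseline:

```python
from collections import Counter, defaultdict

# Hypothetical bigram baseline, not the paper's model: predict the next
# method call from the previous one using co-occurrence counts.
class NextCallBigram:
    def __init__(self):
        # prev call -> Counter of observed next calls
        self.following = defaultdict(Counter)

    def fit(self, sequences):
        # sequences: iterable of method-call name lists, in edit order
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.following[prev][nxt] += 1

    def predict(self, prev_call):
        # Return the most frequently observed follow-up call, or None.
        counts = self.following.get(prev_call)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Toy "action sequences" of method calls extracted from edit histories.
model = NextCallBigram()
model.fit([
    ["open", "read", "close"],
    ["open", "read", "parse", "close"],
    ["open", "read", "close"],
])
print(model.predict("open"))  # -> read
```

The paper's sequence model replaces these raw counts with a learned representation, but the input/output contract is the same: a history of actions in, a predicted method call out.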
For HAR, our novel HDAD (IGAV) dataset was constructed by recording 4 dynamic and 3 static activities with the accelerometer and gyroscope sensors of an iOS smartphone, worn in two different positions, for 15 seconds per activity. The activities were captured in real time by placing the phone on the waist of each of 10 volunteers.
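Recordings like these are typically segmented into fixed-length overlapping windows before classification. A minimal sketch of that preprocessing step (the 50 Hz sampling rate, window length, and step are illustrative assumptions, not values stated by the project):

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Split a (samples, channels) array into overlapping windows.

    win_len and step are in samples; the values used below are
    assumptions for illustration only.
    """
    windows = []
    for start in range(0, len(signal) - win_len + 1, step):
        windows.append(signal[start:start + win_len])
    if not windows:
        return np.empty((0, win_len, signal.shape[1]))
    return np.stack(windows)

# Assuming a 50 Hz sampling rate, a 15-second recording of 6 channels
# (3-axis accelerometer + 3-axis gyroscope) has 750 samples.
recording = np.zeros((750, 6))
wins = sliding_windows(recording, win_len=128, step=64)
print(wins.shape)  # (10, 128, 6)
```

Each window then becomes one training example for an activity classifier.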
A Human Activity Recognition project built on a fusion of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. It covers model training, metric visualization, and action prediction in videos, and includes a user-friendly Streamlit interface (linked at the bottom).
ActionSense combines computer vision with IoT connectivity to build a human activity recognition system. It uses OpenCV for motion detection and pySerial for IoT integration, so connected environments can respond automatically to detected human actions.
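At its core, this style of motion detection amounts to frame differencing: threshold the absolute difference between consecutive frames and check whether enough pixels changed. A minimal NumPy sketch of that idea (the thresholds are illustrative; the project itself uses OpenCV, and a real detection would then be forwarded over pySerial to the IoT device):

```python
import numpy as np

def motion_detected(prev_frame, frame, diff_thresh=25, min_pixels=500):
    # Absolute per-pixel difference between consecutive grayscale frames.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    # A pixel "moved" if its intensity changed by more than diff_thresh;
    # report motion only if enough pixels moved (filters sensor noise).
    return int((diff > diff_thresh).sum()) >= min_pixels

# Static scene: nothing changes between frames, so no motion.
a = np.zeros((240, 320), dtype=np.uint8)
print(motion_detected(a, a))  # False

# An object appears in a 40x40 region (1600 changed pixels).
b = a.copy()
b[100:140, 100:140] = 200
print(motion_detected(a, b))  # True
```

OpenCV's `cv2.absdiff` and `cv2.threshold` implement the same comparison more efficiently, usually after blurring to suppress noise.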