Deep learning models to predict human skeletons
Explore the docs »
View Demo
·
Report Bug
·
Request Feature
The goal of this project is to predict human skeletons using deep-learning architectures, in particular FEDformer and Autoformer.
It relies heavily on the Time Series Library from thuml.
Don't forget to give this project and thuml's project a star! Thanks again!
This repo provides several features:
- You can preprocess the NTU RGB+D dataset efficiently. The implementation is in the data_loader folder.
- You can train FEDformer and Autoformer on this dataset thanks to exp_Long_Term_Forecast.
- You can plot your results. They are stored in test_results after your model is tested. If you just want to plot the skeleton, you can look here; a minimal plotting sketch is also shown below.
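For illustration only, here is a minimal sketch of plotting one frame of a preprocessed skeleton with matplotlib. The file name and the array layout (frames, joints, 3) are assumptions, not the repo's exact format; adapt them to the actual output of txt2npy and of the preprocessing in data_loader.

```python
# Minimal sketch: plot one frame of a preprocessed NTU RGB+D skeleton.
# Assumption: the .npy file holds an array of shape (frames, n_joints, 3).
import numpy as np
import matplotlib.pyplot as plt

skeleton = np.load("dataset/NTU_RGB+D/numpyed/S001C001P001R001A001.npy")  # hypothetical file name
frame = skeleton[0]  # joints of the first frame, shape (n_joints, 3)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(frame[:, 0], frame[:, 1], frame[:, 2])
ax.set_title("NTU RGB+D skeleton, frame 0")
plt.show()
```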
You can see how the model behaves on the dataset in the videos_example folder. Please note that there is still a lot of room for improvement.
I added a README to every folder to help you grasp what the functions are supposed to do.
A FAQ is also available for further technical questions. These comments are unfortunately in French.
If you want to quickly use some functions of this repo, I added COMMANDE_UTILE.ipynb, which summarizes the usual functions.
To get a local copy up and running, follow these simple steps.
- Clone the repo:
  git clone https://github.com/gardiens/Time-Series-Library_babygarches.git
- Install the Python requirements:
  pip install -r requirements.txt
- If you want to use NTU RGB+D, download the dataset here
- Run txt2npy. The .npy files should be stored in dataset/NTU_RGB+D/numpyed/ and the raw data should be in dataset/NTU_RGB+D/raw/
- Build the CSV for the data (it may take a while):
  python3 build_csv.py
- Then run main.py with your arguments :) Some scripts are provided in the scripts folder, for example:
  sh scripts/utils/template_script.sh
- You can deep dive into your results with several tools. Videos of some samples are stored in the test_results folder, a dataframe with the loss of each sample is stored in results, and you can inspect your runs in the runs folder thanks to TensorBoard. If you are working on the NTU RGB+D dataset, you may need to install ffmpeg to view the videos. A minimal sketch for loading the loss dataframe is shown below.
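As an example of a quick result inspection, the following sketch loads the per-sample loss dataframe with pandas. The path and the column name "loss" are assumptions; check results_df.csv for the actual schema produced by your run.

```python
# Minimal sketch: inspect the per-sample losses written after a test run.
# Assumptions: results_df.csv sits directly in the results folder and has a "loss" column.
import pandas as pd

df = pd.read_csv("results/results_df.csv")  # adjust to your run's folder
print(df.describe())                        # overall loss statistics
print(df.sort_values("loss", ascending=False).head(10))  # worst predicted samples
```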
This is the roadmap if you want to push the model further; however, I will not update the repo in the near future.
- Insert categorical values in the prediction.
- Insert a wavelet transform.
- Rewrite the preprocessing step so that new steps are easier to add.
- Rewrite the preprocessing steps in PyTorch.
- Make it easier to fetch new results and get insights faster: fetch the data faster and provide more visual analysis of the models (gradients, non-zero layers, ...).
Project Link: https://github.com/gardiens/Time-Series-Library_babygarches
You can contact me by email (pierrick.bournez@student-cs.fr).
If you have new ideas or findings, you can talk to Mr. Rambaud: philippe.rambaud@lisn.fr
Please star the repo if you find it useful :)
You can get access to some insights from my experience here, if you are lucky enough.
Incoming
This library is built on the following resources:
- Time Series Library (TSlib): https://github.com/thuml/Time-Series-Library/tree/main
- You can download the NTU RGB+D dataset here: https://rose1.ntu.edu.sg/dataset/actionRecognition/
- Credit to Tobias Baumgaertner for the main picture of the README.
The code is organized as follows:
- When you run main.py, it builds an instance of exp/Long_term_forecasting, which is the training/test pipeline.
- It finds the dataset in dataset/your_dataset and builds the model in models/your_model, then runs the training/test code.
- You can fetch the results and logs in several folders:
  - In test_results you can see videos of your model after the training session.
  - In results you have results_df.csv, a dataframe that gives the loss of every sample of the model.
  - In runs you have the TensorBoard logs of the run.
The setting name is supposed to be a unique ID for each model run.
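For reference, a setting name of this kind is usually assembled from the run arguments. A minimal sketch in the spirit of TSlib follows; the exact fields and their order used by this repo may differ.

```python
# Illustrative sketch: compose a unique setting name from run arguments.
# The fields below are assumptions; the repo may use a different set or order.
def build_setting(args, iteration: int) -> str:
    return "{}_{}_{}_ft{}_sl{}_pl{}_dm{}_{}".format(
        args.model_id,   # experiment identifier
        args.model,      # e.g. FEDformer or Autoformer
        args.data,       # dataset name, e.g. NTU_RGB+D
        args.features,   # forecasting mode (M / S / MS)
        args.seq_len,    # input sequence length
        args.pred_len,   # prediction length
        args.d_model,    # model dimension
        iteration,       # repetition index of the experiment
    )

# Hypothetical result: build_setting(args, 0)
# -> "skeleton_FEDformer_NTU_RGB+D_ftM_sl96_pl96_dm512_0"
```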