# PG-BIG

PG-BIG is a framework for personalized guidance in biomechanically informed generative AI, focusing on motion modeling and evaluation using VQ-VAE and surrogate models.
- Summary
- Installation
- VQ-VAE Training
- Profile Prior Training
- Surrogate Training
- Guidance
- Evaluation Metrics
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/your-org/PG-BIG.git
   cd PG-BIG
   ```

2. Install dependencies using pip or Conda:

   ```bash
   pip install -r requirements.txt
   # or, with Conda:
   conda env create -f environment.yaml
   ```
### Datasets

You can download or create the datasets by following the instructions below.

1. Navigate to `dataset` and create a new directory called `three_dimensional_motion_capture`:

   ```bash
   cd dataset
   mkdir -p three_dimensional_motion_capture
   ```

2. Go to the Figshare collection page and download both:

   - Three-Dimensional Motion Capture Data of All Athletes
   - Participants' Information and Sampling Frequency

3. Place both zip files inside the `three_dimensional_motion_capture` directory, then unzip them:

   ```bash
   unzip Kinematic_Data.zip -d Kinematic_Data
   unzip Participants\ Info.zip -d Participants\ Info
   ```

4. Fit the dataset markers to a Rajagopal skeleton:

   1. Download the Rajagopal OpenSim model by opening the link in your browser:

      ```bash
      "$BROWSER" https://simtk.org/frs/?group_id=773
      ```

   2. Place the downloaded zip file (`FullBodyModel-4.0.zip`) into the `dataset` directory, then unzip the model:

      ```bash
      unzip FullBodyModel-4.0.zip -d FullBodyModel-4.0
      ```

   3. Run the retargeting algorithm to save skeletal models for each subject inside `dataset/183_athletes`:

      ```bash
      python retarget_dataset.py
      ```

      To speed up the process, you can use multiple workers, replacing `<num_workers>` with the number of workers you want to use:

      ```bash
      python retarget_dataset.py --num_workers <num_workers>
      ```
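The multi-worker mode can be pictured as a process pool over subjects. A minimal sketch, where `retarget_subject` is a hypothetical stand-in for the per-subject logic in `retarget_dataset.py`:

```python
# Sketch of multi-worker retargeting. `retarget_subject` is a
# hypothetical placeholder, not the actual function in retarget_dataset.py.
from multiprocessing import Pool

def retarget_subject(subject_id: int) -> str:
    # Placeholder: fit the subject's markers to the skeleton model
    # and save the result under dataset/183_athletes/.
    return f"dataset/183_athletes/subject_{subject_id:03d}.osim"

def retarget_all(subject_ids, num_workers: int = 4):
    # Distribute subjects across worker processes.
    with Pool(processes=num_workers) as pool:
        return pool.map(retarget_subject, subject_ids)

if __name__ == "__main__":
    paths = retarget_all(range(3), num_workers=2)
    print(paths)
```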
## VQ-VAE Training

Train the VQ-VAE model for motion representation:

```bash
python3 train_vq.py --batch-size 256 --lr 2e-4 --total-iter 300000 --lr-scheduler 200000 --nb-code 512 --down-t 2 --depth 3 --dilation-growth-rate 3 --out-dir output --dataname mcs --vq-act relu --quantizer ema_reset --loss-vel 0.5 --recons-loss l1_smooth --exp-name 183_athletes
```

If you have two or more CUDA GPUs, you can use DeepSpeed for faster training. Replace `<num_gpus>` with the number of GPUs you will use for training:

```bash
deepspeed --num_gpus=<num_gpus> train_vqvae.py --batch-size 256 --window-size 512 --lr 2e-4 --total-iter 300000 --lr-scheduler 200000 --nb-code 512 --down-t 2 --depth 3 --dilation-growth-rate 3 --out-dir output --dataname 183_athletes --vq-act relu --quantizer ema_reset --loss-vel 0.5 --recons-loss l1_smooth --exp-name VQVAE
```

Supported datasets: `183_athletes`, `addbiomechanics`.
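The quantizer options above (`--nb-code 512`, `--quantizer ema_reset`, the commitment term) refer to standard VQ-VAE machinery: each encoder output is snapped to its nearest codebook entry, and the distance to that entry gives the commitment loss. A minimal pure-Python sketch; the toy codebook and latents are illustrative, not the trained model's:

```python
# Minimal sketch of the vector-quantization step in a VQ-VAE.
# Toy 2-D codebook and latents for illustration only.

def sq_dist(a, b):
    # Squared Euclidean distance between two vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantize(latents, codebook):
    """Map each latent vector to the index and vector of its nearest code."""
    indices, quantized = [], []
    for z in latents:
        idx = min(range(len(codebook)), key=lambda k: sq_dist(z, codebook[k]))
        indices.append(idx)
        quantized.append(codebook[idx])
    return indices, quantized

def commitment_loss(latents, quantized):
    """Mean squared distance between latents and their chosen codes."""
    return sum(sq_dist(z, q) for z, q in zip(latents, quantized)) / len(latents)

codebook = [[0.0, 0.0], [1.0, 1.0]]
latents = [[0.1, -0.1], [0.9, 1.2]]
idx, q = quantize(latents, codebook)
print(idx)  # → [0, 1]
```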
## Profile Prior Training

Train the profile prior model for personalized guidance:

```bash
python train_profile_prior.py
```
## Surrogate Training

Train surrogate models for biomechanical evaluation:

```bash
python train_surrogate.py --dataname <dataset>
```
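A surrogate model stands in for an expensive biomechanical computation with a cheap fitted predictor. As an illustration of the idea only (not this repo's architecture), a one-feature least-squares surrogate fitted to a synthetic metric:

```python
# Illustrative surrogate: replace an expensive biomechanical
# computation with a cheap fitted predictor. The one-feature
# least-squares model and synthetic data are assumptions.

def fit_linear_surrogate(xs, ys):
    """Closed-form least squares for y ≈ w*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

# Pretend this is an expensive biomechanical quantity (e.g. a joint load).
expensive_metric = lambda x: 2.0 * x + 1.0

xs = [0.0, 1.0, 2.0, 3.0]
ys = [expensive_metric(x) for x in xs]
w, b = fit_linear_surrogate(xs, ys)
print(round(w, 6), round(b, 6))  # → 2.0 1.0 (recovers the relation)
```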
## Guidance

Use the guidance module to generate or refine motions based on personalized profiles:

```bash
python guidance.py --input <motion_file> --profile <profile_file>
```
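One common way such guided refinement works is by gradient descent on an objective that trades off fidelity to the input motion against matching a profile-specified target. The sketch below uses a toy scalar objective and is an assumption, not PG-BIG's actual formulation:

```python
# Toy sketch of guidance: refine a motion parameter x by descending
# fidelity*(x - x0)^2 + (x - target)^2. The objective, names, and
# weights are illustrative assumptions.

def guide(x0, target, steps=100, lr=0.1, fidelity=0.5):
    """Gradient descent toward a compromise between x0 and target."""
    x = x0
    for _ in range(steps):
        grad = 2 * fidelity * (x - x0) + 2 * (x - target)
        x -= lr * grad
    return x

refined = guide(x0=0.0, target=1.0)
print(round(refined, 4))  # → 0.6667, between the original and the target
```

The fixed point is `(fidelity * x0 + target) / (1 + fidelity)`, so a larger `fidelity` weight keeps the refined motion closer to the input.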
## Evaluation Metrics

Evaluate generated motions using the built-in metrics:

- Reconstruction Loss
- Perplexity
- Commitment Loss
- Temporal Consistency

Run the evaluation:

```bash
python evaluate.py --model <model_checkpoint> --dataset <dataset>
```
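Two of the metrics above have standard definitions that can be sketched directly: codebook perplexity is the exponentiated entropy of code usage (higher means more codes are actually used), and temporal consistency is framed here as mean squared frame-to-frame difference, which is an assumption about this repo's exact definition:

```python
# Sketches of codebook perplexity and a temporal-consistency score.
# The temporal-consistency definition is an illustrative assumption.
import math
from collections import Counter

def perplexity(code_indices):
    """exp(entropy) of the empirical code-usage distribution."""
    counts = Counter(code_indices)
    n = len(code_indices)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return math.exp(entropy)

def temporal_consistency(frames):
    """Mean squared difference between consecutive frames (lower = smoother)."""
    diffs = [
        sum((a - b) ** 2 for a, b in zip(f1, f2))
        for f1, f2 in zip(frames, frames[1:])
    ]
    return sum(diffs) / len(diffs)

print(round(perplexity([0, 1, 0, 1]), 4))  # → 2.0 (uniform over 2 codes)
```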