A New Perspective for Shuttlecock Hitting Event Detection


$\large{\textbf{Abstract}}$

This article introduces a novel approach to shuttlecock-hitting event detection. Instead of relying on generic methods, we capture the players' hitting actions by reasoning over a sequence of images. To learn the features of hitting events in a video clip, we utilize a deep learning model known as SwingNet, which is designed to capture the characteristics and patterns associated with the act of hitting in badminton. By training SwingNet on the provided video clips, we enable the model to recognize and identify instances of hitting events from their distinctive features. Furthermore, we apply a specific video processing technique to extract prior features from the video, significantly reducing the model's learning difficulty. The proposed method not only provides an intuitive and user-friendly approach but also offers a fresh perspective on the task of detecting badminton hitting events. The datasets and trained models can be found here.

1. Environmental Setup

Hardware Information
  • CPU: Intel® Core™ i7-11700F
  • GPU: GeForce GTX 1660 SUPER™ VENTUS XS OC (6G)
Create Conda Environments

TrackNetv2

$ conda create -n tracknetv2 python=3.9 -y

SwingNet

$ conda create -n golfdb python=3.8 -y

ViT

$ conda create -n ViT_j python=3.9 -y

YOLOv5

$ conda create -n yolov5 python=3.7 -y

YOLOv8

$ conda create -n yolov8 python=3.7 -y
Install Required Packages

TrackNetv2

$ conda activate tracknetv2
$ sudo apt-get install git
$ sudo apt-get install python3-pip
$ git clone https://nol.cs.nctu.edu.tw:234/lukelin/TrackNetV2_pytorch.git
$ pip3 install pandas
$ pip3 install opencv-python
$ pip3 install matplotlib
$ pip3 install -U scikit-learn
$ pip3 install torch
$ pip3 install torchvision

SwingNet

$ conda activate golfdb
$ git clone https://github.com/wmcnally/golfdb.git
$ pip3 install opencv-python
$ pip3 install scipy
$ pip3 install pandas
$ pip3 install torch
$ pip3 install torchvision
$ pip3 install torchaudio

ViT

$ conda activate ViT_j
$ git clone https://github.com/jeonsworld/ViT-pytorch.git
$ cd ViT-pytorch/
$ pip3 install -r requirements.txt
$ mkdir checkpoint/
$ cd checkpoint/
$ wget https://storage.googleapis.com/vit_models/imagenet21k+imagenet2012/ViT-B_16.npz
$ git clone https://github.com/NVIDIA/apex    # A PyTorch Extension
$ cd apex/
$ python3 setup.py install

YOLOv5

$ conda activate yolov5
$ git clone https://github.com/ultralytics/yolov5.git
$ cd yolov5/
$ pip install -r requirements.txt

YOLOv8

$ conda activate yolov8
$ git clone https://github.com/ultralytics/ultralytics.git
$ cd ultralytics/
$ pip install -r requirements.txt

for Conda users

$ cd crc/colab/
$ conda env create -f environment.yml
$ conda activate shuttlecock

2. Inference Details

Datasets

The datasets and trained models can be found here.

Stage 1 dataset

Stage 2 dataset

Folder Structure on Local Machine
  • Create the following folder structure on the local machine

    Badminton/
    ├── data/
    │   └── part1/
    │       └── val/
    └── src/
        ├── TrackNetV2_pytorch/
        │   ├── 10-10Gray/
        │   │   ├── denoise10_custom.py
        │   │   └── predict10_custom.py
        │   ├── HitFrame.py
        │   ├── LandingX.py
        │   └── event_detection_custom.py
        ├── ultralytics/
        │   ├── demo.py
        │   └── submit.py
        ├── ViT-pytorch_Backhand/
        │   └── submit.py
        ├── ViT-pytorch_BallHeight/
        │   └── submit.py
        ├── ViT-pytorch_BallType/
        │   └── submit.py
        ├── ViT-pytorch_Hitter/
        │   └── submit.py
        ├── ViT-pytorch_RoundHead/
        │   └── submit.py
        ├── ViT-pytorch_Winner/
        │   └── submit.py
        ├── postprocess/
        │   ├── get_hitframe_yolo.py
        │   └── get_hitframe.py
        ├── preprocess/
        │   └── rt_conversion_datasets.py
        └── yolov5/
            ├── LandingY_Hitter_Defender_Location.py
            ├── demo.py
            └── detect.py
VideoName, ShotSeq, HitFrame
  1. put Badminton/data/part2/test/00170/ .. /00399/ into Badminton/data/part1/val/
    → Badminton/data/part1/val/00001/ .. /00399/    # 1280x720
    # CodaLab
    → Badminton/data/CodaLab/testdata_track1/00170/ .. /00399/    # 1280x720
  2. convert val/+test/ to val_test_xgg/ (an illustrative conversion sketch follows this list)
    $ conda activate golfdb
    $ cd Badminton/src/preprocess/
    $ mkdir val_test_xgg
    $ python3 rt_conversion_datasets.py
    → Badminton/src/preprocess/val_test_xgg/    # 1280x720
    # CodaLab
    → Badminton/src/preprocess/CodaLab/testdata_track1/    # 1280x720
  3. upload val_test_xgg/ to google drive Teaching_Computer_to_Watch_Badminton_Matches_Taiwan_first_competition_combining_AI_and_sports/datasets/part1/
    → Teaching_Computer_to_Watch_Badminton_Matches_Taiwan_first_competition_combining_AI_and_sports/datasets/part1/val_test_xgg/
    → execute golfdb_xgg_inference_best.ipynb
    → src/Notebook/golfdb/golfdb_G3_fold5_iter3000_val_test_X.csv    # 0.0426
    # CodaLab
    → src/Notebook/golfdb/CodaLab_testdata_track1.csv
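
The exact conversion performed by rt_conversion_datasets.py is defined by the script itself; as referenced in step 2, below is a minimal sketch of one plausible conversion (grayscale frame differencing to emphasize motion), with paths matching the folder structure above. Treat it as an illustration of the idea, not the script:

```python
# Hypothetical sketch of a rally-video conversion pass (NOT the exact
# rt_conversion_datasets.py logic): each clip is re-encoded with grayscale
# frame differencing so that moving objects (players, shuttlecock) stand out.
import os
import cv2

SRC = "Badminton/data/part1/val"                # input rally folders
DST = "Badminton/src/preprocess/val_test_xgg"   # output folder

os.makedirs(DST, exist_ok=True)
for rally in sorted(os.listdir(SRC)):
    cap = cv2.VideoCapture(os.path.join(SRC, rally, f"{rally}.mp4"))
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(os.path.join(DST, f"{rally}.mp4"),
                          cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # difference against the previous frame highlights motion cues
        diff = cv2.absdiff(gray, prev) if prev is not None else gray * 0
        out.write(cv2.cvtColor(diff, cv2.COLOR_GRAY2BGR))
        prev = gray
    cap.release()
    out.release()
```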
Hitter
  1. put golfdb_G3_fold5_iter3000_val_test_X.csv into Badminton/src/postprocess/
    → Badminton/src/postprocess/golfdb_G3_fold5_iter3000_val_test_X.csv
    # CodaLab
    → Badminton/src/postprocess/CodaLab/CodaLab_testdata_track1.csv
  2. extract hitframe from csv file (see the sketch after this list)
    $ cd Badminton/src/postprocess/
    $ mkdir HitFrame
    $ mkdir HitFrame/1
    $ python3 get_hitframe.py
    >> len(vns), len(hits), len(os.listdir(savePath)) = 4007, 4007, 4007
    → Badminton/src/postprocess/HitFrame/1/    # 720x720, 4007; # CodaLab: 720x720, 2408
  3. execute hitter inference
    $ conda activate ViT_j
    $ cd Badminton/src/ViT-pytorch_Hitter/
    $ python3 submit.py --model_type ["ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16"] --checkpoint ["output/fold1_Hitter_ViT-B_16_checkpoint.bin","output/fold2_Hitter_ViT-B_16_checkpoint.bin","output/fold3_Hitter_ViT-B_16_checkpoint.bin","output/fold4_Hitter_ViT-B_16_checkpoint.bin","output/fold5_Hitter_ViT-B_16_checkpoint.bin"] --img_size [480,480,480,480,480]
    → Badminton/src/ViT-pytorch_Hitter/golfdb_G3_fold5_iter3000_val_test_hitter_vote.csv    # 0.0494
    → Badminton/src/ViT-pytorch_Hitter/golfdb_G3_fold5_iter3000_val_test_hitter_mean.csv    # 0.0494
    # CodaLab
    → Badminton/src/ViT-pytorch_Hitter/CodaLab_testdata_track1_hitter_vote.csv
    → Badminton/src/ViT-pytorch_Hitter/CodaLab_testdata_track1_hitter_mean.csv
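
A minimal sketch of the hit-frame extraction referenced in step 2. It assumes the CSV carries the VideoName, ShotSeq, and HitFrame columns shown above, and that each 1280x720 frame is center-cropped to 720x720 before classification; the filename pattern and relative paths are illustrative, not copied from get_hitframe.py:

```python
# Minimal sketch of the hit-frame extraction step (the real logic lives in
# get_hitframe.py). Assumption: the 1280x720 frame is center-cropped to
# 720x720 before being fed to the ViT classifiers.
import cv2
import pandas as pd

df = pd.read_csv("golfdb_G3_fold5_iter3000_val_test_X.csv")  # VideoName, ShotSeq, HitFrame
save_path = "HitFrame/1/"

for _, row in df.iterrows():
    vn = str(row["VideoName"]).replace(".mp4", "")
    cap = cv2.VideoCapture(f"../../data/part1/val/{vn}/{vn}.mp4")
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(row["HitFrame"]))   # jump to the hit frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        continue
    h, w = frame.shape[:2]                  # 720, 1280
    x0 = (w - h) // 2                       # center crop to h x h (720x720)
    crop = frame[:, x0:x0 + h]
    cv2.imwrite(f"{save_path}{vn}_{int(row['ShotSeq'])}.jpg", crop)
```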
RoundHead
  1. execute roundhead inference (vote/mean ensembling is sketched after this list)
    $ cd Badminton/src/ViT-pytorch_RoundHead/
    $ python3 submit.py --model_type ["ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16"] --checkpoint ["output/fold1_RoundHead_ViT-B_16_checkpoint.bin","output/fold2_RoundHead_ViT-B_16_checkpoint.bin","output/fold3_RoundHead_ViT-B_16_checkpoint.bin","output/fold4_RoundHead_ViT-B_16_checkpoint.bin","output/fold5_RoundHead_ViT-B_16_checkpoint.bin"] --img_size [480,480,480,480,480]
    → Badminton/src/ViT-pytorch_RoundHead/golfdb_G3_fold5_iter3000_val_test_hitter_vote_roundhead_vote.csv    # 0.0527
    → Badminton/src/ViT-pytorch_RoundHead/golfdb_G3_fold5_iter3000_val_test_hitter_mean_roundhead_mean.csv    # 0.0527
    # CodaLab
    → Badminton/src/ViT-pytorch_RoundHead/CodaLab_testdata_track1_hitter_vote_roundhead_vote.csv
    → Badminton/src/ViT-pytorch_RoundHead/CodaLab_testdata_track1_hitter_mean_roundhead_mean.csv
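
The paired *_vote.csv and *_mean.csv outputs come from two ways of combining the five folds. A small sketch of how such vote/mean ensembling typically works (an assumption about submit.py's internals, not a copy of them):

```python
# Sketch of the two five-fold ensembling rules: "vote" = majority class vote
# over per-fold argmaxes, "mean" = argmax of the averaged softmax outputs.
import numpy as np

def ensemble(fold_probs):
    """fold_probs: (n_folds, n_samples, n_classes) softmax outputs."""
    fold_probs = np.asarray(fold_probs)
    vote = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=fold_probs.shape[-1]).argmax(),
        0, fold_probs.argmax(-1))            # majority vote over fold argmaxes
    mean = fold_probs.mean(0).argmax(-1)     # argmax of averaged probabilities
    return vote, mean

probs = np.random.dirichlet(np.ones(2), size=(5, 4))  # 5 folds, 4 samples, 2 classes
print(ensemble(probs))
```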
Backhand
  1. execute backhand inference
    $ cd Badminton/src/ViT-pytorch_Backhand/
    $ python3 submit.py --model_type ["ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16"] --checkpoint ["output/fold1_Backhand_ViT-B_16_checkpoint.bin","output/fold2_Backhand_ViT-B_16_checkpoint.bin","output/fold3_Backhand_ViT-B_16_checkpoint.bin","output/fold4_Backhand_ViT-B_16_checkpoint.bin","output/fold5_Backhand_ViT-B_16_checkpoint.bin"] --img_size [480,480,480,480,480]
    # CodaLab
    → Badminton/src/ViT-pytorch_Backhand/CodaLab_testdata_track1_hitter_vote_roundhead_vote_backhand_vote.csv
    → Badminton/src/ViT-pytorch_Backhand/CodaLab_testdata_track1_hitter_mean_roundhead_mean_backhand_mean.csv
BallHeight
  1. execute ballheight inference
    $ cd Badminton/src/ViT-pytorch_BallHeight/
    $ python3 submit.py --model_type ["ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16"] --checkpoint ["output/fold1_BallHeight_ViT-B_16_checkpoint.bin","output/fold2_BallHeight_ViT-B_16_checkpoint.bin","output/fold3_BallHeight_ViT-B_16_checkpoint.bin","output/fold4_BallHeight_ViT-B_16_checkpoint.bin","output/fold5_BallHeight_ViT-B_16_checkpoint.bin"] --img_size [480,480,480,480,480]
    # CodaLab
    → Badminton/src/ViT-pytorch_BallHeight/CodaLab_testdata_track1_hitter_vote_roundhead_vote_backhand_vote_ballheight_vote.csv
    → Badminton/src/ViT-pytorch_BallHeight/CodaLab_testdata_track1_hitter_mean_roundhead_mean_backhand_mean_ballheight_mean.csv
LandingX
  1. get trajectory
    $ conda activate tracknetv2
    $ cd Badminton/src/TrackNetV2_pytorch/10-10Gray/
    $ mkdir output
    $ python3 predict10_custom.py
    $ mkdir denoise
    $ python3 denoise10_custom.py
  2. execute landingx inference (trajectory-based hit detection is sketched after this list)
    $ cd Badminton/src/TrackNetV2_pytorch/10-10Gray/
    $ mkdir event
    $ cd Badminton/src/TrackNetV2_pytorch/
    $ python3 event_detection_custom.py
    $ python3 HitFrame.py
    # CodaLab
    → Badminton/src/TrackNetV2_pytorch/CodaLab_tracknetv2_pytorch_10-10Gray_denoise_eventDetection_X.csv
    $ python3 LandingX.py
    # CodaLab
    → Badminton/src/TrackNetV2_pytorch/CodaLab_testdata_track1_hitter_vote_roundhead_vote_backhand_vote_ballheight_vote_LXY.csv
    → Badminton/src/TrackNetV2_pytorch/CodaLab_testdata_track1_hitter_mean_roundhead_mean_backhand_mean_ballheight_mean_LXY.csv
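
event_detection_custom.py derives hit candidates from the denoised TrackNetV2 trajectory. A toy sketch of the underlying idea, assuming a hit shows up as a reversal of the shuttlecock's vertical direction (the real script's heuristics may differ):

```python
# Illustrative trajectory-based hit detection (the repository's version is
# event_detection_custom.py). Assumption: a hit is a local extremum
# (direction reversal) in the shuttlecock's per-frame y-coordinate.
import numpy as np

def detect_hits(y, min_gap=5):
    """y: denoised per-frame y-coordinates of the shuttlecock (NaN = missed)."""
    y = np.asarray(y, dtype=float)
    hits, last = [], -min_gap
    for t in range(1, len(y) - 1):
        if np.isnan(y[t - 1:t + 2]).any():
            continue
        # sign change of the vertical velocity => direction reversal
        if (y[t] - y[t - 1]) * (y[t + 1] - y[t]) < 0 and t - last >= min_gap:
            hits.append(t)
            last = t
    return hits

print(detect_hits([5, 4, 3, 2, 3, 4, 5, 6, 5, 4], min_gap=3))  # -> [3, 7]
```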
LandingY, HitterLocationX, HitterLocationY, DefenderLocationX, DefenderLocationY
  1. extract hitframe for yolo from csv
    $ cd Badminton/src/postprocess/
    $ mkdir HitFrame_yolo
    $ python3 get_hitframe_yolo.py
    → Badminton/src/postprocess/HitFrame_yolo/    # 1280x720, 4007; CodaLab: 1280x720, 2408
  2. execute yolov5 inference
    $ conda activate yolov5
    $ cd Badminton/src/yolov5/
    $ python3 detect.py --weights runs/train/exp/weights/best.pt --source /home/yuhsi/Badminton/src/postprocess/HitFrame_yolo/ --conf-thres 0.3 --iou-thres 0.3 --save-txt --imgsz 2880 --agnostic-nms --augment
    → Badminton/src/yolov5/runs/detect/exp/    # 4007
    # CodaLab
    $ python3 detect.py --weights runs/train/exp/weights/best.pt --source /home/yuhsi/Badminton/src/postprocess/HitFrame_yolo/ --conf-thres 0.3 --iou-thres 0.3 --save-txt --imgsz 2880 --agnostic-nms --augment
    → Badminton/src/yolov5/runs/detect/exp2/    # 2408
    ## video demo
    $ python3 detect.py --weights runs/train/exp/weights/best.pt --source /home/yuhsi/Badminton/data/CodaLab/testdata_track1/00171/00171.mp4 --conf-thres 0.3 --iou-thres 0.3 --save-txt --imgsz 2880 --agnostic-nms --augment
    $ python3 demo.py
  3. execute landingy inference (converting detections to coordinates is sketched after this list)
    $ mkdir runs/detect/exp_draw
    $ mkdir runs/detect/exp_draw/case1
    $ python3 LandingY_Hitter_Defender_Location.py
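
LandingY_Hitter_Defender_Location.py turns the YOLOv5 detections into submission coordinates. The sketch below parses standard YOLO-format label files and takes the bottom-center of each box as a player's court position; equating the box bottom with the feet is an assumption, and the label path is hypothetical:

```python
# Sketch of converting YOLO-format detections into pixel coordinates.
# YOLO .txt labels store "cls cx cy w h" normalized to the image size.
W, H = 1280, 720

def bottom_center(line, img_w=W, img_h=H):
    cls, cx, cy, w, h = map(float, line.split()[:5])
    x = cx * img_w                 # horizontal center of the box
    y = (cy + h / 2) * img_h       # bottom edge ~ feet on the court
    return int(cls), round(x), round(y)

with open("runs/detect/exp/labels/00171_1.txt") as f:   # hypothetical label file
    for line in f:
        print(bottom_center(line))
```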
BallType
  1. execute balltype inference
    $ conda activate ViT_j
    $ cd Badminton/src/ViT-pytorch_BallType/
    $ python3 submit.py --model_type ["ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16"] --checkpoint ["output/fold1_BallType_ViT-B_16_checkpoint.bin","output/fold2_BallType_ViT-B_16_checkpoint.bin","output/fold3_BallType_ViT-B_16_checkpoint.bin","output/fold4_BallType_ViT-B_16_checkpoint.bin","output/fold5_BallType_ViT-B_16_checkpoint.bin"] --img_size [480,480,480,480,480]
    # CodaLab
    → Badminton/src/ViT-pytorch_BallType/CodaLab_testdata_track1_hitter_mean_roundhead_mean_backhand_mean_ballheight_mean_LX_LY_case1_HD_balltype_vote.csv
    → Badminton/src/ViT-pytorch_BallType/CodaLab_testdata_track1_hitter_vote_roundhead_vote_backhand_vote_ballheight_vote_LX_LY_case1_HD_balltype_mean.csv
Winner
  1. execute winner inference
    $ cd Badminton/src/ViT-pytorch_Winner/
    $ python3 submit.py --model_type ["ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16","ViT-B_16"] --checkpoint ["output/fold1_Winner_ViT-B_16_checkpoint.bin","output/fold2_Winner_ViT-B_16_checkpoint.bin","output/fold3_Winner_ViT-B_16_checkpoint.bin","output/fold4_Winner_ViT-B_16_checkpoint.bin","output/fold5_Winner_ViT-B_16_checkpoint.bin"] --img_size [480,480,480,480,480]
    # CodaLab
    → Badminton/src/ViT-pytorch_Winner/CodaLab_testdata_track1_hitter_mean_roundhead_mean_backhand_mean_ballheight_mean_LX_LY_case1_HD_balltype_vote_winner_mean_case1.csv
HitterLocationX, HitterLocationY, DefenderLocationX, DefenderLocationY (Updated)
  1. use yolov8x-pose-p6.pt model to execute pose estimation (a keypoint-based sketch follows this list)
    $ cd Badminton/src/ultralytics/
    $ mkdir pose_estimation
    $ python3 submit.py
    → Badminton/src/ViT-pytorch_Winner/CodaLab_testdata_track1_hitter_mean_roundhead_mean_backhand_mean_ballheight_mean_LX_LY_case1_HD_balltype_vote_winner_mean_case1_v8pose.csv
    ## video demo
    $ python3 demo.py
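
A sketch of reading ankle keypoints with the public ultralytics API, which is one way pose estimation can refine the player locations beyond raw bounding boxes; the image path and the ankle-midpoint rule are assumptions, not necessarily what submit.py does:

```python
# Sketch of the YOLOv8 pose step with the ultralytics API. COCO keypoint
# indices 15 and 16 are the left/right ankles; their midpoint is used here
# as the player's standing position (an assumption for illustration).
from ultralytics import YOLO

model = YOLO("yolov8x-pose-p6.pt")
results = model("pose_estimation/00171_1.jpg")  # hypothetical hit-frame image

for r in results:
    for person in r.keypoints.xy:        # (17, 2) COCO keypoints per person
        l_ankle, r_ankle = person[15], person[16]
        foot = (l_ankle + r_ankle) / 2   # midpoint of the two ankles
        print(f"player at x={float(foot[0]):.1f}, y={float(foot[1]):.1f}")
```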

3. Demonstration

3.1. Optical Flow Calculation
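
For reference, a minimal dense optical flow demo with OpenCV's Farneback method, in the spirit of this visualization; the clip path and parameters are illustrative, not the exact settings used for the figures:

```python
# Dense optical flow with cv2.calcOpticalFlowFarneback, rendered as an HSV
# image: flow direction -> hue, flow magnitude -> value.
import cv2
import numpy as np

cap = cv2.VideoCapture("00171.mp4")          # hypothetical rally clip
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
hsv = np.zeros_like(frame)
hsv[..., 1] = 255                            # full saturation

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv[..., 0] = ang * 180 / np.pi / 2      # direction -> hue
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imshow("flow", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
    if cv2.waitKey(1) & 0xFF == 27:          # Esc to quit
        break
    prev = gray
cap.release()
```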

3.2. SwingNet (MobileNetV2 + bidirectional LSTM)

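A minimal PyTorch sketch of the SwingNet design (per golfdb): MobileNetV2 encodes each frame and a bidirectional LSTM reasons over the sequence. The hidden size and the 9-class head follow the original golf setup and are illustrative here, not the trained configuration:

```python
# SwingNet-style sequence model: per-frame CNN features + bidirectional LSTM.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class SwingNetSketch(nn.Module):
    def __init__(self, n_events=9, hidden=256):
        super().__init__()
        self.cnn = mobilenet_v2(weights=None).features  # pretrained in the real model
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.rnn = nn.LSTM(1280, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_events)     # per-frame event logits

    def forward(self, x):                                    # x: (B, T, 3, H, W)
        b, t = x.shape[:2]
        f = self.pool(self.cnn(x.flatten(0, 1))).flatten(1)  # (B*T, 1280)
        out, _ = self.rnn(f.view(b, t, -1))                  # (B, T, 2*hidden)
        return self.head(out)                                # (B, T, n_events)

logits = SwingNetSketch()(torch.randn(1, 8, 3, 160, 160))
print(logits.shape)  # torch.Size([1, 8, 9])
```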

3.3. YOLOv5 & TrackNetV2 & YOLOv8-pose

4. Leaderboard Scores

4.1. AICUP2023

| Leaderboards | Filename | Upload time | Evaluation result | Ranking |
| ------------ | -------- | ----------- | ----------------- | ------- |
| Public | golfdb_G3_fold5_...csv | 2023-05-15 22:21:17 | 0.0727 | 11/30 |
| Private | golfdb_G3_fold5_...csv | 2023-05-15 22:21:17 | 0.0622 | 11/30 |

4.2. CodaLab2023

| Leaderboards | Filename | Upload time | Evaluation result | Ranking |
| ------------ | -------- | ----------- | ----------------- | ------- |
| Final phase | CodaLab_testdata_track1_...csv | 2023-06-17 16:03 | 0.3483 | 2/2 |

5. GitHub Acknowledgement

6. References

Citation

If you find this project helpful for your research or applications, we would appreciate it if you could give it a star and cite the paper.

@misc{chen2023new,
      title={A New Perspective for Shuttlecock Hitting Event Detection}, 
      author={Yu-Hsi Chen},
      year={2023},
      eprint={2306.10293},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{chen_2025_14677727,
  author       = {Chen, Yu-Hsi},
  title        = {A New Perspective for Shuttlecock Hitting Event Detection},
  month        = jan,
  year         = 2025,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.14677727},
  url          = {https://doi.org/10.5281/zenodo.14677727},
}