# 2D Face Keypoint Datasets

It is recommended to symlink the dataset root to `$MMPOSE/data`. If your folder structure is different, you may need to change the corresponding paths in config files.
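For example, if your datasets live elsewhere on disk, a symlink keeps the expected layout (the source path below is illustrative):

```shell
# Link an existing dataset directory into the repo; adjust the
# source path to wherever your datasets actually live.
ln -s /path/to/your/datasets $MMPOSE/data
```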

Datasets supported by MMPose:

## 300W Dataset

300W (IMAVIS'2016)
```bibtex
@article{sagonas2016300,
  title={300 faces in-the-wild challenge: Database and results},
  author={Sagonas, Christos and Antonakos, Epameinondas and Tzimiropoulos, Georgios and Zafeiriou, Stefanos and Pantic, Maja},
  journal={Image and vision computing},
  volume={47},
  pages={3--18},
  year={2016},
  publisher={Elsevier}
}
```

For 300W data, please download the images from the 300W Dataset and the annotation files from 300w_annotations. Extract them under `{MMPose}/data`, and make them look like this:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── 300w
        |── annotations
        |   |── face_landmarks_300w_train.json
        |   |── face_landmarks_300w_valid.json
        |   |── face_landmarks_300w_valid_common.json
        |   |── face_landmarks_300w_valid_challenge.json
        |   |── face_landmarks_300w_test.json
        `── images
            |── afw
            |   |── 1051618982_1.jpg
            |   |── 111076519_1.jpg
            |    ...
            |── helen
            |   |── trainset
            |   |   |── 100032540_1.jpg
            |   |   |── 100040721_1.jpg
            |   |    ...
            |   |── testset
            |   |   |── 296814969_3.jpg
            |   |   |── 2968560214_1.jpg
            |   |    ...
            |── ibug
            |   |── image_003_1.jpg
            |   |── image_004_1.jpg
            |    ...
            |── lfpw
            |   |── trainset
            |   |   |── image_0001.png
            |   |   |── image_0002.png
            |   |    ...
            |   |── testset
            |   |   |── image_0001.png
            |   |   |── image_0002.png
            |   |    ...
            `── Test
                |── 01_Indoor
                |   |── indoor_001.png
                |   |── indoor_002.png
                |    ...
                `── 02_Outdoor
                    |── outdoor_001.png
                    |── outdoor_002.png
                     ...
```
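Once the data is in place, the layout above maps directly onto the dataset settings in a config file. Below is a minimal sketch; the dataset type name and field names follow MMPose 1.x conventions, so verify against the shipped configs under `configs/face_2d_keypoint/` for the authoritative version:

```python
# Minimal sketch of a 300W training dataset entry in an MMPose config.
# Paths correspond to the directory layout shown above.
train_dataset = dict(
    type='Face300WDataset',
    data_root='data/300w',
    data_mode='topdown',
    ann_file='annotations/face_landmarks_300w_train.json',
    data_prefix=dict(img='images/'),
    pipeline=[],  # fill in your training pipeline
)
```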

## 300VW Dataset

300VW (ICCVW'2015)
```bibtex
@inproceedings{shen2015first,
  title={The first facial landmark tracking in-the-wild challenge: Benchmark and results},
  author={Shen, Jie and Zafeiriou, Stefanos and Chrysos, Grigoris G and Kossaifi, Jean and Tzimiropoulos, Georgios and Pantic, Maja},
  booktitle={Proceedings of the IEEE international conference on computer vision workshops},
  pages={50--58},
  year={2015}
}
```

For 300VW data, please register and download the images from the 300VW Dataset. Unzip the data and use `tools/dataset_converters/300vw2coco.py` to convert it to COCO style.

Put the 300VW data under `{MMPose}/data`, and make it look like this:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── 300vw
        |── annotations
        |   |── train.json
        |   |── test_1.json
        |   |── test_2.json
        |   `── test_3.json
        `── images
            |── 001
            |   `── imgs
            |       |── 000001.png
            |       |── 000002.png
            |       ...
            |── 002
            |   `── imgs
            |       |── 000001.png
            |       |── 000002.png
            |       ...
            |   ...
```
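After conversion, a quick sanity check can confirm that the generated annotation files parse and contain the expected number of frames. This sketch assumes the converter emits COCO-style JSON with `images` and `annotations` keys:

```python
import json

# Quick check on the converted 300VW annotation files.
for name in ('train', 'test_1', 'test_2', 'test_3'):
    path = f'data/300vw/annotations/{name}.json'
    with open(path) as f:
        data = json.load(f)
    print(f"{path}: {len(data['images'])} images, "
          f"{len(data['annotations'])} annotations")
```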

## WFLW Dataset

WFLW (CVPR'2018)
```bibtex
@inproceedings{wu2018look,
  title={Look at boundary: A boundary-aware face alignment algorithm},
  author={Wu, Wayne and Qian, Chen and Yang, Shuo and Wang, Quan and Cai, Yici and Zhou, Qiang},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={2129--2138},
  year={2018}
}
```

For WFLW data, please download the images from the WFLW Dataset and the annotation files from wflw_annotations. Extract them under `{MMPose}/data`, and make them look like this:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── wflw
        |── annotations
        |   |── face_landmarks_wflw_train.json
        |   |── face_landmarks_wflw_test.json
        |   |── face_landmarks_wflw_test_blur.json
        |   |── face_landmarks_wflw_test_occlusion.json
        |   |── face_landmarks_wflw_test_expression.json
        |   |── face_landmarks_wflw_test_largepose.json
        |   |── face_landmarks_wflw_test_illumination.json
        |   |── face_landmarks_wflw_test_makeup.json
        |
        `── images
            |── 0--Parade
            |   |── 0_Parade_marchingband_1_1015.jpg
            |   |── 0_Parade_marchingband_1_1031.jpg
            |    ...
            |── 1--Handshaking
            |   |── 1_Handshaking_Handshaking_1_105.jpg
            |   |── 1_Handshaking_Handshaking_1_107.jpg
            |    ...
            ...
```

We also provide a script that converts the raw WFLW annotations to COCO style. Its output is not exactly identical to the annotations downloaded above, but the differences do not affect training or testing.
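Since WFLW evaluation is split across several condition-specific test sets, it is easy to miss a file. A small check like the following (assuming COCO-style `images` keys) verifies that every annotation file is present and parses:

```python
import glob
import json

# List every WFLW annotation file and its image count.
for path in sorted(glob.glob('data/wflw/annotations/*.json')):
    with open(path) as f:
        n_images = len(json.load(f)['images'])
    print(f'{path}: {n_images} images')
```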

## AFLW Dataset

AFLW (ICCVW'2011)
```bibtex
@inproceedings{koestinger2011annotated,
  title={Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization},
  author={Koestinger, Martin and Wohlhart, Paul and Roth, Peter M and Bischof, Horst},
  booktitle={2011 IEEE international conference on computer vision workshops (ICCV workshops)},
  pages={2144--2151},
  year={2011},
  organization={IEEE}
}
```

For AFLW data, please download the images from the AFLW Dataset and the annotation files from aflw_annotations. Extract them under `{MMPose}/data`, and make them look like this:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── aflw
        |── annotations
        |   |── face_landmarks_aflw_train.json
        |   |── face_landmarks_aflw_test_frontal.json
        |   |── face_landmarks_aflw_test.json
        `── images
            |── flickr
                |── 0
                |   |── image00002.jpg
                |   |── image00013.jpg
                |    ...
                |── 2
                |   |── image00004.jpg
                |   |── image00006.jpg
                |    ...
                `── 3
                    |── image00032.jpg
                    |── image00035.jpg
                     ...
```
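As with 300W, this layout maps onto a dataset entry in a config. The sketch below follows MMPose 1.x conventions (verify against the shipped AFLW configs); switching between the full test set and the frontal-only subset is just a matter of swapping `ann_file`:

```python
# Minimal sketch of an AFLW test dataset entry in an MMPose config.
test_dataset = dict(
    type='AFLWDataset',
    data_root='data/aflw',
    data_mode='topdown',
    # Use face_landmarks_aflw_test_frontal.json for the frontal subset.
    ann_file='annotations/face_landmarks_aflw_test.json',
    data_prefix=dict(img='images/'),
    test_mode=True,
    pipeline=[],  # fill in your test pipeline
)
```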

## COFW Dataset

COFW (ICCV'2013)
```bibtex
@inproceedings{burgos2013robust,
  title={Robust face landmark estimation under occlusion},
  author={Burgos-Artizzu, Xavier P and Perona, Pietro and Doll{\'a}r, Piotr},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={1513--1520},
  year={2013}
}
```

For COFW data, please download from the COFW Dataset (Color Images). Move `COFW_train_color.mat` and `COFW_test_color.mat` to `data/cofw/` and make them look like:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── cofw
        |── COFW_train_color.mat
        |── COFW_test_color.mat
```

Run the following script under `{MMPose}/data`:

```shell
python tools/dataset_converters/parse_cofw_dataset.py
```

And you will get:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── cofw
        |── COFW_train_color.mat
        |── COFW_test_color.mat
        |── annotations
        |   |── cofw_train.json
        |   |── cofw_test.json
        |── images
            |── 000001.jpg
            |── 000002.jpg
```
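To confirm the conversion succeeded, you can compare the number of annotated images against the files written to `images/`. This assumes the converter emits COCO-style JSON, as its output layout suggests:

```python
import json
import os

# Compare annotation counts with the extracted image files.
for split in ('train', 'test'):
    with open(f'data/cofw/annotations/cofw_{split}.json') as f:
        n_ann = len(json.load(f)['images'])
    print(f'{split}: {n_ann} annotated images')
print(len(os.listdir('data/cofw/images')), 'image files on disk')
```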

## COCO-WholeBody (Face)

COCO-WholeBody-Face (ECCV'2020)
```bibtex
@inproceedings{jin2020whole,
  title={Whole-Body Human Pose Estimation in the Wild},
  author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}
```

For the COCO-WholeBody dataset, images can be downloaded from the COCO download page; the 2017 Train/Val images are needed for COCO keypoint training and validation. Download the COCO-WholeBody annotations for Train / Validation (Google Drive). Download the person detection results of COCO val2017 from OneDrive or GoogleDrive. Download and extract them under `$MMPOSE/data`, and make them look like this:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── coco
        │-- annotations
        │   │-- coco_wholebody_train_v1.0.json
        │   |-- coco_wholebody_val_v1.0.json
        |-- person_detection_results
        |   |-- COCO_val2017_detections_AP_H_56_person.json
        │-- train2017
        │   │-- 000000000009.jpg
        │   │-- 000000000025.jpg
        │   │-- 000000000030.jpg
        │   │-- ...
        `-- val2017
            │-- 000000000139.jpg
            │-- 000000000285.jpg
            │-- 000000000632.jpg
            │-- ...
```

Please also install the latest version of Extended COCO API to support COCO-WholeBody evaluation:

```shell
pip install xtcocotools
```
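xtcocotools mirrors the pycocotools API, so a quick way to verify both the installation and the annotations is to load the val file with it. The face-specific field names below follow the COCO-WholeBody annotation format as an assumption; inspect an annotation dict to confirm:

```python
from xtcocotools.coco import COCO

# Load the COCO-WholeBody val annotations with the extended COCO API.
coco = COCO('data/coco/annotations/coco_wholebody_val_v1.0.json')
print(len(coco.getImgIds()), 'images,', len(coco.getAnnIds()), 'annotations')

# Each instance should carry extra face fields, e.g. 'face_valid' and
# 'face_kpts' (x, y, score triplets for the face landmarks).
ann = coco.loadAnns(coco.getAnnIds()[:1])[0]
print(ann.get('face_valid'), len(ann.get('face_kpts', [])))
```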

## LaPa

LaPa (AAAI'2020)
```bibtex
@inproceedings{liu2020new,
  title={A New Dataset and Boundary-Attention Semantic Segmentation for Face Parsing.},
  author={Liu, Yinglu and Shi, Hailin and Shen, Hao and Si, Yue and Wang, Xiaobo and Mei, Tao},
  booktitle={AAAI},
  pages={11637--11644},
  year={2020}
}
```

For the LaPa dataset, images can be downloaded from its GitHub page.

Download and extract them under `$MMPOSE/data`, and use `tools/dataset_converters/lapa2coco.py` to make them look like this:

```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── LaPa
        │-- annotations
        │   │-- lapa_train.json
        │   |-- lapa_val.json
        │   |-- lapa_test.json
        |   |-- lapa_trainval.json
        │-- train
        │   │-- images
        │   │-- labels
        │   │-- landmarks
        │-- val
        │   │-- images
        │   │-- labels
        │   │-- landmarks
        `-- test
            │-- images
            │-- labels
            `-- landmarks
```
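As a quick consistency check after running the converter, the `trainval` file should contain the train and val images combined. This sketch assumes COCO-style `images` keys and that `lapa_trainval.json` is the union of the two splits:

```python
import json

def num_images(path):
    """Count entries in the 'images' list of a COCO-style JSON file."""
    with open(path) as f:
        return len(json.load(f)['images'])

root = 'data/LaPa/annotations'
n_train = num_images(f'{root}/lapa_train.json')
n_val = num_images(f'{root}/lapa_val.json')
n_trainval = num_images(f'{root}/lapa_trainval.json')
print(n_train, n_val, n_trainval, n_train + n_val == n_trainval)
```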