[CVPR 2026] This repo is the official PyTorch implementation of the paper "Shoe Style-Invariant and Ground-Aware Learning for Dense Foot Contact Estimation".

FECO: Shoe Style-Invariant and Ground-Aware Learning for Dense Foot Contact Estimation

Daniel Sungho Jung, Kyoung Mu Lee

Seoul National University

Python 3.8+ | PyTorch | License: CC BY-NC 4.0 | Project Page | Paper PDF

ArXiv 2025

FECO is a framework for dense foot contact estimation that addresses the challenges posed by diverse shoe appearances and limited ground appearance variability in foot images. Leveraging 10 datasets, including our proposed in-the-wild dataset COFE, we build a powerful model that learns dense foot contact across diverse scenarios.

Code

Installation

  • We recommend using an Anaconda virtual environment. Install PyTorch >= 1.13.1 and Python >= 3.8.0. Our latest FECO model is tested on Python 3.8.20, PyTorch 1.13.1, and CUDA 11.6.
  • Setup the environment.
# Initialize conda environment
conda create -n feco python=3.8 -y
conda activate feco

# Install PyTorch
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

# Install all remaining packages
pip install -r requirements.txt
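After installation, a quick sanity check confirms that the intended PyTorch build is visible (a minimal sketch; the version to expect is the tested configuration above):

```python
# Sanity-check the environment after installation.
try:
    import torch
    print("PyTorch:", torch.__version__)          # tested with 1.13.1+cu116
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
```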

Data

You need to follow our directory structure for the data.

Then, download the official checkpoints from HuggingFace and place them in release_checkpoint by running:

bash scripts/download_feco_checkpoints.sh
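To confirm the download succeeded, you can check that the checkpoint files referenced by the demo commands below are in place (a small sketch; the filenames are taken from the demo examples and may differ if the release changes):

```python
from pathlib import Path

# Checkpoint files expected under release_checkpoint/ after the download script.
ckpt_dir = Path("release_checkpoint")
expected = [
    "feco_final_vit_h_checkpoint.ckpt",
    "feco_final_vit_b_checkpoint.ckpt",
]
missing = [name for name in expected if not (ckpt_dir / name).exists()]
if missing:
    print("Missing checkpoints:", missing)
else:
    print("All checkpoints present.")
```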

Quick demo

To run FECO on demo images using the YOLO human detector, please run:

python demo.py --backbone {BACKBONE_TYPE} --checkpoint {CKPT_PATH} --input_path {INPUT_PATH}

For example,

# ViT-H (Default, HaMeR initialized) backbone
python demo.py --backbone vit-h-14 --checkpoint release_checkpoint/feco_final_vit_h_checkpoint.ckpt --input_path asset/example_images

# ViT-B (ImageNet initialized) backbone
python demo.py --backbone vit-b-16 --checkpoint release_checkpoint/feco_final_vit_b_checkpoint.ckpt --input_path asset/example_images

Technical Q&A

  • ImportError: cannot import name 'bool' from 'numpy': Comment out the line from numpy import bool, int, float, complex, object, unicode, str, nan, inf in the file that raises the error.
  • np.int was a deprecated alias for the builtin int and has been removed in recent NumPy versions. Replacing np.int with the builtin int does not change any behavior and is safe. If you need a specific precision, use np.int64 or np.int32 instead. See the NumPy release notes for details.
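As a concrete illustration of the fix (using only the builtin int and NumPy's sized aliases):

```python
import numpy as np

# The builtin int is a drop-in replacement for the removed np.int alias.
x = int(3.7)
print(x)  # 3 (truncates toward zero, exactly as np.int did)

# Use a sized alias when the precision matters, e.g. for array dtypes.
arr = np.array([1.5, 2.5], dtype=np.int64)
print(arr)  # [1 2]
```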

Acknowledgement

We thank:

  • SagNets for inspiration on Shoe Style-Content Randomization.
  • Pro-RandConv for inspiration on Low-Level Style Randomization.
  • DECO for human-scene contact estimation.
  • HACO for dense contact estimation.

Reference

@article{jung2025feco,
    title={Shoe Style-Invariant and Ground-Aware Learning for Dense Foot Contact Estimation},
    author={Jung, Daniel Sungho and Lee, Kyoung Mu},
    journal={arXiv preprint arXiv:2511.22184},
    year={2025}
}
