Run inference with pose estimation models from MMPose.
We strongly recommend using a virtual environment. If you're not sure where to start, we offer a tutorial here.
pip install ikomia
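A minimal setup sketch, assuming a Unix-like shell (on Windows, activate with `venv\Scripts\activate` instead):

```shell
# Create and activate an isolated environment (the name "venv" is just a convention)
python3 -m venv venv
source venv/bin/activate

# Install the Ikomia API inside it
pip install ikomia
```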
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
wf = Workflow()
algo = wf.add_task(name="infer_mmlab_pose_estimation", auto_connect=True)
wf.run_on(url="https://cdn.nba.com/teams/legacy/www.nba.com/bulls/sites/bulls/files/jordan_vs_indiana.jpg")
display(algo.get_image_with_graphics())
Ikomia Studio offers a friendly UI with the same features as the API.
- If you haven't started using Ikomia Studio yet, download and install it from this page.
- For additional guidance on getting started with Ikomia Studio, check out this blog post.
- config_file (str): Path to the .py config file.
- model_weight_file (str): Path or URL to the model weights file (.pth). Optional if config_file comes from the get_model_zoo() method (see below for more information).
- conf_thres (float), default '0.5': Non-Maximum Suppression threshold. Detections are retained when their Object Keypoint Similarity (OKS) overlap is below 'conf_thres'. Range [0, 1].
- conf_kp_thres (float), default '0.3': Keypoint visibility threshold. Object Keypoint Similarity is computed from keypoints whose visibility is higher than 'conf_kp_thres'. Range [0, 1].
- detector (str): Object detector to use: 'Person', 'Hand' or 'Face'.
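To illustrate how a visibility cutoff like conf_kp_thres behaves, here is a small, library-free sketch; the keypoint names and scores are hypothetical, not the algorithm's internal format:

```python
# Hypothetical (keypoint_name, visibility_score) pairs for one detected person
keypoints = [("nose", 0.95), ("left_wrist", 0.12), ("right_knee", 0.61)]

conf_kp_thres = 0.3  # same default as the algorithm parameter

# Keep only keypoints whose visibility exceeds the threshold
visible = [(name, score) for name, score in keypoints if score > conf_kp_thres]
print(visible)  # [('nose', 0.95), ('right_knee', 0.61)]
```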
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
# Init your workflow
wf = Workflow()
# Add algorithm
algo = wf.add_task(name="infer_mmlab_pose_estimation", auto_connect=True)
algo.set_parameters({
    "config_file": "configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_vipnas-mbv3_8xb64-210e_coco-256x192.py",
    "conf_thres": "0.5",
    "conf_kp_thres": "0.3",
    "detector": "Person",
})
# Run on your image
wf.run_on(url="https://cdn.nba.com/teams/legacy/www.nba.com/bulls/sites/bulls/files/jordan_vs_indiana.jpg")
display(algo.get_image_with_graphics())
You can get the full list of available config_file values by running this code snippet:
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
# Init your workflow
wf = Workflow()
# Add algorithm
algo = wf.add_task(name="infer_mmlab_pose_estimation", auto_connect=True)
# Get pretrained models
model_zoo = algo.get_model_zoo()
# Print possibilities
for parameters in model_zoo:
    print(parameters)
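Plain string filtering is handy for narrowing the printed list down. The entries below are hypothetical stand-ins for what get_model_zoo() returns, assuming each entry is a dict with a "config_file" key:

```python
# Hypothetical model-zoo entries (real ones come from algo.get_model_zoo())
model_zoo = [
    {"config_file": "configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py"},
    {"config_file": "configs/hand_2d_keypoint/topdown_heatmap/onehand10k/example.py"},  # hypothetical path
]

# Keep only the 2D body keypoint models
body_models = [m for m in model_zoo if "body_2d_keypoint" in m["config_file"]]
print(len(body_models))  # 1
```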
Every algorithm produces specific outputs, yet they can all be explored in the same way using the Ikomia API. For a more in-depth understanding of managing algorithm outputs, please refer to the documentation.
from ikomia.dataprocess.workflow import Workflow
# Init your workflow
wf = Workflow()
# Add algorithm
algo = wf.add_task(name="infer_mmlab_pose_estimation", auto_connect=True)
# Run on your image
wf.run_on(url="https://cdn.nba.com/teams/legacy/www.nba.com/bulls/sites/bulls/files/jordan_vs_indiana.jpg")
# Iterate over outputs
for output in algo.get_outputs():
    # Print information
    print(output)
    # Export it to JSON
    output.to_json()
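to_json() returns the output serialized as a JSON string, which you can parse with the standard library. The sample string below is a simplified, hypothetical stand-in for such an output; the real field names may differ from what the algorithm actually emits:

```python
import json

# Hypothetical simplified JSON, standing in for output.to_json()
raw = '{"detections": [{"label": "person", "keypoints": [[152.0, 88.5], [149.2, 83.1]]}]}'

data = json.loads(raw)
for det in data["detections"]:
    # Print the label and how many keypoints were found
    print(det["label"], len(det["keypoints"]))  # person 2
```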