update demo & doc
Ben-Louis committed Jul 4, 2023
1 parent 1681afb commit d3eb87b
Showing 8 changed files with 37 additions and 751 deletions.
110 changes: 18 additions & 92 deletions demo/docs/en/webcam_api_demo.md
@@ -1,104 +1,30 @@
## Webcam Demo

We provide a webcam demo tool that integrates detection and 2D pose estimation for humans and animals. It can also apply fun effects like putting on sunglasses or enlarging the eyes, based on the pose estimation results.
The original Webcam API has been deprecated since v1.1.0. Users can now perform pose estimation on webcam input with either the Inferencer or the demo scripts.

<div align="center">
<img src="https://user-images.githubusercontent.com/15977946/124059525-ce20c580-da5d-11eb-8e4a-2d96cd31fe9f.gif" width="600px" alt><br>
</div>
### Webcam Demo with Inferencer

### Get started

Launch the demo from the mmpose root directory:

```shell
# Run webcam demo with GPU
python demo/webcam_api_demo.py

# Run webcam demo with CPU
python demo/webcam_api_demo.py --cpu
```

The command above will use the default config file `demo/webcam_cfg/human_pose.py`. You can also specify the config file in the command:
Users can estimate human poses from webcam input with the MMPose Inferencer by executing the following command:

```shell
python demo/webcam_api_demo.py --config demo/webcam_cfg/human_pose.py
python demo/inferencer_demo.py webcam --pose2d 'human'
```
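As a variation, the visualized frames can also be written to disk while the demo runs. This is a sketch only: it assumes the `--vis-out-dir` option described in the Inferencer documentation, so check that page for the exact argument name and behavior.

```shell
# Sketch: also save visualized webcam frames to a local directory
# (assumes the --vis-out-dir option documented for the Inferencer)
python demo/inferencer_demo.py webcam --pose2d 'human' --vis-out-dir vis_results/
```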

### Hotkeys

| Hotkey | Function |
| ------ | ------------------------------------- |
| v | Toggle the pose visualization on/off. |
| h | Show help information. |
| m | Show the monitoring information. |
| q | Exit. |

Note that the demo will automatically save the output video into a file `webcam_api_demo.mp4`.
For additional information about the arguments of Inferencer, please refer to the [Inferencer Documentation](/docs/en/user_guides/inference.md).
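The Inferencer can also be used from Python rather than the command line. The snippet below is a minimal sketch based on the usage shown in the Inferencer documentation; the access to the `'predictions'` key is illustrative, so refer to that documentation for the exact output structure.

```python
# Minimal sketch: webcam pose estimation via the MMPoseInferencer Python API
from mmpose.apis import MMPoseInferencer

# 'human' is a model alias that the Inferencer resolves to a 2D human pose model
inferencer = MMPoseInferencer(pose2d='human')

# Passing 'webcam' makes the Inferencer read frames from the default camera;
# the call returns a generator that yields one result per frame
result_generator = inferencer('webcam', show=True)
for result in result_generator:
    predictions = result['predictions']  # per-frame keypoints (see the Inferencer docs)
```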

### Usage and configurations
### Webcam Demo with Demo Script

Detailed configurations can be found in the config file.
All of the demo scripts, except for `demo/image_demo.py`, support webcam input.

- **Configure detection models**
Users can choose detection models from the [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). Just set the `model_config` and `model_checkpoint` in the detector node accordingly, and the model will be automatically downloaded and loaded.
Taking `demo/topdown_demo_with_mmdet.py` as an example, users can run this script with webcam input by specifying **`--input webcam`** in the command:

```python
# 'DetectorNode':
# This node performs object detection from the frame image using an
# MMDetection model.
dict(
type='DetectorNode',
name='detector',
model_config='demo/mmdetection_cfg/'
'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
model_checkpoint='https://download.openmmlab.com'
'/mmdetection/v2.0/ssd/'
'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
'scratch_600e_coco_20210629_110627-974d9307.pth',
input_buffer='_input_',
output_buffer='det_result'),
```

- **Configure pose estimation models**
In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) pose estimation models for humans and animals respectively. Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly.

```python
# 'TopdownPoseEstimatorNode':
# This node performs keypoint detection from the frame image using an
# MMPose top-down model. Detection results are needed.
dict(
type='TopdownPoseEstimatorNode',
name='human pose estimator',
model_config='configs/wholebody_2d_keypoint/'
'topdown_heatmap/coco-wholebody/'
'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
model_checkpoint='https://download.openmmlab.com/mmpose/'
'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
'-e2158108_20211205.pth',
labels=['person'],
input_buffer='det_result',
output_buffer='human_pose'),
dict(
type='TopdownPoseEstimatorNode',
name='animal pose estimator',
model_config='configs/animal_2d_keypoint/topdown_heatmap/'
'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
input_buffer='human_pose',
output_buffer='animal_pose'),
```

- **Run the demo on a local video file**
You can use local video files as the demo input by setting `camera_id` to the file path.

- **What if the computer doesn't have a camera?**
A smartphone can serve as a webcam via apps like [Camo](https://reincubate.com/camo/) or [DroidCam](https://www.dev47apps.com/).

- **Test the camera and display**
Run the following command for a quick test of video capture and display (a standalone OpenCV check is also sketched after this list).

```shell
python demo/webcam_api_demo.py --config demo/webcam_cfg/test_camera.py
```
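For a bare-bones check that does not involve MMPose at all, a plain OpenCV loop is enough. The sketch below is illustrative only: it assumes `opencv-python` is installed, and the device index `0` may need to be adjusted on your machine.

```python
# Standalone camera/display check with OpenCV (assumes opencv-python is installed)
import cv2

cap = cv2.VideoCapture(0)  # 0 is usually the default webcam; adjust if needed
if not cap.isOpened():
    raise RuntimeError('Cannot open the camera')

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('camera test', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```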
```shell
# inference with webcam
python demo/topdown_demo_with_mmdet.py \
projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
--input webcam \
--show
```
117 changes: 19 additions & 98 deletions demo/docs/zh_cn/webcam_api_demo.md
@@ -1,109 +1,30 @@
## Webcam Demo
## Webcam Inference

We provide a webcam demo tool that integrates detection and 2D pose estimation for both humans and animals. Based on the pose estimation results, it can also apply fun effects such as enlarged eyes or sunglasses.
Since v1.1.0, the original Webcam API has been deprecated. Users can now perform pose estimation on webcam input with either the Inferencer or the demo scripts.

<div align="center">
<img src="https://user-images.githubusercontent.com/15977946/124059525-ce20c580-da5d-11eb-8e4a-2d96cd31fe9f.gif" width="600px" alt><br>
</div>
### Webcam Inference with Inferencer

### Get started

The usage is straightforward; launch the demo from the MMPose root directory:

```shell
# Run with GPU
python demo/webcam_api_demo.py

# Run with CPU only
python demo/webcam_api_demo.py --cpu
```

This command uses `demo/webcam_cfg/human_pose.py` as the default config file; users can also specify a different config:
Users can estimate human poses from webcam input with the MMPose Inferencer by running the following command:

```shell
python demo/webcam_api_demo.py --config demo/webcam_cfg/human_pose.py
python demo/inferencer_demo.py webcam --pose2d 'human'
```

### Hotkeys

| Hotkey | Function |
| ------ | ------------------------------------- |
| v | Toggle the pose visualization on/off. |
| h | Show help information. |
| m | Show the monitoring information. |
| q | Exit. |

Note: the script automatically saves the live results to a video file named `webcam_api_demo.mp4`.

### Usage and configuration

Only the basics are covered here; more details can be found in the corresponding config file.

- **Configure the detection model**
For details about the Inferencer arguments, please refer to the [Inferencer Documentation](/docs/en/user_guides/inference.md).

Users can directly use detection models from the [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). Make sure that `model_config` and `model_checkpoint` in the DetectorNode of the config file match each other; the model will then be downloaded and loaded automatically, for example:
### Webcam Inference with Demo Scripts

```python
# 'DetectorNode':
# This node performs object detection from the frame image using an
# MMDetection model.
dict(
type='DetectorNode',
name='detector',
model_config='demo/mmdetection_cfg/'
'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
model_checkpoint='https://download.openmmlab.com'
'/mmdetection/v2.0/ssd/'
'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
'scratch_600e_coco_20210629_110627-974d9307.pth',
input_buffer='_input_',
output_buffer='det_result'),
```
All demo scripts except `demo/image_demo.py` support webcam input.

- **Configure the pose estimation models**
Taking `demo/topdown_demo_with_mmdet.py` as an example, users can run this script on webcam input by specifying **`--input webcam`** in the command:

Here we demonstrate with two [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) pose estimation models, one for humans and one for animals. Users are free to use models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo/body_2d_keypoint.html). Note that after switching models, the corresponding `cls_names` must be added or updated in the corresponding pose estimator node, for example:

```python
# 'TopdownPoseEstimatorNode':
# This node performs keypoint detection from the frame image using an
# MMPose top-down model. Detection results are needed.
dict(
type='TopdownPoseEstimatorNode',
name='human pose estimator',
model_config='configs/wholebody_2d_keypoint/'
'topdown_heatmap/coco-wholebody/'
'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
model_checkpoint='https://download.openmmlab.com/mmpose/'
'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
'-e2158108_20211205.pth',
labels=['person'],
input_buffer='det_result',
output_buffer='human_pose'),
dict(
type='TopdownPoseEstimatorNode',
name='animal pose estimator',
model_config='configs/animal_2d_keypoint/topdown_heatmap/'
'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
input_buffer='human_pose',
output_buffer='animal_pose'),
```

- **Run the demo on a local video file**

To use a local video file directly, simply set `camera_id` to the file path.

- **What if the computer doesn't have a camera?**

A smartphone can serve as a webcam via apps such as [Camo](https://reincubate.com/camo/) or [DroidCam](https://www.dev47apps.com/).

- **Test the camera and display**

Run the following command for a quick test of video capture and display:

```shell
python demo/webcam_api_demo.py --config demo/webcam_cfg/test_camera.py
```
```shell
# inference with webcam
python demo/topdown_demo_with_mmdet.py \
projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
--input webcam \
--show
```
76 changes: 0 additions & 76 deletions demo/webcam_api_demo.py

This file was deleted.
