diff --git a/demo/docs/en/webcam_api_demo.md b/demo/docs/en/webcam_api_demo.md
index 4bbc75c261..9869392171 100644
--- a/demo/docs/en/webcam_api_demo.md
+++ b/demo/docs/en/webcam_api_demo.md
@@ -1,104 +1,30 @@
 ## Webcam Demo

-We provide a webcam demo tool which integrartes detection and 2D pose estimation for humans and animals. It can also apply fun effects like putting on sunglasses or enlarging the eyes, based on the pose estimation results.
+The original Webcam API has been deprecated since v1.1.0. Users can now run pose estimation on webcam input with either the Inferencer or a demo script.
-
-
-
+### Webcam Demo with Inferencer

-### Get started
-
-Launch the demo from the mmpose root directory:
-
-```shell
-# Run webcam demo with GPU
-python demo/webcam_api_demo.py
-
-# Run webcam demo with CPU
-python demo/webcam_api_demo.py --cpu
-```
-
-The command above will use the default config file `demo/webcam_cfg/human_pose.py`. You can also specify the config file in the command:
+Users can use the MMPose Inferencer to estimate human poses from webcam input by running the following command:

 ```shell
-python demo/webcam_api_demo.py --config demo/webcam_cfg/human_pose.py
+python demo/inferencer_demo.py webcam --pose2d 'human'
 ```

-### Hotkeys
-
-| Hotkey | Function                              |
-| ------ | ------------------------------------- |
-| v      | Toggle the pose visualization on/off. |
-| h      | Show help information.                |
-| m      | Show the monitoring information.      |
-| q      | Exit.                                 |
-
-Note that the demo will automatically save the output video into a file `webcam_api_demo.mp4`.
+For more details about the arguments of the Inferencer, please refer to the [Inferencer Documentation](/docs/en/user_guides/inference.md).

-### Usage and configuarations
+### Webcam Demo with Demo Script

-Detailed configurations can be found in the config file.
+All of the demo scripts, except for `demo/image_demo.py`, support webcam input.

-- **Configure detection models**
-
-  Users can choose detection models from the [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). Just set the `model_config` and `model_checkpoint` in the detector node accordingly, and the model will be automatically downloaded and loaded.
+Take `demo/topdown_demo_with_mmdet.py` as an example: users can run this script on webcam input by specifying **`--input webcam`** in the command:

-  ```python
-  # 'DetectorNode':
-  # This node performs object detection from the frame image using an
-  # MMDetection model.
-  dict(
-      type='DetectorNode',
-      name='detector',
-      model_config='demo/mmdetection_cfg/'
-      'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
-      model_checkpoint='https://download.openmmlab.com'
-      '/mmdetection/v2.0/ssd/'
-      'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
-      'scratch_600e_coco_20210629_110627-974d9307.pth',
-      input_buffer='_input_',
-      output_buffer='det_result'),
-  ```
-
-- **Configure pose estimation models**
-
-  In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) pose estimation models for humans and animals respectively. Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly.
-
-  ```python
-  # 'TopdownPoseEstimatorNode':
-  # This node performs keypoint detection from the frame image using an
-  # MMPose top-down model. Detection results is needed.
-  dict(
-      type='TopdownPoseEstimatorNode',
-      name='human pose estimator',
-      model_config='configs/wholebody_2d_keypoint/'
-      'topdown_heatmap/coco-wholebody/'
-      'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
-      model_checkpoint='https://download.openmmlab.com/mmpose/'
-      'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
-      '-e2158108_20211205.pth',
-      labels=['person'],
-      input_buffer='det_result',
-      output_buffer='human_pose'),
-  dict(
-      type='TopdownPoseEstimatorNode',
-      name='animal pose estimator',
-      model_config='configs/animal_2d_keypoint/topdown_heatmap/'
-      'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
-      model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
-      'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
-      labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
-      input_buffer='human_pose',
-      output_buffer='animal_pose'),
-  ```
-
-- **Run the demo on a local video file**
-
-  You can use local video files as the demo input by set `camera_id` to the file path.
-
-- **The computer doesn't have a camera?**
-
-  A smart phone can serve as a webcam via apps like [Camo](https://reincubate.com/camo/) or [DroidCam](https://www.dev47apps.com/).
-
-- **Test the camera and display**
-
-  Run follow command for a quick test of video capturing and displaying.
-
-  ```shell
-  python demo/webcam_api_demo.py --config demo/webcam_cfg/test_camera.py
-  ```

+```shell
+# inference with webcam
+python demo/topdown_demo_with_mmdet.py \
+    projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
+    https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
+    projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
+    https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
+    --input webcam \
+    --show
+```
diff --git a/demo/docs/zh_cn/webcam_api_demo.md b/demo/docs/zh_cn/webcam_api_demo.md
index acc1aa9b0a..66099c9ca6 100644
--- a/demo/docs/zh_cn/webcam_api_demo.md
+++ b/demo/docs/zh_cn/webcam_api_demo.md
@@ -1,109 +1,30 @@
-## Webcam Demo
+## 摄像头推理

-我们提供了同时支持人体和动物的识别和 2D 姿态预估 webcam demo 工具,用户也可以用这个脚本在姿态预测结果上加入譬如大眼和戴墨镜等好玩的特效。
+从版本 v1.1.0 开始,原来的摄像头 API 已被弃用。用户现在可以选择使用推理器(Inferencer)或 Demo 脚本从摄像头读取的视频中进行姿势估计。
-
-
-
+### 使用推理器进行摄像头推理

-### Get started
-
-脚本使用方式很简单,直接在 MMPose 根路径使用:
-
-```shell
-# 使用 GPU
-python demo/webcam_api_demo.py
-
-# 仅使用 CPU
-python demo/webcam_api_demo.py --cpu
-```
-
-该命令会使用默认的 `demo/webcam_cfg/human_pose.py` 作为配置文件,用户可以自行指定别的配置:
+用户可以通过执行以下命令来利用 MMPose Inferencer 对摄像头输入进行人体姿势估计:

 ```shell
-python demo/webcam_api_demo.py --config demo/webcam_cfg/human_pose.py
+python demo/inferencer_demo.py webcam --pose2d 'human'
 ```

-### Hotkeys
-
-| Hotkey | Function                              |
-| ------ | ------------------------------------- |
-| v      | Toggle the pose visualization on/off. |
-| h      | Show help information.                |
-| m      | Show the monitoring information.      |
-| q      | Exit.                                 |
-
-注意:脚本会自动将实时结果保存成一个名为 `webcam_api_demo.mp4` 的视频文件。
-
-### 配置使用
-
-这里我们只进行一些基本的说明,更多的信息可以直接参考对应的配置文件。
-
-- **设置检测模型**
+有关推理器的参数详细信息,请参阅 [推理器文档](/docs/en/user_guides/inference.md)。

-  用户可以直接使用 [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html) 里的识别模型,需要注意的是确保配置文件中的 DetectorNode 里的 `model_config` 和 `model_checkpoint` 需要对应起来,这样模型就会被自动下载和加载,例如:
+### 使用 Demo 脚本进行摄像头推理

-  ```python
-  # 'DetectorNode':
-  # This node performs object detection from the frame image using an
-  # MMDetection model.
-  dict(
-      type='DetectorNode',
-      name='detector',
-      model_config='demo/mmdetection_cfg/'
-      'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
-      model_checkpoint='https://download.openmmlab.com'
-      '/mmdetection/v2.0/ssd/'
-      'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
-      'scratch_600e_coco_20210629_110627-974d9307.pth',
-      input_buffer='_input_',
-      output_buffer='det_result'),
-  ```
+除了 `demo/image_demo.py` 之外,所有的 Demo 脚本都支持摄像头输入。

-- **设置姿态预估模型**
+以 `demo/topdown_demo_with_mmdet.py` 为例,用户可以通过在命令中指定 **`--input webcam`** 来使用该脚本对摄像头输入进行推理:

-  这里我们用两个 [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) 结构的人体和动物姿态预估模型进行演示。用户可以自由使用 [MMPose Model Zoo](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo/body_2d_keypoint.html) 里的模型。需要注意的是,更换模型后用户需要在对应的 pose estimate node 里添加或修改对应的 `cls_names`,例如:
-
-  ```python
-  # 'TopdownPoseEstimatorNode':
-  # This node performs keypoint detection from the frame image using an
-  # MMPose top-down model. Detection results is needed.
-  dict(
-      type='TopdownPoseEstimatorNode',
-      name='human pose estimator',
-      model_config='configs/wholebody_2d_keypoint/'
-      'topdown_heatmap/coco-wholebody/'
-      'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
-      model_checkpoint='https://download.openmmlab.com/mmpose/'
-      'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
-      '-e2158108_20211205.pth',
-      labels=['person'],
-      input_buffer='det_result',
-      output_buffer='human_pose'),
-  dict(
-      type='TopdownPoseEstimatorNode',
-      name='animal pose estimator',
-      model_config='configs/animal_2d_keypoint/topdown_heatmap/'
-      'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
-      model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
-      'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
-      labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
-      input_buffer='human_pose',
-      output_buffer='animal_pose'),
-  ```
-
-- **使用本地视频文件**
-
-  如果想直接使用本地的视频文件,用户只需要把文件路径设置到 `camera_id` 就行。
-
-- **本机没有摄像头怎么办**
-
-  用户可以在自己手机安装上一些 app 就能替代摄像头,例如 [Camo](https://reincubate.com/camo/) 和 [DroidCam](https://www.dev47apps.com/) 。
-
-- **测试摄像头和显示器连接**
-
-  使用如下命令就能完成检测:
-
-  ```shell
-  python demo/webcam_api_demo.py --config demo/webcam_cfg/test_camera.py
-  ```

+```shell
+# inference with webcam
+python demo/topdown_demo_with_mmdet.py \
+    projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
+    https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
+    projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
+    https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
+    --input webcam \
+    --show
+```
diff --git a/demo/webcam_api_demo.py b/demo/webcam_api_demo.py
deleted file mode 100644
index 7d7ad263b1..0000000000
--- a/demo/webcam_api_demo.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
- -import logging -import warnings -from argparse import ArgumentParser - -from mmengine import Config, DictAction - -from mmpose.apis.webcam import WebcamExecutor -from mmpose.apis.webcam.nodes import model_nodes - - -def parse_args(): - parser = ArgumentParser('Webcam executor configs') - parser.add_argument( - '--config', type=str, default='demo/webcam_cfg/human_pose.py') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - default={}, - help='Override settings in the config. The key-value pair ' - 'in xxx=yyy format will be merged into config file. For example, ' - "'--cfg-options executor_cfg.camera_id=1'") - parser.add_argument( - '--debug', action='store_true', help='Show debug information.') - parser.add_argument( - '--cpu', action='store_true', help='Use CPU for model inference.') - parser.add_argument( - '--cuda', action='store_true', help='Use GPU for model inference.') - - return parser.parse_args() - - -def set_device(cfg: Config, device: str): - """Set model device in config. - - Args: - cfg (Config): Webcam config - device (str): device indicator like "cpu" or "cuda:0" - """ - - device = device.lower() - assert device == 'cpu' or device.startswith('cuda:') - - for node_cfg in cfg.executor_cfg.nodes: - if node_cfg.type in model_nodes.__all__: - node_cfg.update(device=device) - - return cfg - - -def run(): - - warnings.warn('The Webcam API will be deprecated in future. 
', - DeprecationWarning) - - args = parse_args() - cfg = Config.fromfile(args.config) - cfg.merge_from_dict(args.cfg_options) - - if args.debug: - logging.basicConfig(level=logging.DEBUG) - - if args.cpu: - cfg = set_device(cfg, 'cpu') - - if args.cuda: - cfg = set_device(cfg, 'cuda:0') - - webcam_exe = WebcamExecutor(**cfg.executor_cfg) - webcam_exe.run() - - -if __name__ == '__main__': - run() diff --git a/demo/webcam_cfg/human_animal_pose.py b/demo/webcam_cfg/human_animal_pose.py deleted file mode 100644 index 5eedc7f216..0000000000 --- a/demo/webcam_cfg/human_animal_pose.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -executor_cfg = dict( - # Basic configurations of the executor - name='Pose Estimation', - camera_id=0, - # Define nodes. - # The configuration of a node usually includes: - # 1. 'type': Node class name - # 2. 'name': Node name - # 3. I/O buffers (e.g. 'input_buffer', 'output_buffer'): specify the - # input and output buffer names. This may depend on the node class. - # 4. 'enable_key': assign a hot-key to toggle enable/disable this node. - # This may depend on the node class. - # 5. Other class-specific arguments - nodes=[ - # 'DetectorNode': - # This node performs object detection from the frame image using an - # MMDetection model. - dict( - type='DetectorNode', - name='detector', - model_config='demo/mmdetection_cfg/' - 'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py', - model_checkpoint='https://download.openmmlab.com' - '/mmdetection/v2.0/ssd/' - 'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_' - 'scratch_600e_coco_20210629_110627-974d9307.pth', - input_buffer='_input_', # `_input_` is an executor-reserved buffer - output_buffer='det_result'), - # 'TopdownPoseEstimatorNode': - # This node performs keypoint detection from the frame image using an - # MMPose top-down model. Detection results is needed. 
- dict( - type='TopdownPoseEstimatorNode', - name='human pose estimator', - model_config='configs/wholebody_2d_keypoint/' - 'topdown_heatmap/coco-wholebody/' - 'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py', - model_checkpoint='https://download.openmmlab.com/mmpose/' - 'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark' - '-e2158108_20211205.pth', - labels=['person'], - input_buffer='det_result', - output_buffer='human_pose'), - dict( - type='TopdownPoseEstimatorNode', - name='animal pose estimator', - model_config='configs/animal_2d_keypoint/topdown_heatmap/' - 'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py', - model_checkpoint='https://download.openmmlab.com/mmpose/animal/' - 'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth', - labels=['cat', 'dog', 'horse', 'sheep', 'cow'], - input_buffer='human_pose', - output_buffer='animal_pose'), - # 'ObjectAssignerNode': - # This node binds the latest model inference result with the current - # frame. (This means the frame image and inference result may be - # asynchronous). - dict( - type='ObjectAssignerNode', - name='object assigner', - frame_buffer='_frame_', # `_frame_` is an executor-reserved buffer - object_buffer='animal_pose', - output_buffer='frame'), - # 'ObjectVisualizerNode': - # This node draw the pose visualization result in the frame image. - # Pose results is needed. - dict( - type='ObjectVisualizerNode', - name='object visualizer', - enable_key='v', - enable=True, - show_bbox=True, - must_have_keypoint=False, - show_keypoint=True, - input_buffer='frame', - output_buffer='vis'), - # 'SunglassesNode': - # This node draw the sunglasses effect in the frame image. - # Pose results is needed. - dict( - type='SunglassesEffectNode', - name='sunglasses', - enable_key='s', - enable=False, - input_buffer='vis', - output_buffer='vis_sunglasses'), - # 'BigeyeEffectNode': - # This node draw the big-eye effetc in the frame image. - # Pose results is needed. 
- dict( - type='BigeyeEffectNode', - name='big-eye', - enable_key='b', - enable=False, - input_buffer='vis_sunglasses', - output_buffer='vis_bigeye'), - # 'NoticeBoardNode': - # This node show a notice board with given content, e.g. help - # information. - dict( - type='NoticeBoardNode', - name='instruction', - enable_key='h', - enable=True, - input_buffer='vis_bigeye', - output_buffer='vis_notice', - content_lines=[ - 'This is a demo for pose visualization and simple image ' - 'effects. Have fun!', '', 'Hot-keys:', - '"v": Pose estimation result visualization', - '"s": Sunglasses effect B-)', '"b": Big-eye effect 0_0', - '"h": Show help information', - '"m": Show diagnostic information', '"q": Exit' - ], - ), - # 'MonitorNode': - # This node show diagnostic information in the frame image. It can - # be used for debugging or monitoring system resource status. - dict( - type='MonitorNode', - name='monitor', - enable_key='m', - enable=False, - input_buffer='vis_notice', - output_buffer='display'), - # 'RecorderNode': - # This node save the output video into a file. - dict( - type='RecorderNode', - name='recorder', - out_video_file='webcam_api_demo.mp4', - input_buffer='display', - output_buffer='_display_' - # `_display_` is an executor-reserved buffer - ) - ]) diff --git a/demo/webcam_cfg/human_pose.py b/demo/webcam_cfg/human_pose.py deleted file mode 100644 index d1bac5722a..0000000000 --- a/demo/webcam_cfg/human_pose.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -executor_cfg = dict( - # Basic configurations of the executor - name='Pose Estimation', - camera_id=0, - # Define nodes. - # The configuration of a node usually includes: - # 1. 'type': Node class name - # 2. 'name': Node name - # 3. I/O buffers (e.g. 'input_buffer', 'output_buffer'): specify the - # input and output buffer names. This may depend on the node class. - # 4. 'enable_key': assign a hot-key to toggle enable/disable this node. 
- # This may depend on the node class. - # 5. Other class-specific arguments - nodes=[ - # 'DetectorNode': - # This node performs object detection from the frame image using an - # MMDetection model. - dict( - type='DetectorNode', - name='detector', - model_config='projects/rtmpose/rtmdet/person/' - 'rtmdet_nano_320-8xb32_coco-person.py', - model_checkpoint='https://download.openmmlab.com/mmpose/v1/' - 'projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth', # noqa - input_buffer='_input_', # `_input_` is an executor-reserved buffer - output_buffer='det_result'), - # 'TopdownPoseEstimatorNode': - # This node performs keypoint detection from the frame image using an - # MMPose top-down model. Detection results is needed. - dict( - type='TopdownPoseEstimatorNode', - name='human pose estimator', - model_config='projects/rtmpose/rtmpose/body_2d_keypoint/' - 'rtmpose-t_8xb256-420e_coco-256x192.py', - model_checkpoint='https://download.openmmlab.com/mmpose/v1/' - 'projects/rtmpose/rtmpose-tiny_simcc-aic-coco_pt-aic-coco_420e-256x192-cfc8f33d_20230126.pth', # noqa - labels=['person'], - input_buffer='det_result', - output_buffer='human_pose'), - # 'ObjectAssignerNode': - # This node binds the latest model inference result with the current - # frame. (This means the frame image and inference result may be - # asynchronous). - dict( - type='ObjectAssignerNode', - name='object assigner', - frame_buffer='_frame_', # `_frame_` is an executor-reserved buffer - object_buffer='human_pose', - output_buffer='frame'), - # 'ObjectVisualizerNode': - # This node draw the pose visualization result in the frame image. - # Pose results is needed. - dict( - type='ObjectVisualizerNode', - name='object visualizer', - enable_key='v', - enable=True, - show_bbox=True, - must_have_keypoint=False, - show_keypoint=True, - input_buffer='frame', - output_buffer='vis'), - # 'NoticeBoardNode': - # This node show a notice board with given content, e.g. help - # information. 
- dict( - type='NoticeBoardNode', - name='instruction', - enable_key='h', - enable=True, - input_buffer='vis', - output_buffer='vis_notice', - content_lines=[ - 'This is a demo for pose visualization and simple image ' - 'effects. Have fun!', '', 'Hot-keys:', - '"v": Pose estimation result visualization', - '"h": Show help information', - '"m": Show diagnostic information', '"q": Exit' - ], - ), - # 'MonitorNode': - # This node show diagnostic information in the frame image. It can - # be used for debugging or monitoring system resource status. - dict( - type='MonitorNode', - name='monitor', - enable_key='m', - enable=False, - input_buffer='vis_notice', - output_buffer='display'), - # 'RecorderNode': - # This node save the output video into a file. - dict( - type='RecorderNode', - name='recorder', - out_video_file='webcam_api_demo.mp4', - input_buffer='display', - output_buffer='_display_' - # `_display_` is an executor-reserved buffer - ) - ]) diff --git a/demo/webcam_cfg/test_camera.py b/demo/webcam_cfg/test_camera.py deleted file mode 100644 index e6d79cf6db..0000000000 --- a/demo/webcam_cfg/test_camera.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -executor_cfg = dict( - name='Test Webcam', - camera_id=0, - camera_max_fps=30, - nodes=[ - dict( - type='MonitorNode', - name='monitor', - enable_key='m', - enable=False, - input_buffer='_frame_', - output_buffer='display'), - # 'RecorderNode': - # This node save the output video into a file. - dict( - type='RecorderNode', - name='recorder', - out_video_file='webcam_api_output.mp4', - input_buffer='display', - output_buffer='_display_') - ]) diff --git a/docs/en/webcam_api.rst b/docs/en/webcam_api.rst deleted file mode 100644 index ff1c127515..0000000000 --- a/docs/en/webcam_api.rst +++ /dev/null @@ -1,112 +0,0 @@ -mmpose.apis.webcam --------------------- -.. 
contents:: MMPose Webcam API: Tools to build simple interactive webcam applications and demos - :depth: 2 - :local: - :backlinks: top - -Executor -^^^^^^^^^^^^^^^^^^^^ -.. currentmodule:: mmpose.apis.webcam -.. autosummary:: - :toctree: generated - :nosignatures: - - WebcamExecutor - -Nodes -^^^^^^^^^^^^^^^^^^^^ -.. currentmodule:: mmpose.apis.webcam.nodes - -Base Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - Node - BaseVisualizerNode - -Model Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - DetectorNode - TopdownPoseEstimatorNode - -Visualizer Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - ObjectVisualizerNode - NoticeBoardNode - SunglassesEffectNode - BigeyeEffectNode - -Helper Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - ObjectAssignerNode - MonitorNode - RecorderNode - -Utils -^^^^^^^^^^^^^^^^^^^^ -.. currentmodule:: mmpose.apis.webcam.utils - -Buffer and Message -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - - BufferManager - Message - FrameMessage - VideoEndingMessage - -Pose -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - - get_eye_keypoint_ids - get_face_keypoint_ids - get_hand_keypoint_ids - get_mouth_keypoint_ids - get_wrist_keypoint_ids - -Event -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - - EventManager - -Misc -"""""""""""""""""""" -.. 
autosummary:: - :toctree: generated - :nosignatures: - - copy_and_paste - screen_matting - expand_and_clamp - limit_max_fps - is_image_file - get_cached_file_path - load_image_from_disk_or_url - get_config_path diff --git a/docs/zh_cn/webcam_api.rst b/docs/zh_cn/webcam_api.rst deleted file mode 100644 index ff1c127515..0000000000 --- a/docs/zh_cn/webcam_api.rst +++ /dev/null @@ -1,112 +0,0 @@ -mmpose.apis.webcam --------------------- -.. contents:: MMPose Webcam API: Tools to build simple interactive webcam applications and demos - :depth: 2 - :local: - :backlinks: top - -Executor -^^^^^^^^^^^^^^^^^^^^ -.. currentmodule:: mmpose.apis.webcam -.. autosummary:: - :toctree: generated - :nosignatures: - - WebcamExecutor - -Nodes -^^^^^^^^^^^^^^^^^^^^ -.. currentmodule:: mmpose.apis.webcam.nodes - -Base Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - Node - BaseVisualizerNode - -Model Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - DetectorNode - TopdownPoseEstimatorNode - -Visualizer Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - ObjectVisualizerNode - NoticeBoardNode - SunglassesEffectNode - BigeyeEffectNode - -Helper Nodes -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - :template: webcam_node_class.rst - - ObjectAssignerNode - MonitorNode - RecorderNode - -Utils -^^^^^^^^^^^^^^^^^^^^ -.. currentmodule:: mmpose.apis.webcam.utils - -Buffer and Message -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - - BufferManager - Message - FrameMessage - VideoEndingMessage - -Pose -"""""""""""""""""""" -.. 
autosummary:: - :toctree: generated - :nosignatures: - - get_eye_keypoint_ids - get_face_keypoint_ids - get_hand_keypoint_ids - get_mouth_keypoint_ids - get_wrist_keypoint_ids - -Event -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - - EventManager - -Misc -"""""""""""""""""""" -.. autosummary:: - :toctree: generated - :nosignatures: - - copy_and_paste - screen_matting - expand_and_clamp - limit_max_fps - is_image_file - get_cached_file_path - load_image_from_disk_or_url - get_config_path
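The `--input webcam` convention added to the demo docs above can be sketched in a few lines. The snippet below is a minimal, hypothetical helper (the name `classify_input` and the error messages are mine, not from MMPose): the literal string `webcam` selects live camera capture, while any other value is classified as an image or a video from its MIME type, which is roughly how the demo scripts dispatch on their input argument.

```python
import mimetypes


def classify_input(input_path: str) -> str:
    """Decide how a demo script should treat its --input argument.

    Illustrative sketch only: 'webcam' means live capture; anything
    else is classified as 'image' or 'video' by MIME type.
    """
    if input_path == 'webcam':
        return 'webcam'
    media_type, _ = mimetypes.guess_type(input_path)
    if media_type is None:
        raise ValueError(f'Unsupported input: {input_path}')
    kind = media_type.split('/')[0]
    if kind not in ('image', 'video'):
        raise ValueError(f'Unsupported input type: {media_type}')
    return kind
```

For example, `classify_input('webcam')` returns `'webcam'`, while `classify_input('demo.mp4')` returns `'video'`, so a script can open a `cv2.VideoCapture` on the camera in the first case and on the file path in the second.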