Bump version 0.28.0 (#1468)
* update changelog

* update readme
ly015 authored Jul 6, 2022
1 parent 8174b01 commit 5123a2a
Showing 5 changed files with 49 additions and 52 deletions.
23 changes: 9 additions & 14 deletions README.md
@@ -26,11 +26,11 @@
[![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues)
[![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues)

[📘Documentation](https://mmpose.readthedocs.io/en/v0.27.0/) |
[🛠️Installation](https://mmpose.readthedocs.io/en/v0.27.0/install.html) |
[👀Model Zoo](https://mmpose.readthedocs.io/en/v0.27.0/modelzoo.html) |
[📜Papers](https://mmpose.readthedocs.io/en/v0.27.0/papers/algorithms.html) |
[🆕Update News](https://mmpose.readthedocs.io/en/v0.27.0/changelog.html) |
[📘Documentation](https://mmpose.readthedocs.io/en/v0.28.0/) |
[🛠️Installation](https://mmpose.readthedocs.io/en/v0.28.0/install.html) |
[👀Model Zoo](https://mmpose.readthedocs.io/en/v0.28.0/modelzoo.html) |
[📜Papers](https://mmpose.readthedocs.io/en/v0.28.0/papers/algorithms.html) |
[🆕Update News](https://mmpose.readthedocs.io/en/v0.28.0/changelog.html) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmpose/issues/new/choose)

</div>
@@ -78,15 +78,10 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb-

## What's New

- 2022-06-07: MMPose [v0.27.0](https://github.com/open-mmlab/mmpose/releases/tag/v0.27.0) is released. Major updates include:
  - Support hand gesture recognition
    - Try the [demo](/demo/docs/gesture_recognition_demo.md) for gesture recognition
    - Learn more about the [algorithm](/docs/en/papers/algorithms/mtut.md), [dataset](/docs/en/papers/datasets/nvgesture.md) and [experiment results](/configs/hand/gesture_sview_rgbd_vid/mtut/nvgesture/i3d_nvgesture.md)
  - Major upgrade to MMPose Webcam API towards simpler and more efficient development of pose-empowered applications
    - Tutorials ([EN](/docs/en/tutorials/7_webcam_api.md)|[zh_CN](/docs/zh_cn/tutorials/7_webcam_api.md))
    - [API Reference](https://mmpose.readthedocs.io/en/latest/api.html#mmpose-apis-webcam)
    - [Demo](/demo/docs/webcam_demo.md)
- 2022-04: MMPose is available on [Gitee](https://gitee.com/open-mmlab/mmpose)
- 2022-07-06: MMPose [v0.28.0](https://github.com/open-mmlab/mmpose/releases/tag/v0.28.0) is released. Major updates include:
- Support [TCFormer](https://openaccess.thecvf.com/content/CVPR2022/html/Zeng_Not_All_Tokens_Are_Equal_Human-Centric_Visual_Analysis_via_Token_CVPR_2022_paper.html) (CVPR'2022). See the [model page](/configs/wholebody/2d_kpt_sview_rgb_img/topdown_heatmap/coco-wholebody/tcformer_coco-wholebody.md)
- Add [RLE](https://arxiv.org/abs/2107.11291) pre-trained model on COCO dataset. See the [model page](/configs/body/2d_kpt_sview_rgb_img/deeppose/coco/resnet_rle_coco.md)
- Update [Swin](/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/swin_coco.md) models with better performance
- 2022-02-28: MMPose model deployment is supported by [MMDeploy](https://github.com/open-mmlab/mmdeploy) v0.3.0
MMPose Webcam API is a simple yet powerful tool to develop interactive webcam applications with MMPose features.
- 2021-12-29: OpenMMLab Open Platform is online! Try our [pose estimation demo](https://platform.openmmlab.com/web-demo/demo/poseestimation)
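
The v0.28.0 notes above link to model pages for the newly added TCFormer and RLE models. As a rough illustration only, the sketch below runs top-down inference with MMPose's 0.x Python API (`init_pose_model` / `inference_top_down_pose_model`); it is not part of this commit, and the config filename, checkpoint, image path, and bounding box are placeholder assumptions to be replaced with the files listed on the linked model pages.

```python
# Minimal sketch (not part of this commit): top-down inference with a newly added model.
# The config filename below is hypothetical; use the actual config and checkpoint URL
# from the model page referenced in the release notes.
from mmpose.apis import inference_top_down_pose_model, init_pose_model

config_file = 'configs/body/2d_kpt_sview_rgb_img/deeppose/coco/res50_rle_coco_256x192.py'  # hypothetical path
checkpoint_file = None  # set to the checkpoint URL from the model page

model = init_pose_model(config_file, checkpoint_file, device='cpu')

# One detected person, given as a bounding box in xyxy format (placeholder values).
person_results = [{'bbox': [50, 50, 250, 400]}]

pose_results, _ = inference_top_down_pose_model(
    model, 'demo.jpg', person_results, format='xyxy')
print(pose_results[0]['keypoints'].shape)  # (num_keypoints, 3): x, y, score
```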
22 changes: 9 additions & 13 deletions README_CN.md
@@ -26,11 +26,11 @@
[![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues)
[![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues)

[📘Documentation](https://mmpose.readthedocs.io/zh_CN/v0.27.0/) |
[🛠️Installation](https://mmpose.readthedocs.io/zh_CN/v0.27.0/install.html) |
[👀Model Zoo](https://mmpose.readthedocs.io/zh_CN/v0.27.0/modelzoo.html) |
[📜Papers](https://mmpose.readthedocs.io/zh_CN/v0.27.0/papers/algorithms.html) |
[🆕Update News](https://mmpose.readthedocs.io/en/v0.27.0/changelog.html) |
[📘Documentation](https://mmpose.readthedocs.io/zh_CN/v0.28.0/) |
[🛠️Installation](https://mmpose.readthedocs.io/zh_CN/v0.28.0/install.html) |
[👀Model Zoo](https://mmpose.readthedocs.io/zh_CN/v0.28.0/modelzoo.html) |
[📜Papers](https://mmpose.readthedocs.io/zh_CN/v0.28.0/papers/algorithms.html) |
[🆕Update News](https://mmpose.readthedocs.io/en/v0.28.0/changelog.html) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmpose/issues/new/choose)

</div>
@@ -78,14 +78,10 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb-

## What's New

- 2022-06-07: MMPose [v0.27.0](https://github.com/open-mmlab/mmpose/releases/tag/v0.27.0) has been released. Major updates include:
  - Support hand gesture recognition
    - Try the gesture recognition [demo](/demo/docs/gesture_recognition_demo.md)
    - Learn more about the [algorithm](/docs/en/papers/algorithms/mtut.md), [dataset](/docs/en/papers/datasets/nvgesture.md) and [pre-trained models](/configs/hand/gesture_sview_rgbd_vid/mtut/nvgesture/i3d_nvgesture.md)
  - Major upgrade to the MMPose Webcam API for simpler and more efficient development of pose-based applications
    - Tutorials ([zh_CN](/docs/zh_cn/tutorials/7_webcam_api.md)|[EN](/docs/en/tutorials/7_webcam_api.md))
    - [API Reference](https://mmpose.readthedocs.io/zh_CN/latest/api.html#mmpose-apis-webcam)
    - [Demo](/demo/docs/webcam_demo.md)
- 2022-07-06: MMPose [v0.28.0](https://github.com/open-mmlab/mmpose/releases/tag/v0.28.0) has been released. Major updates include:
  - Support the new backbone [TCFormer](https://openaccess.thecvf.com/content/CVPR2022/html/Zeng_Not_All_Tokens_Are_Equal_Human-Centric_Visual_Analysis_via_Token_CVPR_2022_paper.html) (CVPR'2022). See the [model page](/configs/wholebody/2d_kpt_sview_rgb_img/topdown_heatmap/coco-wholebody/tcformer_coco-wholebody.md)
  - Add [RLE](https://arxiv.org/abs/2107.11291) models on the COCO dataset. See the [model page](/configs/body/2d_kpt_sview_rgb_img/deeppose/coco/resnet_rle_coco.md)
  - Update [Swin](/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/swin_coco.md) models with better performance
- 2022-04: MMPose is available on [Gitee](https://gitee.com/open-mmlab/mmpose)
- 2022-02-28: MMPose model deployment is supported by [MMDeploy](https://github.com/open-mmlab/mmdeploy) v0.3.0
- 2021-12-29: The OpenMMLab Open Platform is online! Try the MMPose-based [pose estimation demo](https://platform.openmmlab.com/web-demo/demo/poseestimation)
29 changes: 29 additions & 0 deletions docs/en/changelog.md
@@ -1,5 +1,34 @@
# Changelog

## v0.28.0 (06/07/2022)

**Highlights**

- Support [TCFormer](https://openaccess.thecvf.com/content/CVPR2022/html/Zeng_Not_All_Tokens_Are_Equal_Human-Centric_Visual_Analysis_via_Token_CVPR_2022_paper.html) backbone, CVPR'2022 ([#1447](https://github.com/open-mmlab/mmpose/pull/1447), [#1452](https://github.com/open-mmlab/mmpose/pull/1452)) @zengwang430521
- Add [RLE](https://arxiv.org/abs/2107.11291) models on COCO dataset ([#1424](https://github.com/open-mmlab/mmpose/pull/1424)) @Indigo6, @Ben-Louis, @ly015
- Update Swin models with better performance ([#1467](https://github.com/open-mmlab/mmpose/pull/1467)) @jin-s13

**New Features**

- Support [TCFormer](https://openaccess.thecvf.com/content/CVPR2022/html/Zeng_Not_All_Tokens_Are_Equal_Human-Centric_Visual_Analysis_via_Token_CVPR_2022_paper.html) backbone, CVPR'2022 ([#1447](https://github.com/open-mmlab/mmpose/pull/1447), [#1452](https://github.com/open-mmlab/mmpose/pull/1452)) @zengwang430521
- Add [RLE](https://arxiv.org/abs/2107.11291) models on COCO dataset ([#1424](https://github.com/open-mmlab/mmpose/pull/1424)) @Indigo6, @Ben-Louis, @ly015
- Support layer decay optimizer constructor and learning rate decay optimizer constructor ([#1423](https://github.com/open-mmlab/mmpose/pull/1423)) @jin-s13

**Improvements**

- Improve documentation quality ([#1416](https://github.com/open-mmlab/mmpose/pull/1416), [#1421](https://github.com/open-mmlab/mmpose/pull/1421), [#1423](https://github.com/open-mmlab/mmpose/pull/1423), [#1426](https://github.com/open-mmlab/mmpose/pull/1426), [#1458](https://github.com/open-mmlab/mmpose/pull/1458), [#1463](https://github.com/open-mmlab/mmpose/pull/1463)) @ly015, @liqikai9
- Support installation by [mim](https://github.com/open-mmlab/mim) ([#1425](https://github.com/open-mmlab/mmpose/pull/1425)) @liqikai9
- Support PAVI logger ([#1434](https://github.com/open-mmlab/mmpose/pull/1434)) @EvelynWang-0423
- Add progress bar for some demos ([#1454](https://github.com/open-mmlab/mmpose/pull/1454)) @liqikai9
- Webcam API supports quick device setting in terminal commands ([#1466](https://github.com/open-mmlab/mmpose/pull/1466)) @ly015
- Update Swin models with better performance ([#1467](https://github.com/open-mmlab/mmpose/pull/1467)) @jin-s13

**Bug Fixes**

- Rename `custom_hooks_config` to `custom_hooks` in configs to align with the documentation ([#1427](https://github.com/open-mmlab/mmpose/pull/1427)) @ly015
- Fix deadlock issue in Webcam API ([#1430](https://github.com/open-mmlab/mmpose/pull/1430)) @ly015
- Fix smoother configs in video 3D demo ([#1457](https://github.com/open-mmlab/mmpose/pull/1457)) @ly015

## v0.27.0 (07/06/2022)

**Highlights**
25 changes: 1 addition & 24 deletions docs/en/papers/backbones/tcformer.md
@@ -21,30 +21,7 @@

<!-- [ABSTRACT] -->

Vision transformers have achieved great successes in many computer vision tasks. Most methods generate vision tokens by splitting an image into a regular and fixed grid and treating each cell as a token. However, not all regions are equally important in human-centric vision tasks, e.g., the human body needs a fine representation with many tokens, while the image background can be modeled by a few tokens. To address this problem, we propose a novel Vision Transformer, called Token Clustering Transformer (TCFormer), which merges tokens by progressive clustering, where the tokens can be merged from different locations with flexible shapes and sizes. The tokens in TCFormer can not only focus on important areas but also adjust the token shapes to fit the semantic concept and adopt a fine resolution for regions containing critical details, which is beneficial to capturing detailed information. Extensive experiments show that TCFormer consistently outperforms its counterparts on different challenging human-centric tasks and datasets, including whole-body pose estimation on COCO-WholeBody and 3D human mesh reconstruction on 3DPW. Code is available at https://github.com/zengwang430521/TCFormer.git.

<!-- [IMAGE] -->

2 changes: 1 addition & 1 deletion mmpose/version.py
@@ -1,6 +1,6 @@
# Copyright (c) Open-MMLab. All rights reserved.

__version__ = '0.27.0'
__version__ = '0.28.0'
short_version = __version__
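
As a quick sanity check after upgrading (not part of this commit), the installed package should report the bumped version string; this assumes mmpose re-exports `__version__` from `mmpose/version.py` at package level, as OpenMMLab projects typically do.

```python
# Illustrative check only: confirm the installed MMPose reports the bumped version.
import mmpose

print(mmpose.__version__)  # expected: '0.28.0'
assert mmpose.__version__.startswith('0.28')
```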


