Commit

docs: set long_description to the contents of README.md as the description on PyPI (#669)
geniuspatrick authored Jun 2, 2023
1 parent 6901173 commit 784fba4
Showing 10 changed files with 406 additions and 410 deletions.
348 changes: 137 additions & 211 deletions README.md

Large diffs are not rendered by default.

272 changes: 122 additions & 150 deletions README_CN.md

Large diffs are not rendered by default.

93 changes: 92 additions & 1 deletion RELEASE.md
@@ -1,6 +1,97 @@

# Release Note

- 2023/5/30
1. New Models:
- AMP(O2) version of [VGG](configs/vgg)
- [GhostNet](configs/ghostnet)
- AMP(O3) version of [MobileNetV2](configs/mobilenetv2) and [MobileNetV3](configs/mobilenetv3)
- (x,y)_(200,400,600,800)mf of [RegNet](configs/regnet)
- b1g2, b1g4 & b2g4 of [RepVGG](configs/repvgg)
- 0.5 of [MnasNet](configs/mnasnet)
- b3 & b4 of [PVTv2](configs/pvt_v2)
2. New Features:
- 3-Augment, Augmix, TrivialAugmentWide
3. Bug Fixes:
- ViT pooling mode

- 2023/04/28
1. Add some new models, listed as follows:
- [VGG](configs/vgg)
- [DPN](configs/dpn)
- [ResNet v2](configs/resnetv2)
- [MnasNet](configs/mnasnet)
- [MixNet](configs/mixnet)
- [RepVGG](configs/repvgg)
- [ConvNeXt](configs/convnext)
- [Swin Transformer](configs/swintransformer)
- [EdgeNeXt](configs/edgenext)
- [CrossViT](configs/crossvit)
- [XCiT](configs/xcit)
- [CoAT](configs/coat)
- [PiT](configs/pit)
- [PVT v2](configs/pvt_v2)
- [MobileViT](configs/mobilevit)
2. Bug fixes:
- Setting the same random seed for each rank
- Checking if options from yaml config exist in argument parser
- Initializing flag variable as `Tensor` in Optimizer `Adan`

## 0.2.0

- 2023/03/25
1. Update the pretrained ResNet checkpoints for better accuracy
- ResNet18 (from 70.09 to 70.31 @Top1 accuracy)
- ResNet34 (from 73.69 to 74.15 @Top1 accuracy)
- ResNet50 (from 76.64 to 76.69 @Top1 accuracy)
- ResNet101 (from 77.63 to 78.24 @Top1 accuracy)
- ResNet152 (from 78.63 to 78.72 @Top1 accuracy)
2. Rename checkpoint files to follow the naming rule (`{model_scale}-{sha256sum}.ckpt`) and update the download URLs.

- 2023/03/05
1. Add the Lion (EvoLved Sign Momentum) optimizer from the paper https://arxiv.org/abs/2302.06675
- When replacing AdamW with Lion, the learning rate is usually 3-10x smaller and the weight decay 3-10x larger than for AdamW (see the example command after this list).
2. Add 6 new models with training recipes and pretrained weights for
- [HRNet](configs/hrnet)
- [SENet](configs/senet)
- [GoogLeNet](configs/googlenet)
- [Inception V3](configs/inception_v3)
- [Inception V4](configs/inception_v4)
- [Xception](configs/xception)
3. Support gradient clipping
4. The argument name `use_ema` was changed to **`ema`**; add `ema: True` to the yaml config to enable EMA.
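
For illustration, a minimal command-line sketch of the Lion scaling advice and the renamed EMA switch. The `--opt`, `--lr`, `--weight_decay` and `--ema` argument names are assumptions (only the yaml key `ema: True` is confirmed above), so check `python train.py --help` or the yaml files under `configs/` for the exact names and sensible values.

```shell
# Assumed baseline with AdamW (argument names and values are illustrative only)
python train.py --model=resnet50 --dataset=imagenet --opt=adamw --lr=1e-3 --weight_decay=0.05 --ema=True

# Same run with Lion: LR shrunk ~5x and weight decay grown ~5x relative to the AdamW values
python train.py --model=resnet50 --dataset=imagenet --opt=lion --lr=2e-4 --weight_decay=0.25 --ema=True
```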

## 0.1.1

- 2023/01/10
1. MindCV v0.1 is released! It can now be installed from PyPI via `pip install mindcv`.
2. Add training recipes and trained weights for googlenet, inception_v3, inception_v4, and xception

## 0.1.0

- 2022/12/09
1. Support LR warmup for all LR scheduling algorithms, not just cosine decay.
2. Add repeated augmentation, which can be enabled by setting `--aug_repeats` to a value larger than 1 (typically 3 or 4); see the example command after this list.
3. Add EMA.
4. Improve BCE loss to support mixup/cutmix.
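
A minimal usage sketch of these options; only `--aug_repeats` appears verbatim above, while the warmup and EMA argument names are assumptions to be verified against `python train.py --help`.

```shell
# Repeated augmentation: each sample contributes 3 augmented copies per batch (--aug_repeats=3).
# --warmup_epochs and --ema are assumed names for the LR-warmup and EMA switches.
python train.py --model=resnet50 --dataset=imagenet --data_dir=/path/to/imagenet \
    --aug_repeats=3 --warmup_epochs=5 --ema=True
```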

- 2022/11/21
1. Add visualization of loss and accuracy curves
2. Support epoch-wise LR warmup with cosine decay (previously step-wise)

- 2022/11/09
1. Add 7 pretrained ViT models.
2. Add RandAugment augmentation.
3. Fix a CutMix efficiency issue; CutMix and Mixup can now be used together.
4. Fix LR plotting and scheduling bugs.

- 2022/10/12
1. Both BCE and CE loss now support class-weight config, label smoothing, and auxiliary logit input (for networks like inception).
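
As a rough sketch of turning these loss options on from the command line; the `--loss` and `--label_smoothing` argument names are assumptions, and auxiliary logits are presumably picked up automatically for models that emit them (e.g. inception_v3).

```shell
# Loss selection and label smoothing (argument names are assumed, not confirmed)
python train.py --model=inception_v3 --dataset=imagenet --data_dir=/path/to/imagenet \
    --loss=CE --label_smoothing=0.1
```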

## 0.0.1-beta

- 2022/09/13
1. Add Adan optimizer (experimental)

## MindSpore Computer Vision 0.0.1

### Models
66 changes: 33 additions & 33 deletions docs/en/index.md
@@ -85,7 +85,7 @@ Below are a few code snippets for your taste.
<img src="https://user-images.githubusercontent.com/8156835/210049681-89f68b9f-eb44-44e2-b689-4d30c93c6191.jpg" width=360 />
</p>

Classify the dowloaded image with a pretrained SoTA model:
Classify the downloaded image with a pretrained SoTA model:

```pycon
>>> !python infer.py --model=swin_tiny --image_path='./dog.jpg'
@@ -156,7 +156,7 @@ It is easy to train your model on a standard or customized dataset using `train.
[Pynative mode with ms_function](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/pynative_graph/combine.html) is a mixed mode combining flexibility and efficiency in MindSpore. To apply pynative mode with ms_function for training, please run `train_with_func.py`, e.g.,
``` shell
```shell
python train_with_func.py --model=resnet50 --dataset=cifar10 --dataset_download --epoch_size=10
```
@@ -201,42 +201,42 @@ We provide the following jupyter notebook tutorials to help users learn to use M
<summary> Supported algorithms </summary>
* Augmentation
    * [AutoAugment](https://arxiv.org/abs/1805.09501)
    * [RandAugment](https://arxiv.org/abs/1909.13719)
    * [Repeated Augmentation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Hoffer_Augment_Your_Batch_Improving_Generalization_Through_Instance_Repetition_CVPR_2020_paper.pdf)
    * RandErasing (Cutout)
    * CutMix
    * MixUp
    * RandomResizeCrop
    * Color Jitter, Flip, etc
* Optimizer
    * Adam
    * AdamW
    * [Lion](https://arxiv.org/abs/2302.06675)
    * Adan (experimental)
    * AdaGrad
    * LAMB
    * Momentum
    * RMSProp
    * SGD
    * NAdam
* LR Scheduler
    * Warmup Cosine Decay
    * Step LR
    * Polynomial Decay
    * Exponential Decay
* Regularization
    * Weight Decay
    * Label Smoothing
    * Stochastic Depth (depends on networks)
    * Dropout (depends on networks)
* Loss
    * Cross Entropy (w/ class weight and auxiliary logit support)
    * Binary Cross Entropy (w/ class weight and auxiliary logit support)
    * Soft Cross Entropy Loss (automatically enabled if mixup or label smoothing is used)
    * Soft Binary Cross Entropy Loss (automatically enabled if mixup or label smoothing is used)
* Ensemble
    * Warmup EMA (Exponential Moving Average)
</details>
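
In practice these building blocks are usually selected together through a yaml recipe rather than one flag at a time. A minimal sketch, assuming a `--config` argument and a hypothetical recipe path; substitute a real yaml file from `configs/`.

```shell
# Reuse a predefined training recipe (the yaml path below is hypothetical)
python train.py --config=configs/resnet/resnet_50_ascend.yaml --data_dir=/path/to/imagenet
```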
21 changes: 10 additions & 11 deletions docs/zh/index.md
@@ -55,7 +55,7 @@ MindCV is a toolbox developed based on [MindSpore](https://www.mindspore.cn/), dedicated to

## Installation

See the [installation](./installation.md) page for details.

## Quick Start

@@ -73,7 +73,7 @@ MindCV is a toolbox developed based on [MindSpore](https://www.mindspore.cn/), dedicated to
# create a model
>>> network = mindcv.create_model('swin_tiny', pretrained=True)
# validate the accuracy of the model
>>> !python validate.py - -model = swin_tiny - -pretrained - -dataset = imagenet - -val_split = validation
>>> !python validate.py --model=swin_tiny --pretrained --dataset=imagenet --val_split=validation
{'Top_1_Accuracy': 0.808343989769821, 'Top_5_Accuracy': 0.9527253836317136, 'loss': 0.8474242982580839}
```

@@ -96,7 +96,7 @@ MindCV is a toolbox developed based on [MindSpore](https://www.mindspore.cn/), dedicated to

### Model Training

With `train.py`, users can easily train models on a standard or customized dataset; training strategies (e.g., data augmentation, learning path strategy) can be configured via external arguments or a yaml config file.
With `train.py`, users can easily train models on a standard or customized dataset; training strategies (e.g., data augmentation, learning rate strategy) can be configured via external arguments or a yaml config file.

- Standalone training

@@ -133,18 +133,17 @@ MindCV is a toolbox developed based on [MindSpore](https://www.mindspore.cn/), dedicated to
!!! tip "Predefined training recipes"
    MindCV currently provides more than 20 model training recipes that achieve SoTA performance on ImageNet.
    For the specific parameter configurations and a detailed summary of accuracy and performance, please see the [`configs`](https://github.com/mindspore-lab/mindcv/tree/main/configs) folder.
    You can conveniently apply these training recipes to your own model training to improve performance (simply reuse or modify the corresponding yaml file).

- Training on ModelArts/OpenI platforms

To train on the [ModelArts](https://www.huaweicloud.com/intl/en-us/product/modelarts.html) or [OpenI](https://openi.pcl.ac.cn/) cloud platform, the following steps are required:

```text
1. Create a new training task on the cloud platform.
2. Add the run parameter `config` in the website UI and specify the path of the yaml config file.
3. Add the run parameter `enable_modelarts` in the website UI and set it to True.
4. Fill in the remaining training information on the website and start the training task.
```

!!! tip "Graph mode and PyNative mode"
@@ -160,7 +159,7 @@ MindCV is a toolbox developed based on [MindSpore](https://www.mindspore.cn/), dedicated to
python train_with_func.py --model=resnet50 --dataset=cifar10 --dataset_download --epoch_size=10
```

> Note: this is an experimental training script that is still being improved; it is currently not stable on MindSpore 1.8.1 or earlier.

### Model Validation

@@ -177,7 +176,7 @@ python validate.py --model=resnet50 --dataset=imagenet --data_dir=/path/to/data

```shell
python train.py --model=resnet50 --dataset=cifar10 \
    --val_while_train --val_split=test --val_interval=1
```

The training loss and test accuracy of each epoch will be saved in `{ckpt_save_dir}/results.log`.
@@ -206,13 +205,13 @@ python validate.py --model=resnet50 --dataset=imagenet --data_dir=/path/to/data
* [Repeated Augmentation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Hoffer_Augment_Your_Batch_Improving_Generalization_Through_Instance_Repetition_CVPR_2020_paper.pdf)
* RandErasing (Cutout)
* CutMix
* Mixup
* MixUp
* RandomResizeCrop
* Color Jitter, Flip, etc
* Optimizer
* Adam
* AdamW
* [Lion](https://arxiv.org/abs/2302.06675)
* Adan (experimental)
* AdaGrad
* LAMB
File renamed without changes.
2 changes: 1 addition & 1 deletion infer.py
@@ -47,7 +47,7 @@ def main():
logits = nn.Softmax()(network(ms.Tensor(img)))[0].asnumpy()
preds = np.argsort(logits)[::-1][:5]
probs = logits[preds]
with open("./tutorials/imagenet1000_clsidx_to_labels.txt", encoding="utf-8") as f:
with open("./examples/data/imagenet1000_clsidx_to_labels.txt", encoding="utf-8") as f:
idx2label = ast.literal_eval(f.read())
# print(f"Predict result of {args.image_path}:")
cls_prob = {}
13 changes: 11 additions & 2 deletions setup.py
@@ -1,19 +1,28 @@
#!/usr/bin/env python

from pathlib import Path

from setuptools import find_packages, setup

# read the contents of README file
this_directory = Path(__file__).parent
long_description = (this_directory / "README.md").read_text()

# read the `__version__` global variable in `version.py`
exec(open("mindcv/version.py").read())

setup(
name="mindcv",
author="MindSpore Ecosystem",
author_email="mindspore-ecosystem@example.com",
author="MindSpore Lab",
author_email="mindspore-lab@example.com",
url="https://github.com/mindspore-lab/mindcv",
project_urls={
"Sources": "https://github.com/mindspore-lab/mindcv",
"Issue Tracker": "https://github.com/mindspore-lab/mindcv/issues",
},
description="A toolbox of vision models and algorithms based on MindSpore.",
long_description=long_description,
long_description_content_type="text/markdown",
license="Apache Software License 2.0",
include_package_data=True,
packages=find_packages(include=["mindcv", "mindcv.*"]),
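
Since the point of this change is that PyPI renders README.md as the project description, one way to sanity-check the result before uploading is the standard packaging toolchain; a sketch, assuming the `build` and `twine` packages are installed (neither is part of mindcv).

```shell
pip install build twine   # packaging helpers used only for this check
python -m build           # builds sdist/wheel, embedding the README-based long_description
twine check dist/*        # verifies that the long_description will render on PyPI
```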
1 change: 0 additions & 1 deletion tutorials/README.md

This file was deleted.

Binary file removed tutorials/data/test/dog/dog.jpg
Binary file not shown.
