refactor: uniform all model names
XixinYang committed Jul 11, 2023
1 parent 8bc296a commit 06ab4a6
Showing 143 changed files with 749 additions and 742 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -134,7 +134,7 @@ It is easy to train your model on a standard or customized dataset using `train.
You can configure your model and other components either by specifying external parameters or by writing a yaml config file. Here is an example of training using a preset yaml file.
```shell
-mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet_1.0_gpu.yaml
+mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet1_0_gpu.yaml
```
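
For comparison, the same components can also be configured entirely through external parameters instead of a yaml file. The block below is only a minimal sketch, assuming the `train.py` flags (`--model`, `--dataset`, `--data_dir`, `--distribute`) that appear in other examples in this commit.

```shell
# Hedged sketch: configure the model and data on the command line
# rather than through a preset yaml recipe (flag names assumed from
# the other examples in this repository).
python train.py --model=resnet50 --dataset=cifar10 --data_dir=/path/to/cifar10 --distribute False
```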
**Pre-defined Training Strategies:**
@@ -216,7 +216,7 @@ Currently, MindCV supports the model families listed below. More models with pre
* EfficientNet (MBConvNet Family) https://arxiv.org/abs/1905.11946
* EfficientNet V2 - https://arxiv.org/abs/2104.00298
* GhostNet - https://arxiv.org/abs/1911.11907
-* GoogleNet - https://arxiv.org/abs/1409.4842
+* GoogLeNet - https://arxiv.org/abs/1409.4842
* Inception-V3 - https://arxiv.org/abs/1512.00567
* Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
* MNASNet - https://arxiv.org/abs/1807.11626
4 changes: 2 additions & 2 deletions README_CN.md
@@ -135,7 +135,7 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg'
You can write a yaml file or set external parameters to specify the data, model, optimizer, and other components and their hyperparameters. The following is an example of training a model with a preset training strategy (yaml file).

```shell
-mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet_1.0_gpu.yaml
+mpirun --allow-run-as-root -n 4 python train.py -c configs/squeezenet/squeezenet1_0_gpu.yaml
```

**Pre-defined Training Strategies**
@@ -217,7 +217,7 @@ python train.py --model=resnet50 --dataset=cifar10 \
* EfficientNet (MBConvNet Family) https://arxiv.org/abs/1905.11946
* EfficientNet V2 - https://arxiv.org/abs/2104.00298
* GhostNet - https://arxiv.org/abs/1911.11907
-* GoogleNet - https://arxiv.org/abs/1409.4842
+* GoogLeNet - https://arxiv.org/abs/1409.4842
* Inception-V3 - https://arxiv.org/abs/1512.00567
* Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
* MNASNet - https://arxiv.org/abs/1807.11626
2 changes: 1 addition & 1 deletion RELEASE.md
@@ -123,7 +123,7 @@
`mindcv.models` now expose `num_classes` and `in_channels` as constructor arguments:

- Add DenseNet models and pre-trained weights
-- Add GoogleNet models and pre-trained weights
+- Add GoogLeNet models and pre-trained weights
- Add Inception V3 models and pre-trained weights
- Add Inception V4 models and pre-trained weights
- Add MnasNet models and pre-trained weights
196 changes: 100 additions & 96 deletions benchmark_results.md

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions configs/README.md
@@ -56,10 +56,10 @@ For consistency, it is recommended to provide distributed training commands base

```shell
# standalone training on a gpu or ascend device
-python train.py --config configs/densenet/densenet_121_gpu.yaml --data_dir /path/to/dataset --distribute False
+python train.py --config configs/densenet/densenet121_gpu.yaml --data_dir /path/to/dataset --distribute False
# distributed training on gpu or ascend devices
-mpirun -n 8 python train.py --config configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/imagenet
+mpirun -n 8 python train.py --config configs/densenet/densenet121_ascend.yaml --data_dir /path/to/imagenet
```
> If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`.
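
As an illustration of that note, the distributed command would simply gain the extra flag; this is a sketch only, reusing the renamed densenet121 recipe from above.

```shell
# Sketch: the same distributed run executed as the root user,
# so --allow-run-as-root is added to mpirun.
mpirun --allow-run-as-root -n 8 python train.py --config configs/densenet/densenet121_ascend.yaml --data_dir /path/to/imagenet
```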
2 changes: 1 addition & 1 deletion configs/bit/bit_resnet101_ascend.yaml
@@ -18,7 +18,7 @@ hflip: 0.5
crop_pct: 0.875

# model
-model: 'BiTresnet101'
+model: 'BiT_resnet101'
num_classes: 1000
pretrained: False
ckpt_path: ''
2 changes: 1 addition & 1 deletion configs/bit/bit_resnet50_ascend.yaml
@@ -18,7 +18,7 @@ hflip: 0.5
crop_pct: 0.875

# model
-model: 'BiTresnet50'
+model: 'BiT_resnet50'
num_classes: 1000
pretrained: False
ckpt_path: ''
2 changes: 1 addition & 1 deletion configs/bit/bit_resnet50x3_ascend.yaml
@@ -20,7 +20,7 @@ crop_pct: 0.875
auto_augment: "randaug-m7-mstd0.5"

# model
-model: 'BiTresnet50x3'
+model: 'BiT_resnet50x3'
num_classes: 1000
pretrained: False
ckpt_path: ''
6 changes: 3 additions & 3 deletions configs/convnext/README.md
@@ -25,9 +25,9 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
|----------------|-----------|-----------|-----------|------------|-------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
-| ConvNeXt_tiny  | D910x64-G | 81.91     | 95.79     | 28.59      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt)   |
-| ConvNeXt_small | D910x64-G | 83.40     | 96.36     | 50.22      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_small_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_small-e23008f3.ckpt)  |
-| ConvNeXt_base  | D910x64-G | 83.32     | 96.24     | 88.59      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_base_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_base-ee3544b8.ckpt)   |
+| convnext_tiny  | D910x64-G | 81.91     | 95.79     | 28.59      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt)   |
+| convnext_small | D910x64-G | 83.40     | 96.36     | 50.22      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_small_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_small-e23008f3.ckpt)  |
+| convnext_base  | D910x64-G | 83.32     | 96.24     | 88.59      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_base_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_base-ee3544b8.ckpt)   |

</div>

6 changes: 3 additions & 3 deletions configs/convnextv2/README.md
@@ -22,9 +22,9 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

<div align="center">

-| Model           | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe                                                                                                     | Download                                                                                            |
-|-----------------|----------|-----------|-----------|------------|----------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
-| ConvNeXtV2_tiny | D910x8-G | 82.43     | 95.98     | 28.64      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt)   |
+| Model            | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe                                                                                                     | Download                                                                                            |
+|------------------|----------|-----------|-----------|------------|----------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
+| convnextv2_tiny  | D910x8-G | 82.43     | 95.98     | 28.64      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt)   |

</div>

4 changes: 2 additions & 2 deletions configs/crossvit/README.md
@@ -1,4 +1,4 @@
-# Crossvit
+# CrossViT
> [CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification](https://arxiv.org/abs/2103.14899)
## Introduction
@@ -77,7 +77,7 @@ python train.py --config configs/crossvit/crossvit_15_ascend.yaml --data_dir /pa
To validate the accuracy of the trained model, you can use `validate.py` and pass the checkpoint path with `--ckpt_path`.

```
-python validate.py -c configs/crossvit/crossvit15_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
+python validate.py -c configs/crossvit/crossvit_15_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

### Deployment
2 changes: 1 addition & 1 deletion configs/crossvit/crossvit_15_ascend.yaml
@@ -28,7 +28,7 @@ crop_pct: 0.935
ema: True

# model
-model: 'crossvit15'
+model: 'crossvit_15'
num_classes: 1000
pretrained: False
ckpt_path: ''
2 changes: 1 addition & 1 deletion configs/crossvit/crossvit_18_ascend.yaml
@@ -28,7 +28,7 @@ crop_pct: 0.935
ema: True

# model
-model: 'crossvit18'
+model: 'crossvit_18'
num_classes: 1000
pretrained: False
ckpt_path: ''
2 changes: 1 addition & 1 deletion configs/crossvit/crossvit_9_ascend.yaml
@@ -27,7 +27,7 @@ color_jitter: 0.4
crop_pct: 0.935

# model
-model: 'crossvit9'
+model: 'crossvit_9'
num_classes: 1000
pretrained: False
ckpt_path: ''
16 changes: 8 additions & 8 deletions configs/densenet/README.md
@@ -37,12 +37,12 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

<div align="center">

-| Model        | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe                                                                                                | Download                                                                                             |
-|--------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
-| densenet_121 | D910x8-G | 75.64     | 92.84     | 8.06       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt)  |
-| densenet_161 | D910x8-G | 79.09     | 94.66     | 28.90      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_161_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet161-120_5004_Ascend.ckpt)  |
-| densenet_169 | D910x8-G | 77.26     | 93.71     | 14.31      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_169_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet169-120_5004_Ascend.ckpt)  |
-| densenet_201 | D910x8-G | 78.14     | 94.08     | 20.24      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_201_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet201-120_5004_Ascend.ckpt)  |
+| Model       | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe                                                                                               | Download                                                                                             |
+|-------------|----------|-----------|-----------|------------|------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
+| densenet121 | D910x8-G | 75.64     | 92.84     | 8.06       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet121_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt)  |
+| densenet161 | D910x8-G | 79.09     | 94.66     | 28.90      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet161_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet161-120_5004_Ascend.ckpt)  |
+| densenet169 | D910x8-G | 77.26     | 93.71     | 14.31      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet169_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet169-120_5004_Ascend.ckpt)  |
+| densenet201 | D910x8-G | 78.14     | 94.08     | 20.24      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet201_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet201-120_5004_Ascend.ckpt)  |

</div>

@@ -70,7 +70,7 @@ It is easy to reproduce the reported results with the pre-defined training recip

```shell
# distributed training on multiple GPU/Ascend devices
-mpirun -n 8 python train.py --config configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/imagenet
+mpirun -n 8 python train.py --config configs/densenet/densenet121_ascend.yaml --data_dir /path/to/imagenet
```
> If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`.
@@ -86,7 +86,7 @@ If you want to train or finetune the model on a smaller dataset without distribu

```shell
# standalone training on a CPU/GPU/Ascend device
-python train.py --config configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/dataset --distribute False
+python train.py --config configs/densenet/densenet121_ascend.yaml --data_dir /path/to/dataset --distribute False
```

### Validation
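
As a sketch of this step, assuming the same `validate.py` interface shown in the CrossViT section of this commit, validating a trained checkpoint against the renamed recipe would look like:

```shell
# Hedged sketch: evaluate a trained checkpoint with the renamed densenet121 config.
python validate.py -c configs/densenet/densenet121_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```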
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
12 changes: 6 additions & 6 deletions configs/dpn/README.md
@@ -32,12 +32,12 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

<div align="center">

-| Model  | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe                                                                                     | Download                                                                            |
-|-------|----------|-----------|-----------|------------|------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
-| dpn92 | D910x8-G | 79.46     | 94.49     | 37.79      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml)    | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt)     |
-| dpn98 | D910x8-G | 79.94     | 94.57     | 61.74      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn98_ascend.yaml)    | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn98-119a8207.ckpt)    |
-| dpn107 | D910x8-G | 80.05     | 94.74     | 87.13      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn107_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn107-7d7df07b.ckpt)   |
-| dpn131 | D910x8-G | 80.07     | 94.72     | 79.48      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn131_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn131-47f084b3.ckpt)   |
+| Model   | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe                                                                                     | Download                                                                            |
+|---------|----------|-----------|-----------|------------|------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
+| dpn92   | D910x8-G | 79.46     | 94.49     | 37.79      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml)    | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt)     |
+| dpn98   | D910x8-G | 79.94     | 94.57     | 61.74      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn98_ascend.yaml)    | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn98-119a8207.ckpt)    |
+| dpn107  | D910x8-G | 80.05     | 94.74     | 87.13      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn107_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn107-7d7df07b.ckpt)   |
+| dpn131  | D910x8-G | 80.07     | 94.72     | 79.48      | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn131_ascend.yaml)   | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn131-47f084b3.ckpt)   |

</div>

6 changes: 3 additions & 3 deletions configs/ghostnet/README.md
@@ -29,9 +29,9 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
|--------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|
-| GhostNet_050 | D910x8-G | 66.03     | 86.64     | 2.60       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt)  |
-| GhostNet_100 | D910x8-G | 73.78     | 91.66     | 5.20       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_100_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_100-bef8025a.ckpt)  |
-| GhostNet_130 | D910x8-G | 75.50     | 92.56     | 7.39       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_130_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_130-cf4c235c.ckpt)  |
+| ghostnet_050 | D910x8-G | 66.03     | 86.64     | 2.60       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt)  |
+| ghostnet_100 | D910x8-G | 73.78     | 91.66     | 5.20       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_100_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_100-bef8025a.ckpt)  |
+| ghostnet_130 | D910x8-G | 75.50     | 92.56     | 7.39       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_130_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_130-cf4c235c.ckpt)  |

</div>

4 changes: 2 additions & 2 deletions configs/googlenet/README.md
@@ -14,7 +14,7 @@ training results.[[1](#references)]
<img src="https://user-images.githubusercontent.com/53842165/210749903-5ff23c0e-547f-487d-bb64-70b6e99031ea.jpg" width=180 />
</p>
<p align="center">
-  <em>Figure 1. Architecture of GoogLENet [<a href="#references">1</a>] </em>
+  <em>Figure 1. Architecture of GoogLeNet [<a href="#references">1</a>] </em>
</p>

## Results
@@ -25,7 +25,7 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
|-----------|----------|-----------|-----------|------------|---------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
-| GoogLeNet | D910x8-G | 72.68     | 90.89     | 6.99       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt)  |
+| googlenet | D910x8-G | 72.68     | 90.89     | 6.99       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml)  | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt)  |

</div>
