diff --git a/README.md b/README.md
index d7ebf826..84dbd600 100644
--- a/README.md
+++ b/README.md
@@ -217,7 +217,6 @@ We provide the following jupyter notebook tutorials to help users learn to use M
 - [Finetune a pretrained model on custom datasets](docs/en/tutorials/finetune.md)
 - [Customize your model]() //coming soon
 - [Optimizing performance for vision transformer]() //coming soon
-- [Deployment demo](docs/en/tutorials/deployment.md)

 ## Model List
diff --git a/README_CN.md b/README_CN.md
index b474b09f..bd9f3b0c 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -121,7 +121,7 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg'
 ```shell
 # distributed training
-# assume you have 4 GPU or NPU cards
+# assume you have 4 NPU cards
 msrun --bind_core=True --worker_num 4 python train.py --distribute \
     --model densenet121 --dataset imagenet --data_dir ./datasets/imagenet
 ```
diff --git a/benchmark_results.md b/benchmark_results.md
index 90530c36..276d707b 100644
--- a/benchmark_results.md
+++ b/benchmark_results.md
@@ -2,61 +2,62 @@
 performance tested on Ascend 910(8p) with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params(M) | BatchSize | Recipe | Download |
-| ---------------------- | --------- | --------- | --------- | --------- | ------ | -------- |
-| bit_resnet50 | 76.81 | 93.17 | 25.55 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
-| cmt_small | 83.24 | 96.41 | 26.09 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
-| coat_tiny | 79.67 | 94.88 | 5.50 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
-| convit_tiny | 73.66 | 91.72 | 5.71 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) |
-| convnext_tiny | 81.91 | 95.79 | 28.59 | 16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
-| convnextv2_tiny | 82.43 | 95.98 | 28.64 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |
-| crossvit_9 | 73.56 | 91.79 | 8.55 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) |
-| densenet121 | 75.64 | 92.84 | 8.06 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
-| dpn92 | 79.46 | 94.49 | 37.79 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) |
-| edgenext_xx_small | 71.02 | 89.99 | 1.33 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) |
-| efficientnet_b0 | 76.89 | 93.16 | 5.33 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/efficientnet/efficientnet_b0-103ec70c.ckpt) |
-| ghostnet_050 | 66.03 | 86.64 | 2.60 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
-| googlenet | 72.68 | 90.89 | 6.99 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |
-| halonet_50t | 79.53 | 94.79 | 22.79 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/halonet/halonet_50t_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/halonet/halonet_50t-533da6be.ckpt) |
-| hrnet_w32 | 80.64 | 95.44 | 41.30 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/hrnet/hrnet_w32-cc4fbd91.ckpt) |
-| inception_v3 | 79.11 | 94.40 | 27.20 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |
-| inception_v4 | 80.88 | 95.34 | 42.74 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
-| mixnet_s | 75.52 | 92.52 | 4.17 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) |
-| mnasnet_075 | 71.81 | 90.53 | 3.20 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) |
-| mobilenet_v1_025 | 53.87 | 77.66 | 0.47 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) |
-| mobilenet_v2_075 | 69.98 | 89.32 | 2.66 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) |
-| mobilenet_v3_small_100 | 68.10 | 87.86 | 2.55 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) |
-| mobilenet_v3_large_100 | 75.23 | 92.31 | 5.51 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) |
-| mobilevit_xx_small | 68.91 | 88.91 | 1.27 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) |
-| nasnet_a_4x1056 | 73.65 | 91.25 | 5.33 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-0fbb5cdd.ckpt) |
-| pit_ti | 72.96 | 91.33 | 4.85 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_ti-e647a593.ckpt) |
-| poolformer_s12 | 77.33 | 93.34 | 11.92 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) |
-| pvt_tiny | 74.81 | 92.18 | 13.23 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) |
-| pvt_v2_b0 | 71.50 | 90.60 | 3.67 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) |
-| regnet_x_800mf | 76.04 | 92.97 | 7.26 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) |
-| repmlp_t224 | 76.71 | 93.30 | 38.30 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) |
-| repvgg_a0 | 72.19 | 90.75 | 9.13 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a0-6e71139d.ckpt) |
-| repvgg_a1 | 74.19 | 91.89 | 14.12 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a1-539513ac.ckpt) |
-| res2net50 | 79.35 | 94.64 | 25.76 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net50-f42cf71b.ckpt) |
-| resnest50 | 80.81 | 95.16 | 27.55 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnest/resnest50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnest/resnest50-f2e7fc9c.ckpt) |
-| resnet50 | 76.69 | 93.50 | 25.61 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet50-e0733ab8.ckpt) |
-| resnetv2_50 | 76.90 | 93.37 | 25.60 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnetv2/resnetv2_50-3c2f143b.ckpt) |
-| resnext50_32x4d | 78.53 | 94.10 | 25.10 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext50_32x4d-af8aba16.ckpt) |
-| rexnet_09 | 77.06 | 93.41 | 4.13 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_09-da498331.ckpt) |
-| seresnet18 | 71.81 | 90.49 | 11.80 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet18-7880643b.ckpt) |
-| shufflenet_v1_g3_05 | 57.05 | 79.73 | 0.73 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-42cfe109.ckpt) |
-| shufflenet_v2_x0_5 | 60.53 | 82.11 | 1.37 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-8c841061.ckpt) |
-| skresnet18 | 73.09 | 91.20 | 11.97 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnet18-868228e5.ckpt) |
-| squeezenet1_0 | 59.01 | 81.01 | 1.25 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_0-e2d78c4a.ckpt) |
-| swin_tiny | 80.82 | 94.80 | 33.38 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swin/swin_tiny-0ff2f96d.ckpt) |
-| swinv2_tiny_window8 | 81.42 | 95.43 | 28.78 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-3ef8b787.ckpt) |
-| vgg13 | 72.87 | 91.02 | 133.04 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg13-da805e6e.ckpt) |
-| vgg19 | 75.21 | 92.56 | 143.66 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg19-bedee7b6.ckpt) |
-| visformer_tiny | 78.28 | 94.15 | 10.33 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_tiny-daee0322.ckpt) |
-| vit_b_32_224 | 75.86 | 92.08 | 87.46 | 512 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vit/vit_b32_224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vit/vit_b_32_224-7553218f.ckpt) |
-| volo_d1 | 82.59 | 95.99 | 27 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/volo/volo_d1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/volo/volo_d1-c7efada9.ckpt) |
-| xception | 79.01 | 94.25 | 22.91 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xception/xception_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xception/xception-2c1e711df.ckpt) |
-| xcit_tiny_12_p16_224 | 77.67 | 93.79 | 7.00 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-1b1c9301.ckpt) |
+
+| model | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download |
+| ---------------------- | --------- | --------- | --------- | ---------- | ----- | ------- | --------- | ------ | -------- |
+| bit_resnet50 | 76.81 | 93.17 | 25.55 | 32 | 8 | 74.52 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
+| cmt_small | 83.24 | 96.41 | 26.09 | 128 | 8 | 500.64 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
+| coat_tiny | 79.67 | 94.88 | 5.50 | 32 | 8 | 207.74 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
+| convit_tiny | 73.66 | 91.72 | 5.71 | 256 | 8 | 231.62 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) |
+| convnext_tiny | 81.91 | 95.79 | 28.59 | 16 | 8 | 66.79 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
+| convnextv2_tiny | 82.43 | 95.98 | 28.64 | 128 | 8 | 400.20 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |
+| crossvit_9 | 73.56 | 91.79 | 8.55 | 256 | 8 | 550.79 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) |
+| densenet121 | 75.64 | 92.84 | 8.06 | 32 | 8 | 43.28 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
+| dpn92 | 79.46 | 94.49 | 37.79 | 32 | 8 | 78.22 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) |
+| edgenext_xx_small | 71.02 | 89.99 | 1.33 | 256 | 8 | 191.24 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) |
+| efficientnet_b0 | 76.89 | 93.16 | 5.33 | 128 | 8 | 172.78 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/efficientnet/efficientnet_b0-103ec70c.ckpt) |
+| ghostnet_050 | 66.03 | 86.64 | 2.60 | 128 | 8 | 211.13 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
+| googlenet | 72.68 | 90.89 | 6.99 | 32 | 8 | 21.40 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |
+| halonet_50t | 79.53 | 94.79 | 22.79 | 64 | 8 | 421.66 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/halonet/halonet_50t_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/halonet/halonet_50t-533da6be.ckpt) |
+| hrnet_w32 | 80.64 | 95.44 | 41.30 | 128 | 8 | 279.10 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/hrnet/hrnet_w32-cc4fbd91.ckpt) |
+| inception_v3 | 79.11 | 94.40 | 27.20 | 32 | 8 | 76.42 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |
+| inception_v4 | 80.88 | 95.34 | 42.74 | 32 | 8 | 76.19 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
+| mixnet_s | 75.52 | 92.52 | 4.17 | 128 | 8 | 252.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) |
+| mnasnet_075 | 71.81 | 90.53 | 3.20 | 256 | 8 | 165.43 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) |
+| mobilenet_v1_025 | 53.87 | 77.66 | 0.47 | 64 | 8 | 42.43 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) |
+| mobilenet_v2_075 | 69.98 | 89.32 | 2.66 | 256 | 8 | 155.94 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) |
+| mobilenet_v3_small_100 | 68.10 | 87.86 | 2.55 | 75 | 8 | 48.14 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) |
+| mobilenet_v3_large_100 | 75.23 | 92.31 | 5.51 | 75 | 8 | 47.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) |
+| mobilevit_xx_small | 68.91 | 88.91 | 1.27 | 64 | 8 | 53.52 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) |
+| nasnet_a_4x1056 | 73.65 | 91.25 | 5.33 | 256 | 8 | 330.89 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-0fbb5cdd.ckpt) |
+| pit_ti | 72.96 | 91.33 | 4.85 | 128 | 8 | 271.50 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_ti-e647a593.ckpt) |
+| poolformer_s12 | 77.33 | 93.34 | 11.92 | 128 | 8 | 220.13 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) |
+| pvt_tiny | 74.81 | 92.18 | 13.23 | 128 | 8 | 229.63 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) |
+| pvt_v2_b0 | 71.50 | 90.60 | 3.67 | 128 | 8 | 269.38 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) |
+| regnet_x_800mf | 76.04 | 92.97 | 7.26 | 64 | 8 | 42.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) |
+| repmlp_t224 | 76.71 | 93.30 | 38.30 | 128 | 8 | 578.23 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) |
+| repvgg_a0 | 72.19 | 90.75 | 9.13 | 32 | 8 | 20.58 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a0-6e71139d.ckpt) |
+| repvgg_a1 | 74.19 | 91.89 | 14.12 | 32 | 8 | 20.70 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a1-539513ac.ckpt) |
+| res2net50 | 79.35 | 94.64 | 25.76 | 32 | 8 | 39.68 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net50-f42cf71b.ckpt) |
+| resnest50 | 80.81 | 95.16 | 27.55 | 128 | 8 | 244.92 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnest/resnest50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnest/resnest50-f2e7fc9c.ckpt) |
+| resnet50 | 76.69 | 93.50 | 25.61 | 32 | 8 | 31.41 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet50-e0733ab8.ckpt) |
+| resnetv2_50 | 76.90 | 93.37 | 25.60 | 32 | 8 | 32.66 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnetv2/resnetv2_50-3c2f143b.ckpt) |
+| resnext50_32x4d | 78.53 | 94.10 | 25.10 | 32 | 8 | 37.22 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext50_32x4d-af8aba16.ckpt) |
+| rexnet_09 | 77.06 | 93.41 | 4.13 | 64 | 8 | 130.10 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_09-da498331.ckpt) |
+| seresnet18 | 71.81 | 90.49 | 11.80 | 64 | 8 | 44.40 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet18-7880643b.ckpt) |
+| shufflenet_v1_g3_05 | 57.05 | 79.73 | 0.73 | 64 | 8 | 40.62 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-42cfe109.ckpt) |
+| shufflenet_v2_x0_5 | 60.53 | 82.11 | 1.37 | 64 | 8 | 41.87 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-8c841061.ckpt) |
+| skresnet18 | 73.09 | 91.20 | 11.97 | 64 | 8 | 45.84 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnet18-868228e5.ckpt) |
+| squeezenet1_0 | 59.01 | 81.01 | 1.25 | 32 | 8 | 22.36 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_0-e2d78c4a.ckpt) |
+| swin_tiny | 80.82 | 94.80 | 33.38 | 256 | 8 | 454.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swin/swin_tiny-0ff2f96d.ckpt) |
+| swinv2_tiny_window8 | 81.42 | 95.43 | 28.78 | 128 | 8 | 317.19 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-3ef8b787.ckpt) |
+| vgg13 | 72.87 | 91.02 | 133.04 | 32 | 8 | 55.20 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg13-da805e6e.ckpt) |
+| vgg19 | 75.21 | 92.56 | 143.66 | 32 | 8 | 67.42 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg19-bedee7b6.ckpt) |
+| visformer_tiny | 78.28 | 94.15 | 10.33 | 128 | 8 | 217.92 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_tiny-daee0322.ckpt) |
+| vit_b_32_224 | 75.86 | 92.08 | 87.46 | 512 | 8 | 454.57 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vit/vit_b32_224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vit/vit_b_32_224-7553218f.ckpt) |
+| volo_d1 | 82.59 | 95.99 | 27 | 128 | 8 | 270.79 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/volo/volo_d1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/volo/volo_d1-c7efada9.ckpt) |
+| xception | 79.01 | 94.25 | 22.91 | 32 | 8 | 92.78 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xception/xception_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xception/xception-2c1e711df.ckpt) |
+| xcit_tiny_12_p16_224 | 77.67 | 93.79 | 7.00 | 128 | 8 | 252.98 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-1b1c9301.ckpt) |
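The new `batch size`, `cards`, and `ms/step` columns make end-to-end throughput easy to derive, which the table itself does not list. A minimal sketch of the arithmetic (the `throughput` helper and the choice of the resnet50 row are ours; we assume, as is conventional in these tables, that the batch size is per card):

```python
def throughput(batch_size: int, cards: int, ms_per_step: float) -> float:
    """Total images processed per second across all cards.

    One training step processes `batch_size` images on each of `cards`
    devices and takes `ms_per_step` milliseconds.
    """
    return batch_size * cards * 1000.0 / ms_per_step

# resnet50 row from the table above: batch size 32, 8 cards, 31.41 ms/step
print(round(throughput(32, 8, 31.41)))  # ≈ 8150 img/s
```

For the resnet50 row this gives roughly 8150 images per second over the 8 NPUs; comparing rows this way is fairer than comparing raw ms/step when batch sizes differ.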
@@ -64,50 +65,51 @@ performance tested on Ascend 910*(8p) with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params(M) | BatchSize | Recipe | Download |
-| ---------------------- | --------- | --------- | ------- | --------- | --------- | ------ | -------- |
-| convit_tiny | 73.79 | 91.70 | 342.81 | 5.71 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convit/convit_tiny-1961717e-910v2.ckpt) |
-| convnext_tiny | 81.28 | 95.61 | 54.08 | 28.59 | 16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnext/convnext_tiny-db11dc82-910v2.ckpt) |
-| convnextv2_tiny | 82.39 | 95.95 | 360.29 | 28.64 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-a35b79ce-910v2.ckpt) |
-| crossvit_9 | 73.38 | 91.51 | 711.19 | 8.55 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/crossvit/crossvit_9-32c69c96-910v2.ckpt) |
-| densenet121 | 75.67 | 92.77 | 50.55 | 8.06 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |
-| edgenext_xx_small | 70.64 | 89.75 | 295.88 | 1.33 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/edgenext/edgenext_xx_small-cad13d2c-910v2.ckpt) |
-| efficientnet_b0 | 76.88 | 93.28 | 168.78 | 5.33 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/efficientnet/efficientnet_b0-f8d7aa2a-910v2.ckpt) |
-| googlenet | 72.89 | 90.89 | 24.29 | 6.99 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/googlenet/googlenet-de74c31d-910v2.ckpt) |
-| hrnet_w32 | 80.66 | 95.30 | 303.01 | 41.30 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/hrnet/hrnet_w32-e616cdcb-910v2.ckpt) |
-| inception_v3 | 79.25 | 94.47 | 79.87 | 27.20 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v3/inception_v3-61a8e9ed-910v2.ckpt) |
-| inception_v4 | 80.98 | 95.25 | 84.59 | 42.74 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v4/inception_v4-56e798fc-910v2.ckpt) |
-| mixnet_s | 75.58 | 95.54 | 306.16 | 4.17 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mixnet/mixnet_s-fe4fcc63-910v2.ckpt) |
-| mnasnet_075 | 71.77 | 90.52 | 177.22 | 3.20 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mnasnet/mnasnet_075-083b2bc4-910v2.ckpt) |
-| mobilenet_v1_025 | 54.05 | 77.74 | 43.85 | 0.47 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-cbe3d3b3-910v2.ckpt) |
-| mobilenet_v2_075 | 69.73 | 89.35 | 170.41 | 2.66 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-755932c4-910v2.ckpt) |
-| mobilenet_v3_small_100 | 68.07 | 87.77 | 51.97 | 2.55 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-6fa3c17d-910v2.ckpt) |
-| mobilenet_v3_large_100 | 75.59 | 92.57 | 52.55 | 5.51 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-bd4e7bdc-910v2.ckpt) |
-| mobilevit_xx_small | 67.11 | 87.85 | 64.91 | 1.27 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-6f2745c3-910v2.ckpt) |
-| nasnet_a_4x1056 | 74.12 | 91.36 | 401.34 | 5.33 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-015ba575c-910v2.ckpt) |
-| pit_ti | 73.26 | 91.57 | 343.45 | 4.85 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pit/pit_ti-33466a0d-910v2.ckpt) |
-| poolformer_s12 | 77.49
| 93.55 | 294.54 | 11.92 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/poolformer/poolformer_s12-c7e14eea-910v2.ckpt) | -| pvt_tiny | 74.88 | 92.12 | 308.02 | 13.23 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt/pvt_tiny-6676051f-910v2.ckpt) | -| pvt_v2_b0 | 71.25 | 90.50 | 343.22 | 3.67 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-d9cd9d6a-910v2.ckpt) | -| regnet_x_800mf | 76.11 | 93.00 | 50.29 | 7.26 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/regnet/regnet_x_800mf-68fe1cca-910v2.ckpt) | -| repvgg_a0 | 72.29 | 90.78 | 25.14 | 9.13 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a0-b67a9f15-910v2.ckpt) | -| repvgg_a1 | 73.68 | 91.51 | 31.78 | 14.12 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a1-a40aa623-910v2.ckpt) | -| res2net50 | 79.33 | 94.64 | 43.22 | 25.76 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/res2net/res2net50-aa758355-910v2.ckpt) | -| resnet50 | 76.76 | 93.31 | 32.96 | 25.61 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnet/resnet50-f369a08d-910v2.ckpt) | -| resnetv2_50 | 77.03 | 93.29 | 33.83 | 25.60 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnetv2/resnetv2_50-a0b9f7f8-910v2.ckpt) | -| resnext50_32x4d | 78.64 | 94.18 | 46.18 | 25.10 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnext/resnext50_32x4d-988f75bc-910v2.ckpt) | -| rexnet_09 | 76.14 | 92.96 | 142.77 | 4.13 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/rexnet/rexnet_09-00223eb4-910v2.ckpt) | -| seresnet18 | 72.05 | 90.59 | 48.72 | 11.80 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/senet/seresnet18-7b971c78-910v2.ckpt) | -| shufflenet_v1_g3_05 | 57.08 | 79.89 | 45.44 | 0.73 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-56209ef3-910v2.ckpt) | -| shufflenet_v2_x0_5 | 60.65 | 82.26 | 47.18 | 1.37 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-39d05bb6-910v2.ckpt) | -| skresnet18 | 72.85 | 90.83 | 48.35 | 11.97 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/sknet/skresnet18-9d8b1afc-910v2.ckpt) | -| squeezenet1_0 | 58.75 | 80.76 | 24.28 | 1.25 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/squeezenet/squeezenet1_0-24010b28-910v2.ckpt) | -| swin_tiny | 80.90 | 94.90 | 637.41 | 33.38 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swin/swin_tiny-72b3c5e6-910v2.ckpt) | -| swinv2_tiny_window8 | 81.38 | 95.46 | 380.93 | 28.78 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-70c5e903-910v2.ckpt) | -| vgg13 | 72.81 | 91.02 | 30.97 | 133.04 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg13-7756f33c-910v2.ckpt) | -| vgg19 | 75.24 | 92.55 | 40.02 | 143.66 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg19-5104d1ea-910v2.ckpt) | -| visformer_tiny | 78.40 | 94.30 | 311.34 | 10.33 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/visformer/visformer_tiny-df995ba4-910v2.ckpt) | -| xcit_tiny_12_p16_224 | 77.27 | 93.56 | 320.25 | 7.00 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-bd90776e-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | 
params(M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------------- | --------- | --------- | --------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | +| convit_tiny | 73.79 | 91.70 | 5.71 | 256 | 8 | 226.51 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convit/convit_tiny-1961717e-910v2.ckpt) | +| convnext_tiny | 81.28 | 95.61 | 28.59 | 16 | 8 | 48.7 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnext/convnext_tiny-db11dc82-910v2.ckpt) | +| convnextv2_tiny | 82.39 | 95.95 | 28.64 | 128 | 8 | 257.2 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-a35b79ce-910v2.ckpt) | +| crossvit_9 | 73.38 | 91.51 | 8.55 | 256 | 8 | 514.36 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/crossvit/crossvit_9-32c69c96-910v2.ckpt) | +| densenet121 | 75.67 | 92.77 | 8.06 | 32 | 8 | 47.34 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) | +| edgenext_xx_small | 70.64 | 89.75 | 1.33 | 256 | 8 | 239.38 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/edgenext/edgenext_xx_small-cad13d2c-910v2.ckpt) | +| efficientnet_b0 | 76.88 | 93.28 | 5.33 | 128 | 8 | 172.64 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/efficientnet/efficientnet_b0-f8d7aa2a-910v2.ckpt) | +| googlenet | 72.89 | 90.89 | 6.99 | 32 | 8 | 23.5 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/googlenet/googlenet-de74c31d-910v2.ckpt) | +| hrnet_w32 | 80.66 | 95.30 | 41.30 | 128 | 8 | 238.03 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/hrnet/hrnet_w32-e616cdcb-910v2.ckpt) | +| inception_v3 | 79.25 | 94.47 | 27.20 | 32 | 8 | 70.83 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v3/inception_v3-61a8e9ed-910v2.ckpt) | +| inception_v4 | 80.98 | 95.25 | 42.74 | 32 | 8 | 80.97 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v4/inception_v4-56e798fc-910v2.ckpt) | +| mixnet_s | 75.58 | 95.54 | 4.17 | 128 | 8 | 228.03 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mixnet/mixnet_s-fe4fcc63-910v2.ckpt) | +| mnasnet_075 | 71.77 | 90.52 | 3.20 | 256 | 8 | 175.85 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mnasnet/mnasnet_075-083b2bc4-910v2.ckpt) | +| mobilenet_v1_025 | 54.05 | 77.74 | 0.47 | 64 | 8 | 47.47 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-cbe3d3b3-910v2.ckpt) | +| mobilenet_v2_075 | 69.73 | 89.35 | 2.66 | 256 | 8 | 174.65 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-755932c4-910v2.ckpt) | +| mobilenet_v3_small_100 | 68.07 | 87.77 | 2.55 | 75 | 8 | 52.38 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-6fa3c17d-910v2.ckpt) | +| mobilenet_v3_large_100 | 75.59 | 92.57 | 5.51 | 75 | 8 | 55.89 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-bd4e7bdc-910v2.ckpt) | +| mobilevit_xx_small | 67.11 | 87.85 | 1.27 | 64 | 8 | 67.24 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-6f2745c3-910v2.ckpt) | +| nasnet_a_4x1056 | 74.12 | 91.36 | 5.33 | 256 | 8 | 364.35 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-015ba575c-910v2.ckpt) | +| pit_ti | 73.26 | 91.57 | 4.85 | 128 | 8 | 266.47 | O2 | 
[yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pit/pit_ti-33466a0d-910v2.ckpt) | +| poolformer_s12 | 77.49 | 93.55 | 11.92 | 128 | 8 | 211.81 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/poolformer/poolformer_s12-c7e14eea-910v2.ckpt) | +| pvt_tiny | 74.88 | 92.12 | 13.23 | 128 | 8 | 237.5 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt/pvt_tiny-6676051f-910v2.ckpt) | +| pvt_v2_b0 | 71.25 | 90.50 | 3.67 | 128 | 8 | 255.76 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-d9cd9d6a-910v2.ckpt) | +| regnet_x_800mf | 76.11 | 93.00 | 7.26 | 64 | 8 | 50.74 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/regnet/regnet_x_800mf-68fe1cca-910v2.ckpt) | +| repvgg_a0 | 72.29 | 90.78 | 9.13 | 32 | 8 | 24.12 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a0-b67a9f15-910v2.ckpt) | +| repvgg_a1 | 73.68 | 91.51 | 14.12 | 32 | 8 | 28.29 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a1-a40aa623-910v2.ckpt) | +| res2net50 | 79.33 | 94.64 | 25.76 | 32 | 8 | 39.6 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/res2net/res2net50-aa758355-910v2.ckpt) | +| resnet50 | 76.76 | 93.31 | 25.61 | 32 | 8 | 31.9 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnet/resnet50-f369a08d-910v2.ckpt) | +| resnetv2_50 | 77.03 | 93.29 | 25.60 | 32 | 8 | 32.19 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnetv2/resnetv2_50-a0b9f7f8-910v2.ckpt) | +| resnext50_32x4d | 78.64 | 94.18 | 25.10 | 32 | 8 | 44.61 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnext/resnext50_32x4d-988f75bc-910v2.ckpt) | +| rexnet_09 | 76.14 | 92.96 | 4.13 | 64 | 8 | 115.61 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/rexnet/rexnet_09-00223eb4-910v2.ckpt) | +| seresnet18 | 72.05 | 90.59 | 11.80 | 64 | 8 | 51.09 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/senet/seresnet18-7b971c78-910v2.ckpt) | +| shufflenet_v1_g3_05 | 57.08 | 79.89 | 0.73 | 64 | 8 | 47.77 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-56209ef3-910v2.ckpt) | +| shufflenet_v2_x0_5 | 60.65 | 82.26 | 1.37 | 64 | 8 | 47.32 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-39d05bb6-910v2.ckpt) | +| skresnet18 | 72.85 | 90.83 | 11.97 | 64 | 8 | 49.83 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/sknet/skresnet18-9d8b1afc-910v2.ckpt) | +| squeezenet1_0 | 58.75 | 80.76 | 1.25 | 32 | 8 | 23.48 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/squeezenet/squeezenet1_0-24010b28-910v2.ckpt) | +| swin_tiny | 80.90 | 94.90 | 33.38 | 256 | 8 | 466.6 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swin/swin_tiny-72b3c5e6-910v2.ckpt) | +| swinv2_tiny_window8 | 81.38 | 95.46 | 28.78 | 128 | 8 | 335.18 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-70c5e903-910v2.ckpt) | +| vgg13 | 72.81 | 91.02 | 133.04 | 32 | 8 | 30.52 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg13-7756f33c-910v2.ckpt) | +| vgg19 | 75.24 | 92.55 | 143.66 | 32 | 8 | 39.17 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg19-5104d1ea-910v2.ckpt) | +| visformer_tiny | 78.40 | 94.30 | 10.33 | 128 | 8 | 201.14 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | 
[weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/visformer/visformer_tiny-df995ba4-910v2.ckpt) | +| xcit_tiny_12_p16_224 | 77.27 | 93.56 | 7.00 | 128 | 8 | 229.25 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-bd90776e-910v2.ckpt) | diff --git a/configs/README.md b/configs/README.md index 5e4fd655..03f18982 100644 --- a/configs/README.md +++ b/configs/README.md @@ -33,17 +33,20 @@ Please follow the outline structure and **table format** shown in [densenet/READ
-| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download | -|--------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------| -| densenet_121 | D910x8-G | 75.64 | 92.84 | 8.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) | +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | +| densenet121 | 75.67 | 92.77 | 8.06 | 32 | 8 | 47,34 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |
Illustration: - Model: model name in lower case with _ seperator. -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validatoin set of ImageNet-1K. Keep 2 digits after the decimal point. - Params (M): # of model parameters in millions (10^6). Keep **2 digits** after the decimal point +- Batch Size: Training batch size +- Cards: # of cards +- Ms/step: Time used on training per step in ms +- Jit_level: Jit level of mindspore context, which contains 3 levels: O0(kbk mode)/O1(dvm mode)/O2(ge mode) - Recipe: Training recipe/configuration linked to a yaml config file. - Download: url of the pretrained model weights @@ -62,10 +65,10 @@ Illustration: For consistency, it is recommended to provide distributed training commands based on `msrun --bind_core=True --worker_num {num_devices} python train.py`, instead of using shell script such as `distrubuted_train.sh`. ```shell - # standalone training on a gpu or ascend device + # standalone training on single NPU device python train.py --config configs/densenet/densenet_121_gpu.yaml --data_dir /path/to/dataset --distribute False - # distributed training on gpu or ascend divices + # distributed training on NPU divices msrun --bind_core=True --worker_num 8 python train.py --config configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/imagenet ``` diff --git a/configs/bit/README.md b/configs/bit/README.md index bb09f71a..075e8359 100644 --- a/configs/bit/README.md +++ b/configs/bit/README.md @@ -17,25 +17,24 @@ too low. 5) With BiT fine-tuning, good performance can be achieved even if there Our reproduced model performance on ImageNet-1K is reported as follows. 
-performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode *coming soon* -performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params(M) | Batch Size | Recipe | Download | -| ------------ | --------- | --------- | --------- | ---------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | -| bit_resnet50 | 76.81 | 93.17 | 25.55 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) | + +| model | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------ | --------- | --------- | --------- | ---------- | ----- |---------| --------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | +| bit_resnet50 | 76.81 | 93.17 | 25.55 | 32 | 8 | 74.52 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -44,7 +43,7 @@ performance tested on ascend 910(8p) with graph mode #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation @@ -57,11 +56,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -72,7 +70,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -84,10 +82,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. - ## References diff --git a/configs/cmt/README.md b/configs/cmt/README.md index 4c2dd2fb..e531d53d 100644 --- a/configs/cmt/README.md +++ b/configs/cmt/README.md @@ -14,24 +14,23 @@ on ImageNet-1K dataset. Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode *coming soon* -performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params(M) | Batch Size | Recipe | Download | -| --------- | --------- | --------- | --------- | ---------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | -| cmt_small | 83.24 | 96.41 | 26.09 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) | + +| model | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download | +| --------- | --------- | --------- | --------- | ---------- | ----- |---------| --------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | +| cmt_small | 83.24 | 96.41 | 26.09 | 128 | 8 | 500.64 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -40,7 +39,7 @@ performance tested on ascend 910(8p) with graph mode #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation @@ -53,11 +52,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -68,7 +66,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -80,10 +78,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/). - ## References diff --git a/configs/coat/README.md b/configs/coat/README.md index cef0f69b..a78b3d01 100644 --- a/configs/coat/README.md +++ b/configs/coat/README.md @@ -10,23 +10,23 @@ Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode *coming soon* -performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Weight | -| --------- | --------- | --------- | ---------- | ---------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- | -| coat_tiny | 79.67 | 94.88 | 5.50 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | Weight | +| --------- | --------- | --------- | ---------- | ---------- | ----- |---------| --------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- | +| coat_tiny | 79.67 | 94.88 | 5.50 | 32 | 8 | 254.95 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
#### Notes -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. @@ -35,7 +35,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -47,12 +47,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -63,7 +62,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a single NPU device python train.py --config configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -75,10 +74,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -To deploy online inference services with the trained model efficiently, please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/). - ## References [1] Han D, Yun S, Heo B, et al. Rethinking channel dimensions for efficient model design[C]//Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition. 2021: 732-741. diff --git a/configs/convit/README.md b/configs/convit/README.md index 3ac41cac..c322cbb4 100644 --- a/configs/convit/README.md +++ b/configs/convit/README.md @@ -24,30 +24,30 @@ while offering a much improved sample efficiency.[[1](#references)] Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ----------- | --------- | --------- | ------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- | -| convit_tiny | 73.79 | 91.70 | 342.81 | 5.71 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convit/convit_tiny-1961717e-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- | +| convit_tiny | 73.79 | 91.70 | 5.71 | 256 | 8 | 226.51 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convit/convit_tiny-1961717e-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ----------- | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- | -| convit_tiny | 73.66 | 91.72 | 5.71 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- | +| convit_tiny | 73.66 | 91.72 | 5.71 | 256 | 8 | 231.62 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -55,7 +55,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -67,11 +67,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/convit/convit_tiny_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -82,7 +81,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a single NPU device python train.py --config configs/convit/convit_tiny_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -94,10 +93,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/convit/convit_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. - ## References diff --git a/configs/convnext/README.md b/configs/convnext/README.md index ad64d9bb..d5bfcca9 100644 --- a/configs/convnext/README.md +++ b/configs/convnext/README.md @@ -21,31 +21,31 @@ simplicity and efficiency of standard ConvNets.[[1](#references)] Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ------------- | --------- | --------- | ------- | ---------- | ---------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -| convnext_tiny | 81.28 | 95.61 | 54.08 | 28.59 | 16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnext/convnext_tiny-db11dc82-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------- | --------- | --------- | ---------- | ---------- | ----- |---------| --------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | +| convnext_tiny | 81.28 | 95.61 | 28.59 | 16 | 8 | 48.7 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnext/convnext_tiny-db11dc82-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ------------- | --------- | --------- | ---------- | ---------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- | -| convnext_tiny | 81.91 | 95.79 | 28.59 | 16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------- | --------- | --------- | ---------- | ---------- | ----- |---------| --------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- | +| convnext_tiny | 81.91 | 95.79 | 28.59 | 16 | 8 | 66.79 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -53,7 +53,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -65,12 +65,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/convnext/convnext_tiny_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -81,7 +80,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a single NPU device python train.py --config configs/convnext/convnext_tiny_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -93,10 +92,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/convnext/convnext_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. - ## References [1] Liu Z, Mao H, Wu C Y, et al. A convnet for the 2020s[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11976-11986. diff --git a/configs/convnextv2/README.md b/configs/convnextv2/README.md index 4f7dcd38..7deb007a 100644 --- a/configs/convnextv2/README.md +++ b/configs/convnextv2/README.md @@ -20,29 +20,28 @@ benchmarks, including ImageNet classification, COCO detection, and ADE20K segmen Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode - +- ascend 910* with graph mode +
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| --------------- | --------- | --------- | ------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | -| convnextv2_tiny | 82.39 | 95.95 | 360.29 | 28.64 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-a35b79ce-910v2.ckpt) | +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| --------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | +| convnextv2_tiny | 82.39 | 95.95 | 28.64 | 128 | 8 | 257.2 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-a35b79ce-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| --------------- | --------- | --------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- | -| convnextv2_tiny | 82.43 | 95.98 | 28.64 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| --------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- | +| convnextv2_tiny | 82.43 | 95.98 | 28.64 | 128 | 8 | 400.20 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -50,7 +49,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -62,12 +61,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/convnextv2/convnextv2_tiny_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -78,7 +76,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a single NPU device python train.py --config configs/convnextv2/convnextv2_tiny_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -90,10 +88,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/convnextv2/convnextv2_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. - ## References [1] Woo S, Debnath S, Hu R, et al. ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders[J]. arXiv preprint arXiv:2301.00808, 2023. diff --git a/configs/crossvit/README.md b/configs/crossvit/README.md index 1c8c130e..a1aa17a8 100644 --- a/configs/crossvit/README.md +++ b/configs/crossvit/README.md @@ -19,29 +19,29 @@ Fusion is achieved by an efficient cross-attention module, in which each transfo Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ---------- | --------- | --------- | ------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -| crossvit_9 | 73.38 | 91.51 | 711.19 | 8.55 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/crossvit/crossvit_9-32c69c96-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | +| crossvit_9 | 73.38 | 91.51 | 8.55 | 256 | 8 | 514.36 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/crossvit/crossvit_9-32c69c96-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ---------- | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | -| crossvit_9 | 73.56 | 91.79 | 8.55 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| crossvit_9 | 73.56 | 91.79 | 8.55 | 256 | 8 | 550.79 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -49,7 +49,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -61,11 +61,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/crossvit/crossvit_15_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -76,7 +75,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a single NPU device python train.py --config configs/crossvit/crossvit_15_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -88,10 +87,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/crossvit/crossvit_15_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. - ## References diff --git a/configs/densenet/README.md b/configs/densenet/README.md index ffa1cdef..668b5111 100644 --- a/configs/densenet/README.md +++ b/configs/densenet/README.md @@ -26,7 +26,6 @@ propagation, encourage feature reuse, and substantially reduce the number of par diff --git a/configs/dpn/README.md b/configs/dpn/README.md index fc742004..fe8883d0 100644 --- a/configs/dpn/README.md +++ b/configs/dpn/README.md @@ -21,7 +21,6 @@ fewer computation cost compared with ResNet and DenseNet on ImageNet-1K dataset. diff --git a/configs/edgenext/README.md b/configs/edgenext/README.md index 89c1516b..0be42e16 100644 --- a/configs/edgenext/README.md +++ b/configs/edgenext/README.md @@ -21,31 +21,31 @@ to implicitly increase the receptive field and encode multi-scale features.[[1]( Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ----------------- | --------- | --------- | ------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | -| edgenext_xx_small | 70.64 | 89.75 | 295.88 | 1.33 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/edgenext/edgenext_xx_small-cad13d2c-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | +| edgenext_xx_small | 70.64 | 89.75 | 1.33 | 256 | 8 | 239.38 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/edgenext/edgenext_xx_small-cad13d2c-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ----------------- | --------- | --------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- | -| edgenext_xx_small | 71.02 | 89.99 | 1.33 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- | +| edgenext_xx_small | 71.02 | 89.99 | 1.33 | 256 | 8 | 191.24 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -54,7 +54,7 @@ performance tested on ascend 910(8p) with graph mode #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation @@ -67,12 +67,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/edgenext/edgenext_small_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -83,7 +82,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on a single NPU device python train.py --config configs/edgenext/edgenext_small_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -95,10 +94,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/edgenext/edgenext_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. - ## References diff --git a/configs/efficientnet/README.md b/configs/efficientnet/README.md index ed9da8c4..a0e6eb00 100644 --- a/configs/efficientnet/README.md +++ b/configs/efficientnet/README.md @@ -22,7 +22,6 @@ and resolution scaling could be found. EfficientNet could achieve better model p diff --git a/configs/ghostnet/README.md b/configs/ghostnet/README.md index 2db1f713..e33a6d3b 100644 --- a/configs/ghostnet/README.md +++ b/configs/ghostnet/README.md @@ -25,24 +25,23 @@ dataset.[[1](#references)] Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode *coming soon* -performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ------------ | --------- | --------- | ---------- | ---------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- | -| ghostnet_050 | 66.03 | 86.64 | 2.60 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- | +| ghostnet_050 | 66.03 | 86.64 | 2.60 | 128 | 8 | 211.13 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -50,7 +49,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -62,13 +61,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/ghostnet/ghostnet_100_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -79,7 +77,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/ghostnet/ghostnet_100_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -91,10 +89,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/ghostnet/ghostnet_100_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
-
## References

[1] Han K, Wang Y, Tian Q, et al. Ghostnet: More features from cheap operations[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 1580-1589.

diff --git a/configs/googlenet/README.md b/configs/googlenet/README.md
index 25b13850..aaf32c2e 100644
--- a/configs/googlenet/README.md
+++ b/configs/googlenet/README.md
@@ -21,30 +21,30 @@ training results.[[1](#references)]

Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| --------- | --------- | --------- | ------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -| googlenet | 72.89 | 90.89 | 24.29 | 6.99 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/googlenet/googlenet-de74c31d-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| --------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | +| googlenet | 72.89 | 90.89 | 6.99 | 32 | 8 | 23.5 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/googlenet/googlenet-de74c31d-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| --------- | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | -| googlenet | 72.68 | 90.89 | 6.99 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| --------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| googlenet | 72.68 | 90.89 | 6.99 | 32 | 8 | 21.40 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -52,7 +52,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -64,13 +64,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/googlenet/googlenet_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -81,7 +80,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/googlenet/googlenet_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -93,10 +92,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/googlenet/googlenet_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
-
## References

[1] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 1-9.

diff --git a/configs/halonet/README.md b/configs/halonet/README.md
index 6b68dbf2..6130e6dc 100644
--- a/configs/halonet/README.md
+++ b/configs/halonet/README.md
@@ -29,23 +29,22 @@ Down Sampling:In order to reduce the amount of computation, each block is samp

Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode

*coming soon*

-performance tested on ascend 910(8p) with graph mode
+- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ----------- | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | -| halonet_50t | 79.53 | 94.79 | 22.79 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/halonet/halonet_50t_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/halonet/halonet_50t-533da6be.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| halonet_50t | 79.53 | 94.79 | 22.79 | 64 | 8 | 421.66 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/halonet/halonet_50t_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/halonet/halonet_50t-533da6be.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -53,7 +52,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -65,13 +64,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/halonet/halonet_50t_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -82,7 +80,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/halonet/halonet_50t_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -94,10 +92,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/halonet/halonet_50t_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
-
## References

[1] Vaswani A, Ramachandran P, Srinivas A, et al. Scaling local self-attention for parameter efficient visual backbones[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12894-12904.

diff --git a/configs/hrnet/README.md b/configs/hrnet/README.md
index 9e7aeb2c..19ff75d8 100644
--- a/configs/hrnet/README.md
+++ b/configs/hrnet/README.md
@@ -21,7 +21,6 @@ High-resolution representations are essential for position-sensitive vision prob

diff --git a/configs/inceptionv3/README.md b/configs/inceptionv3/README.md
index 2ebddbf9..81548c34 100644
--- a/configs/inceptionv3/README.md
+++ b/configs/inceptionv3/README.md
@@ -22,30 +22,30 @@ regularization and effectively reduces overfitting.[[1](#references)]

Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ------------ | --------- | --------- | ------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- | -| inception_v3 | 79.25 | 94.47 | 79.87 | 27.20 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v3/inception_v3-61a8e9ed-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- | +| inception_v3 | 79.25 | 94.47 | 27.20 | 32 | 8 | 70.83 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v3/inception_v3-61a8e9ed-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ------------ | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ | -| inception_v3 | 79.11 | 94.40 | 27.20 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ | +| inception_v3 | 79.11 | 94.40 | 27.20 | 32 | 8 | 76.42 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -53,7 +53,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -65,13 +65,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/inceptionv3/inception_v3_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -82,7 +81,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/inceptionv3/inception_v3_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -94,10 +93,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/inceptionv3/inception_v3_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
-
## References

[1] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 2818-2826.

diff --git a/configs/inceptionv4/README.md b/configs/inceptionv4/README.md
index c76c5dae..bb1534c5 100644
--- a/configs/inceptionv4/README.md
+++ b/configs/inceptionv4/README.md
@@ -19,29 +19,29 @@ performance with Inception-ResNet v2.[[1](#references)]

Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ------------ | --------- | --------- | ------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- | -| inception_v4 | 80.98 | 95.25 | 84.59 | 42.74 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v4/inception_v4-56e798fc-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- | +| inception_v4 | 80.98 | 95.25 | 42.74 | 32 | 8 | 80.97 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v4/inception_v4-56e798fc-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ------------ | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ | -| inception_v4 | 80.88 | 95.34 | 42.74 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ | +| inception_v4 | 80.88 | 95.34 | 42.74 | 32 | 8 | 76.19 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -49,7 +49,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -61,13 +61,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/inceptionv4/inception_v4_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -78,7 +77,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/inceptionv4/inception_v4_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -90,9 +89,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/inceptionv4/inception_v4_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.

## References

diff --git a/configs/mixnet/README.md b/configs/mixnet/README.md
index 001364d3..6ca7df11 100644
--- a/configs/mixnet/README.md
+++ b/configs/mixnet/README.md
@@ -21,31 +21,31 @@ and efficiency for existing MobileNets on both ImageNet classification and COCO

Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| -------- | --------- | --------- | ------- | ---------- | ---------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | -| mixnet_s | 75.58 | 95.54 | 306.16 | 4.17 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mixnet/mixnet_s-fe4fcc63-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| -------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | +| mixnet_s | 75.58 | 95.54 | 4.17 | 128 | 8 | 228.03 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mixnet/mixnet_s-fe4fcc63-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| -------- | --------- | --------- | ---------- | ---------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- | -| mixnet_s | 75.52 | 92.52 | 4.17 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| -------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- | +| mixnet_s | 75.52 | 92.52 | 4.17 | 128 | 8 | 252.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) |
#### Notes
-
-- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode.
- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

## Quick Start

@@ -53,7 +53,7 @@ performance tested on ascend 910(8p) with graph mode
### Preparation

#### Installation
-Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.

#### Dataset Preparation
Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation.

@@ -71,7 +71,6 @@ msrun --bind_core=True --worker_num 8 python train.py --config configs/mixnet/mi


-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.
For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).

@@ -82,7 +81,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/mixnet/mixnet_s_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -94,9 +93,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/mixnet/mixnet_s_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
## References diff --git a/configs/mnasnet/README.md b/configs/mnasnet/README.md index 4b818311..99ec3ff2 100644 --- a/configs/mnasnet/README.md +++ b/configs/mnasnet/README.md @@ -16,31 +16,31 @@ Designing convolutional neural networks (CNN) for mobile devices is challenging Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ----------- | --------- | --------- | ------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -| mnasnet_075 | 71.77 | 90.52 | 177.22 | 3.20 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mnasnet/mnasnet_075-083b2bc4-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | -------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | +| mnasnet_075 | 71.77 | 90.52 | 3.20 | 256 | 8 | 175.85 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mnasnet/mnasnet_075-083b2bc4-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ----------- | --------- | --------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | -| mnasnet_075 | 71.81 | 90.53 | 3.20 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| mnasnet_075 | 71.81 | 90.53 | 3.20 | 256 | 8 | 165.43 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -48,7 +48,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -60,13 +60,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/mnasnet/mnasnet_0.75_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -77,7 +76,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/mnasnet/mnasnet_0.75_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -89,9 +88,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/mnasnet/mnasnet_0.75_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. ## References diff --git a/configs/mobilenetv1/README.md b/configs/mobilenetv1/README.md index 778f1403..29cc492c 100644 --- a/configs/mobilenetv1/README.md +++ b/configs/mobilenetv1/README.md @@ -16,31 +16,31 @@ Compared with the traditional convolutional neural network, MobileNetV1's parame Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ---------------- | --------- | --------- | ------- | ---------- | ---------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | -| mobilenet_v1_025 | 54.05 | 77.74 | 43.85 | 0.47 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-cbe3d3b3-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | +| mobilenet_v1_025 | 54.05 | 77.74 | 0.47 | 64 | 8 | 47.47 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-cbe3d3b3-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ---------------- | --------- | --------- | ---------- | ---------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -| mobilenet_v1_025 | 53.87 | 77.66 | 0.47 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | +| mobilenet_v1_025 | 53.87 | 77.66 | 0.47 | 64 | 8 | 42.43 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -48,7 +48,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -60,13 +60,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -77,7 +76,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -89,9 +88,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. ## References diff --git a/configs/mobilenetv2/README.md b/configs/mobilenetv2/README.md index ec2c1bdd..1334e004 100644 --- a/configs/mobilenetv2/README.md +++ b/configs/mobilenetv2/README.md @@ -18,31 +18,31 @@ The main innovation of the model is the proposal of a new layer module: The Inve Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ---------------- | --------- | --------- | ------- | ---------- | ---------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | -| mobilenet_v2_075 | 69.73 | 89.35 | 170.41 | 2.66 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-755932c4-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | +| mobilenet_v2_075 | 69.73 | 89.35 | 2.66 | 256 | 8 | 174.65 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-755932c4-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ---------------- | --------- | --------- | ---------- | ---------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -| mobilenet_v2_075 | 69.98 | 89.32 | 2.66 | 256 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | +| mobilenet_v2_075 | 69.98 | 89.32 | 2.66 | 256 | 8 | 155.94 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -50,7 +50,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -62,13 +62,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -79,7 +78,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -91,9 +90,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. ## References diff --git a/configs/mobilenetv3/README.md b/configs/mobilenetv3/README.md index 0b87493b..88dd3473 100644 --- a/configs/mobilenetv3/README.md +++ b/configs/mobilenetv3/README.md @@ -18,31 +18,31 @@ mobilenet-v3 offers two versions, mobilenet-v3 large and mobilenet-v3 small, for Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ---------------------- | --------- | --------- | ------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- | -| mobilenet_v3_small_100 | 68.07 | 87.77 | 51.97 | 2.55 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-6fa3c17d-910v2.ckpt) | -| mobilenet_v3_large_100 | 75.59 | 92.57 | 52.55 | 5.51 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-bd4e7bdc-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- | +| mobilenet_v3_small_100 | 68.07 | 87.77 | 2.55 | 75 | 8 | 52.38 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-6fa3c17d-910v2.ckpt) | +| mobilenet_v3_large_100 | 75.59 | 92.57 | 5.51 | 75 | 8 | 55.89 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-bd4e7bdc-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ---------------------- | --------- | --------- | ---------- | ---------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------- | -| mobilenet_v3_small_100 | 68.10 | 87.86 | 2.55 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) | -| mobilenet_v3_large_100 | 75.23 | 92.31 | 5.51 | 75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ---------------------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------- | +| mobilenet_v3_small_100 | 68.10 | 87.86 | 2.55 | 75 | 8 | 48.14 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) | +| mobilenet_v3_large_100 | 75.23 | 92.31 | 5.51 | 75 | 8 | 47.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -50,7 +50,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -62,13 +62,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/mobilenetv3/mobilenet_v3_small_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -79,7 +78,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/mobilenetv3/mobilenet_v3_small_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -91,9 +90,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/mobilenetv3/mobilenet_v3_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. ## References diff --git a/configs/mobilevit/README.md b/configs/mobilevit/README.md index 16579283..c2d83251 100644 --- a/configs/mobilevit/README.md +++ b/configs/mobilevit/README.md @@ -16,31 +16,31 @@ MobileViT, a light-weight and general-purpose vision transformer for mobile devi Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ------------------ | --------- | --------- | ------- | ---------- | ---------- | ---------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | -| mobilevit_xx_small | 67.11 | 87.85 | 64.91 | 1.27 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-6f2745c3-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ---------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | +| mobilevit_xx_small | 67.11 | 87.85 | 1.27 | 64 | 8 | 67.24 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-6f2745c3-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ------------------ | --------- | --------- | ---------- | ---------- | ---------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- | -| mobilevit_xx_small | 68.91 | 88.91 | 1.27 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------------------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ---------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- | +| mobilevit_xx_small | 68.91 | 88.91 | 1.27 | 64 | 8 | 53.52 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -48,7 +48,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -60,12 +60,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/mobilevit/mobilevit_xx_small_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -76,7 +75,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/mobilevit/mobilevit_xx_small_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -87,7 +86,3 @@ To validate the accuracy of the trained model, you can use `validate.py` and par ``` python validate.py -c configs/mobilevit/mobilevit_xx_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` - -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. diff --git a/configs/nasnet/README.md b/configs/nasnet/README.md index 3ee1b4a5..e243bdfb 100644 --- a/configs/nasnet/README.md +++ b/configs/nasnet/README.md @@ -22,7 +22,6 @@ compared with previous state-of-the-art methods on ImageNet-1K dataset.[[1](#ref diff --git a/configs/pit/README.md b/configs/pit/README.md index d4e509da..a7615f0b 100644 --- a/configs/pit/README.md +++ b/configs/pit/README.md @@ -18,31 +18,31 @@ PiT (Pooling-based Vision Transformer) is an improvement of Vision Transformer ( Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| ------ | --------- | --------- | ------- | ---------- | ---------- | ---------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | -| pit_ti | 73.26 | 91.57 | 343.45 | 4.85 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pit/pit_ti-33466a0d-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ---------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | +| pit_ti | 73.26 | 91.57 | 4.85 | 128 | 8 | 266.47 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pit/pit_ti-33466a0d-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| ------ | --------- | --------- | ---------- | ---------- | ---------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | -| pit_ti | 72.96 | 91.33 | 4.85 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_ti-e647a593.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| ------ | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ---------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | +| pit_ti | 72.96 | 91.33 | 4.85 | 128 | 8 | 271.50 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_ti-e647a593.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -50,7 +50,7 @@ performance tested on ascend 910(8p) with graph mode ### Preparation #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation. @@ -62,13 +62,12 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/pit/pit_xs_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -79,7 +78,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h If you want to train or finetune the model on a smaller dataset without distributed training, please run: ```shell -# standalone training on a CPU/GPU/Ascend device +# standalone training on single NPU device python train.py --config configs/pit/pit_xs_ascend.yaml --data_dir /path/to/dataset --distribute False ``` @@ -91,9 +90,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par python validate.py -c configs/pit/pit_xs_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt ``` -### Deployment - -Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV. ## References diff --git a/configs/poolformer/README.md b/configs/poolformer/README.md index 678fc079..4efbd75a 100644 --- a/configs/poolformer/README.md +++ b/configs/poolformer/README.md @@ -16,29 +16,29 @@ Figure 2. (a) The overall framework of PoolFormer. (b) The architecture of PoolF Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| :------------: | :-------: | :-------: | :-----: | :--------: | ---------- | ------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | -| poolformer_s12 | 77.49 | 93.55 | 294.54 | 11.92 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/poolformer/poolformer_s12-c7e14eea-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :------------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | +| poolformer_s12 | 77.49 | 93.55 | 11.92 | 128 | 8 | 211.81 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/poolformer/poolformer_s12-c7e14eea-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -| :------------: | :-------: | :-------: | :--------: | ---------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | -| poolformer_s12 | 77.33 | 93.34 | 11.92 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :------------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | +| poolformer_s12 | 77.33 | 93.34 | 11.92 | 128 | 8 | 220.13 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) |
#### Notes - -- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode. - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. ## Quick Start @@ -47,7 +47,7 @@ performance tested on ascend 910(8p) with graph mode #### Installation -Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV. +Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV. #### Dataset Preparation @@ -60,12 +60,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run ```shell -# distributed training on multiple GPU/Ascend devices +# distributed training on multiple NPU devices msrun --bind_core=True --worker_num 8 python train.py --config configs/poolformer/poolformer_s12_ascend.yaml --data_dir /path/to/imagenet ``` -Similarly, you can train the model on multiple GPU devices with the above `msrun` command. For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py). 
@@ -76,7 +75,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/poolformer/poolformer_s12_ascend.yaml --data_dir /path/to/imagenet --distribute False
```
@@ -86,10 +85,6 @@ python train.py --config configs/poolformer/poolformer_s12_ascend.yaml --data_di
validation of poolformer has to be done in amp O3 mode which is not supported, coming soon...
```
-### Deployment
-
-To deploy online inference services with the trained model efficiently, please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).
-
## References
[1]. Yu W, Luo M, Zhou P, et al. Metaformer is actually what you need for vision[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10819-10829.
diff --git a/configs/pvt/README.md b/configs/pvt/README.md
index 6cf4d334..bad43692 100644
--- a/configs/pvt/README.md
+++ b/configs/pvt/README.md
@@ -16,23 +16,25 @@ overhead.[[1](#References)]
Our reproduced model performance on ImageNet-1K is reported as follows.
-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| :------: | :-------: | :-------: | :-----: | :--------: | ---------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------- | -| pvt_tiny | 74.88 | 92.12 | 308.02 | 13.23 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt/pvt_tiny-6676051f-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------- | +| pvt_tiny | 74.88 | 92.12 | 13.23 | 128 | 8 | 237.5 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt/pvt_tiny-6676051f-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -|:--------:|:---------:|:---------:|:----------:|------------|--------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| -| pvt_tiny | 74.81 | 92.18 | 13.23 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------- | +| pvt_tiny | 74.81 | 92.18 | 13.23 | 128 | 8 | 229.63 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) |
@@ -46,7 +48,7 @@ performance tested on ascend 910(8p) with graph mode
#### Installation
-Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.
#### Dataset Preparation
@@ -61,13 +63,12 @@ It is easy to reproduce the reported results with the pre-defined training recip
Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/pvt/pvt_tiny_ascend.yaml --data_dir /path/to/imagenet
```
> If use Ascend 910 devices, need to open SATURATION_MODE via `export MS_ASCEND_CHECK_OVERFLOW_MODE="SATURATION_MODE"`
-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.
For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).
@@ -80,7 +81,7 @@ the global batch size unchanged for reproduction or adjust the learning rate lin
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/pvt/pvt_tiny_ascend.yaml --data_dir /path/to/imagenet --distribute False
```
@@ -95,10 +96,6 @@ with `--ckpt_path`.
python validate.py --model=pvt_tiny --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
-### Deployment
-
-To deploy online inference services with the trained model efficiently, please refer to
-the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).
## References diff --git a/configs/pvtv2/README.md b/configs/pvtv2/README.md index 72928a27..d440125c 100644 --- a/configs/pvtv2/README.md +++ b/configs/pvtv2/README.md @@ -21,23 +21,25 @@ segmentation.[[1](#references)] Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| :-------: | :-------: | :-------: | :-----: | :--------: | ---------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | -| pvt_v2_b0 | 71.25 | 90.50 | 343.22 | 3.67 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-d9cd9d6a-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :-------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | +| pvt_v2_b0 | 71.25 | 90.50 | 3.67 | 128 | 8 | 255.76 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-d9cd9d6a-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -|:---------:|:---------:|:---------:|:----------:|------------|-----------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------| -| pvt_v2_b0 | 71.50 | 90.60 | 3.67 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :-------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | +| pvt_v2_b0 | 71.50 | 90.60 | 3.67 | 128 | 8 | 269.38 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) |
@@ -51,7 +53,7 @@ performance tested on ascend 910(8p) with graph mode
#### Installation
-Please refer to the [installation instruction](https://github.com/mindspore-ecosystem/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.
#### Dataset Preparation
@@ -72,7 +74,6 @@ msrun --bind_core=True --worker_num 8 python train.py --config configs/pvtv2/pvt
-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.
For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).
@@ -85,7 +86,7 @@ keep the global batch size unchanged for reproduction or adjust the learning rat
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/pvtv2/pvt_v2_b0_ascend.yaml --data_dir /path/to/dataset --distribute False
```
@@ -98,9 +99,6 @@ with `--ckpt_path`.
python validate.py -c configs/pvtv2/pvt_v2_b0_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
## References
diff --git a/configs/regnet/README.md b/configs/regnet/README.md
index 5f14169a..c8758cf1 100644
--- a/configs/regnet/README.md
+++ b/configs/regnet/README.md
@@ -25,23 +25,25 @@ has a higher concentration of good models.[[1](#References)]
Our reproduced model performance on ImageNet-1K is reported as follows.
-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode
-| Model | Top-1 (%) | Top-5 (%) | ms/step | Params (M) | Batch Size | Recipe | Download | -| :------------: | :-------: | :-------: | :-----: | :--------: | ---------- | --------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | -| regnet_x_800mf | 76.11 | 93.00 | 50.29 | 7.26 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/regnet/regnet_x_800mf-68fe1cca-910v2.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :------------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | +| regnet_x_800mf | 76.11 | 93.00 | 7.26 | 64 | 8 | 50.74 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/regnet/regnet_x_800mf-68fe1cca-910v2.ckpt) |
-performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -|:--------------:|:---------:|:---------:|:----------:|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------| -| regnet_x_800mf | 76.04 | 92.97 | 7.26 | 64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :------------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- | +| regnet_x_800mf | 76.04 | 92.97 | 7.26 | 64 | 8 | 42.49 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) |
@@ -55,7 +57,7 @@ performance tested on ascend 910(8p) with graph mode
#### Installation
-Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.
#### Dataset Preparation
@@ -70,12 +72,11 @@ It is easy to reproduce the reported results with the pre-defined training recip
Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/regnet/regnet_x_800mf_ascend.yaml --data_dir /path/to/imagenet
```
-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.
For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).
@@ -88,7 +89,7 @@ the global batch size unchanged for reproduction or adjust the learning rate lin
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/regnet/regnet_x_800mf_ascend.yaml --data_dir /path/to/imagenet --distribute False
```
@@ -101,10 +102,6 @@ with `--ckpt_path`.
python validate.py --model=regnet_x_800mf --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
-### Deployment
-
-To deploy online inference services with the trained model efficiently, please refer to
-the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).
## References
diff --git a/configs/repmlp/README.md b/configs/repmlp/README.md
index de1b93be..ffbfb0c8 100644
--- a/configs/repmlp/README.md
+++ b/configs/repmlp/README.md
@@ -28,17 +28,18 @@ Figure 1.
RepMLP Block.[[1](#References)] Our reproduced model performance on ImageNet-1K is reported as follows. -performance tested on ascend 910*(8p) with graph mode +- ascend 910* with graph mode *coming soon* -performance tested on ascend 910(8p) with graph mode +- ascend 910 with graph mode
-| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Download | -|:-----------:|:---------:|:---------:|:----------:|------------|--------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------| -| repmlp_t224 | 76.71 | 93.30 | 38.30 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) | + +| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download | +| :---------: | :-------: | :-------: | :--------: | ---------- | ----- | ------- | --------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- | +| repmlp_t224 | 76.71 | 93.30 | 38.30 | 128 | 8 | 578.23 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) |
@@ -52,7 +53,7 @@ performance tested on ascend 910(8p) with graph mode
#### Installation
-Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.
#### Dataset Preparation
@@ -67,12 +68,11 @@ It is easy to reproduce the reported results with the pre-defined training recip
Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/repmlp/repmlp_t224_ascend.yaml --data_dir /path/to/imagenet
```
-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.
For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).
@@ -85,7 +85,7 @@ the global batch size unchanged for reproduction or adjust the learning rate lin
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
python train.py --config configs/repmlp/repmlp_t224_ascend.yaml --data_dir /path/to/imagenet --distribute False
```
@@ -98,10 +98,6 @@ with `--ckpt_path`.
python validate.py --model=repmlp_t224 --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
-### Deployment
-
-To deploy online inference services with the trained model efficiently, please refer to
-the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).
## References
diff --git a/configs/repvgg/README.md b/configs/repvgg/README.md
index 4319fa71..079284d9 100644
--- a/configs/repvgg/README.md
+++ b/configs/repvgg/README.md
@@ -27,7 +27,6 @@ previous methods.[[1](#references)]