Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, the authors systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, the authors propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement for vanilla depthwise convolution, MixConv improves the accuracy and efficiency of existing MobileNets on both ImageNet classification and COCO object detection. [1]
Figure 1. Architecture of MixNet [1]
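As a rough illustration of the idea described above, the sketch below splits the input channels into groups and applies a depthwise convolution with a different kernel size to each group, then concatenates the results. This is a minimal sketch written against the public MindSpore API (nn.Cell, nn.Conv2d, ops.concat), not the actual MixNet implementation in MindCV; the class name and the kernel-size choices are illustrative assumptions.

```python
import mindspore.nn as nn
import mindspore.ops as ops


class MixedDepthwiseConv(nn.Cell):
    """Illustrative MixConv-style block (sketch only): channels are split into
    groups, and each group is processed by a depthwise convolution with a
    different kernel size."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Divide channels roughly evenly; the first group absorbs any remainder.
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)
        self.splits = splits
        # group == channels makes each Conv2d a depthwise convolution.
        self.convs = nn.CellList([
            nn.Conv2d(c, c, k, pad_mode="same", group=c)
            for c, k in zip(splits, kernel_sizes)
        ])

    def construct(self, x):
        outs, start = [], 0
        for c, conv in zip(self.splits, self.convs):
            outs.append(conv(x[:, start:start + c]))
            start += c
        return ops.concat(outs, axis=1)
```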
Our reproduced model performance on ImageNet-1K is reported as follows.
| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
|---|---|---|---|---|---|---|
| mixnet_s | D910x8-G | 75.52 | 92.52 | 4.17 | yaml | weights |
| mixnet_m | D910x8-G | 76.64 | 93.05 | 5.06 | yaml | weights |
| mixnet_l | D910x8-G | 78.73 | 94.31 | 7.38 | yaml | weights |
- Context: Training context denoted as {device}x{pieces}-{MS mode}, where the MindSpore mode can be G (graph mode) or F (pynative mode with ms_function). For example, D910x8-G denotes training on 8 Ascend 910 NPUs in graph mode.
- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
Please refer to the installation instruction in MindCV.
Please download the ImageNet-1K dataset for model training and validation.
- Distributed Training
It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
# distributed training on multiple GPU/Ascend devices
mpirun -n 8 python train.py --config configs/mixnet/mixnet_s_ascend.yaml --data_dir /path/to/imagenet
If the script is executed by the root user, the --allow-run-as-root parameter must be added to mpirun.
Similarly, you can train the model on multiple GPU devices with the above mpirun command.
For a detailed description of all hyper-parameters, please refer to config.py.
Note: As the global batch size (batch_size x num_devices) is an important hyper-parameter, it is recommended to keep the global batch size unchanged for reproduction or adjust the learning rate linearly to a new global batch size.
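For example, the linear scaling rule keeps the ratio of learning rate to global batch size constant. The numbers below are hypothetical placeholders rather than the values in the released recipe; take the actual per-device batch size and learning rate from configs/mixnet/mixnet_s_ascend.yaml before scaling.

```python
# Hypothetical reference values (assumed for illustration only).
ref_batch_size = 128   # per-device batch size in the reference recipe
ref_devices = 8
ref_lr = 0.2           # learning rate in the reference recipe

# Your new training setup.
new_batch_size = 64
new_devices = 4

ref_global = ref_batch_size * ref_devices      # 1024
new_global = new_batch_size * new_devices      # 256
new_lr = ref_lr * new_global / ref_global      # 0.05
print(f"scaled learning rate: {new_lr}")
```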
- Standalone Training
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
# standalone training on a CPU/GPU/Ascend device
python train.py --config configs/mixnet/mixnet_s_ascend.yaml --data_dir /path/to/dataset --distribute False
To validate the accuracy of the trained model, you can use validate.py and pass the checkpoint path with --ckpt_path.
python validate.py -c configs/mixnet/mixnet_s_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
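Alternatively, if you prefer to load the trained checkpoint in your own script, the snippet below is a minimal sketch assuming the mindcv.create_model interface; the checkpoint path is a placeholder.

```python
from mindcv import create_model

# "mixnet_s" is the registered model name; replace the path with your checkpoint.
model = create_model("mixnet_s", num_classes=1000, checkpoint_path="/path/to/ckpt")
model.set_train(False)  # switch to evaluation mode before running inference
```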
Please refer to the deployment tutorial in MindCV.
[1] Tan M, Le Q V. MixConv: Mixed depthwise convolutional kernels. arXiv preprint arXiv:1907.09595, 2019.