fix
yuedongli1 committed Aug 11, 2023
1 parent 97e9810 commit 2a00417
Showing 2 changed files with 9 additions and 4 deletions.
10 changes: 7 additions & 3 deletions examples/finetune_SHWD/README.md
@@ -72,8 +72,8 @@ __BASE__: [
'../../configs/yolov7/yolov7-tiny.yaml',
]

- per_batch_size: 16 # 16 * 8 = 128
- img_size: 640 # batch size per device; total batch size = per_batch_size * device_num
+ per_batch_size: 16 # batch size per device; total batch size = per_batch_size * device_num
+ img_size: 640 # image size
weight: ./yolov7-tiny_pretrain.ckpt
strict_load: False # whether to load ckpt parameters strictly; default True. If set to False and the number of classes differs, the weights of the final classifier layer are dropped
log_interval: 10 # print the loss once every log_interval iterations
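The comment on per_batch_size encodes simple arithmetic: the global batch size is the per-device batch size multiplied by the number of devices. A minimal sketch (function name hypothetical, not part of MindYOLO):

```python
def total_batch_size(per_batch_size: int, device_num: int) -> int:
    # Global batch size seen by the optimizer across all devices.
    return per_batch_size * device_num

# With the values from the config above: 16 per device on 8 devices.
print(total_batch_size(16, 8))  # 128
```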
@@ -104,7 +104,11 @@ optimizer:
The number of classes in a custom dataset usually differs from the COCO dataset. In MindYOLO, each model's detection head structure depends on the number of dataset classes, so directly loading a pretrained model may fail due to shape mismatches. Set the strict_load parameter to False in the yaml config file; MindYOLO will then automatically discard the parameters with mismatched shapes and emit a warning that those module parameters were not loaded.
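The strict_load=False behavior described above can be sketched as a shape filter over the checkpoint's parameter dict. This is an illustrative approximation with hypothetical names, not MindYOLO's actual loading code:

```python
def filter_mismatched(model_shapes: dict, ckpt: dict) -> dict:
    """Keep only checkpoint entries whose shape matches the model.

    model_shapes: parameter name -> expected shape tuple
    ckpt: parameter name -> (shape tuple, weights) from the checkpoint
    """
    loadable = {}
    for name, (shape, weights) in ckpt.items():
        if model_shapes.get(name) == shape:
            loadable[name] = weights
        else:
            # Mirrors the warning raised for skipped parameters.
            print(f"warning: skipping '{name}' (shape mismatch)")
    return loadable

# A 2-class head cannot take an 80-class (COCO) classifier's weights.
model = {"backbone.conv1": (32, 3, 3, 3), "head.cls": (2, 256)}
ckpt = {"backbone.conv1": ((32, 3, 3, 3), "w1"), "head.cls": ((80, 256), "w2")}
loaded = filter_mismatched(model, ckpt)  # only backbone.conv1 survives
```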
#### Finetune
During finetuning, you can start with the default configuration; if the results are poor, consider adjusting the following parameters:
* Lower the learning rate to help the loss converge
* Adjust per_batch_size according to actual memory usage; in general, a larger per_batch_size yields more accurate gradient estimates
* Adjust epochs according to whether the loss has converged
* Adjust anchors according to the actual object sizes
Since the SHWD training set has only about 6,000 images, the yolov7-tiny model is chosen for training.
* Run distributed model training on multiple NPU/GPU devices, for example on 8 devices:
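A common heuristic connecting the first two bullets above is linear learning-rate scaling: keep the learning rate proportional to the global batch size. This is a general rule of thumb with a hypothetical function name, not something MindYOLO prescribes:

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    # Linear scaling rule: lr grows or shrinks with the global batch size.
    return base_lr * new_batch / base_batch

# Halving the global batch size from 128 to 64 halves the learning rate.
print(scaled_lr(0.01, 128, 64))  # 0.005
```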
3 changes: 2 additions & 1 deletion requirements/cpu_requirements.txt
@@ -4,4 +4,5 @@ pillow == 9.5.0
mindspore
pylint
pytest
-opencv-python
+opencv-python
+download