
Releases: zjykzj/YOLOv1

Update ultralytics/yolov5 Transforms

07 Jul 15:40
Pre-release
  1. Update the ultralytics/yolov5 (485da42) transforms, adding Mosaic, Perspective, and other augmentations (a minimal Mosaic sketch appears at the end of this note).
  • Trained on the VOC07+12 trainval dataset and evaluated on the VOC2007 Test dataset with an input size of 448x448; the results are as follows:
| | Original (darknet) | Original (darknet) | abeardear/pytorch-YOLO-v1 | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ARCH | YOLOv1 | FastYOLOv1 | ResNet_YOLOv1 | YOLOv1(S=14) | FastYOLOv1(S=14) | YOLOv1 | FastYOLOv1 |
| VOC AP[IoU=0.50] | 63.4 | 52.7 | 66.5 | 71.71 | 60.38 | 66.85 | 52.89 |

After this update, the zjykzj/YOLOv1 implementation fully surpasses the training results reported in the YOLOv1 paper.
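Item 1 refers to the Mosaic and Perspective augmentations ported from the ultralytics/yolov5 transforms. The snippet below is not the repository's (or ultralytics') code; it is a minimal sketch of the Mosaic idea, assuming each input image is already resized to the output size and boxes are pixel [x1, y1, x2, y2] arrays. The function name mosaic4 and all shapes are hypothetical.

```python
import numpy as np

def mosaic4(images, boxes_list, out_size=448):
    """Toy Mosaic: tile 4 images onto a 2x2 canvas and shift their boxes.

    images     : list of 4 HxWx3 uint8 arrays, each already resized to out_size x out_size.
    boxes_list : list of 4 float arrays of shape (N, 4) with [x1, y1, x2, y2] in pixels.
    Returns the (2*out_size, 2*out_size, 3) canvas and the merged, shifted boxes.
    """
    canvas = np.full((2 * out_size, 2 * out_size, 3), 114, dtype=np.uint8)
    offsets = [(0, 0), (out_size, 0), (0, out_size), (out_size, out_size)]
    merged = []
    for img, boxes, (dx, dy) in zip(images, boxes_list, offsets):
        canvas[dy:dy + out_size, dx:dx + out_size] = img
        if len(boxes):
            shifted = boxes.astype(np.float32)
            shifted[:, [0, 2]] += dx  # shift x1, x2 into the tile's position
            shifted[:, [1, 3]] += dy  # shift y1, y2 into the tile's position
            merged.append(shifted)
    merged = np.concatenate(merged, axis=0) if merged else np.zeros((0, 4), np.float32)
    return canvas, merged
```

In a yolov5-style pipeline, a random Perspective/affine warp is then typically applied to the combined canvas, with the boxes transformed and clipped accordingly.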

Refactor Data Module

25 Jun 17:34
Pre-release
  1. Refactor data module;
  2. Fix demo.py;
  3. Update latest training results.
  • Trained on the VOC07+12 trainval dataset and evaluated on the VOC2007 Test dataset with an input size of 448x448; the results are as follows:
| | Original (darknet) | Original (darknet) | abeardear/pytorch-YOLO-v1 | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ARCH | YOLOv1 | FastYOLOv1 | ResNet_YOLOv1 | YOLOv1(S=14) | FastYOLOv1(S=14) | YOLOv1 | FastYOLOv1 |
| VOC AP[IoU=0.50] | 63.4 | 52.7 | 66.5 | 66.06 | 54.81 | 63.06 | 49.89 |

SURPASS

16 May 13:39
Pre-release
  1. Add the use of IGNORE_THRESH in YOLOv1Loss (a minimal sketch follows the table below).
  2. Reset the lambda_* weights to the settings from the YOLOv1 paper.

In this version of the implementation, the training results on the VOC dataset exceed those reported in the YOLOv1 paper.

| | Original (darknet) | Original (darknet) | abeardear/pytorch-YOLO-v1 | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) |
| --- | --- | --- | --- | --- | --- |
| arch | YOLOv1 | FastYOLOv1 | ResNet_YOLOv1 | YOLOv1 | FastYOLOv1 |
| train | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval |
| val | VOC2007 Test | VOC2007 Test | VOC2007 Test | VOC2007 Test | VOC2007 Test |
| VOC AP[IoU=0.50] | 63.4 | 52.7 | 66.5 | 67.21 | 54.93 |
| conf_thre | / | / | 0.5 | 0.005 | 0.005 |
| nms_thre | / | / | / | 0.45 | 0.45 |
| input_size | 448 | 448 | 448 | 448 | 448 |
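The repository's full YOLOv1Loss is not reproduced here. The sketch below only illustrates the two items above, assuming flattened (B, N) prediction tensors and a precomputed best-IoU tensor; the value IGNORE_THRESH = 0.5 and the helper name noobj_conf_loss are assumptions, while lambda_coord = 5 and lambda_noobj = 0.5 are the YOLOv1 paper settings.

```python
import torch

# Item 2: paper-style loss weights.
LAMBDA_COORD = 5.0   # weights the box-regression terms (not shown in this sketch)
LAMBDA_NOOBJ = 0.5   # weights the no-object confidence term below
IGNORE_THRESH = 0.5  # assumed value; the repository's config may differ

def noobj_conf_loss(pred_conf, best_ious, obj_mask):
    """No-object confidence loss with an ignore region (item 1).

    pred_conf : (B, N) predicted objectness scores.
    best_ious : (B, N) best IoU of each predicted box against any ground-truth box.
    obj_mask  : (B, N) bool, True where a box is responsible for an object.
    Boxes that are not responsible but overlap a ground truth by more than
    IGNORE_THRESH are excluded from the no-object penalty.
    """
    noobj_mask = (~obj_mask) & (best_ious <= IGNORE_THRESH)
    loss = pred_conf[noobj_mask].pow(2).sum()  # target confidence is 0 for no-object boxes
    return LAMBDA_NOOBJ * loss
```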

OPTIMIZE

16 May 13:08
Pre-release
  1. Remove the last receptive-field reduction to optimize the YOLOv1 baseline network.
  2. Use F.cross_entropy instead of F.mse_loss for training pred_probs (a before/after sketch follows the table below).
| | Original (darknet) | Original (darknet) | abeardear/pytorch-YOLO-v1 | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) |
| --- | --- | --- | --- | --- | --- |
| arch | YOLOv1 | FastYOLOv1 | ResNet_YOLOv1 | YOLOv1 | FastYOLOv1 |
| train | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval |
| val | VOC2007 Test | VOC2007 Test | VOC2007 Test | VOC2007 Test | VOC2007 Test |
| VOC AP[IoU=0.50] | 63.4 | 52.7 | 66.5 | 62.55 | 50.46 |
| conf_thre | / | / | 0.5 | 0.005 | 0.005 |
| nms_thre | / | / | / | 0.45 | 0.45 |
| input_size | 448 | 448 | 448 | 448 | 448 |
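A minimal before/after sketch of item 2, assuming 20 VOC classes and class predictions gathered only from cells that contain an object; the tensor shapes and the sum reduction are illustrative, not the repository's exact code.

```python
import torch
import torch.nn.functional as F

pred_probs = torch.randn(8, 20)          # raw class scores for 8 object cells, 20 VOC classes
target_cls = torch.randint(0, 20, (8,))  # ground-truth class indices for those cells

# Before this release: MSE against one-hot class targets, as in the original paper.
one_hot = F.one_hot(target_cls, num_classes=20).float()
mse_cls_loss = F.mse_loss(pred_probs, one_hot, reduction="sum")

# After this release: cross-entropy on the raw scores, treated as logits.
ce_cls_loss = F.cross_entropy(pred_probs, target_cls, reduction="sum")
```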

UPDATE

14 May 01:15
Pre-release

Based on zjykzj/YOLOv2, the entire project has been restructured, including the training, data, and inference modules. In addition, the loss function YOLOv1Loss has been reimplemented and is defined entirely with reference to the YOLOv1 paper.

| | Original (darknet) | Original (darknet) | abeardear/pytorch-YOLO-v1 | zjykzj/YOLOv1(This) | zjykzj/YOLOv1(This) |
| --- | --- | --- | --- | --- | --- |
| arch | YOLOv1 | FastYOLOv1 | ResNet_YOLOv1 | YOLOv1 | FastYOLOv1 |
| train | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval | VOC07+12 trainval |
| val | VOC2007 Test | VOC2007 Test | VOC2007 Test | VOC2007 Test | VOC2007 Test |
| VOC AP[IoU=0.50] | 63.4 | 52.7 | 66.5 | 52.95 | 43.65 |
| conf_thre | / | / | 0.5 | 0.005 | 0.005 |
| nms_thre | / | / | / | 0.45 | 0.45 |
| input_size | 448 | 448 | 448 | 448 | 448 |

INIT

13 May 23:35
Pre-release

In the initial version, the basic training and loss function are implemented with reference to abeardear/pytorch-YOLO-v1. In addition, the YOLOv1 and FastYOLOv1 model definitions are implemented with the help of ChatGPT.

At present, no effective evaluation code is implemented, and only the loss is printed during training. Judging from the loss, the results are not yet satisfactory.