https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch
https://gist.github.com/akTwelve/dc79fc8b9ae66828e7c7f648049bc42d
https://github.com/mnslarcher/kmeans-anchors-ratios
https://github.com/alzayats/DeepFish
https://github.com/l3p-cv/lost
The goal of this project was to create an interface to multiple object detectors for fish detection. Due to time constraints, only one detector (EfficientDet) is implemented so far, and the utilities are currently hardcoded for EfficientDet. We conducted several experiments on different fish datasets, including the publicly available Fish4Knowledge and DeepFish datasets.
Here are the mAPs we achieved with this implementation of EfficientDet:

- mAP50:95 16.7, mAP50 29.7 (single class)
- mAP50:95 9.7, mAP50 15.5 (all classes)
- mAP50:95 26, mAP50 41.4 (single class)
- mAP50:95 73, mAP50 93 (single class)

- python setup_ds.py
- conda activate FishDet
COCO-pretrained EfficientDet weights, plus the DeepFish-trained FishDet weights:
python get_pretrained_weights.py
python setup_ds.py --mode coco --c {compound_coefficient} --ds_name {name of dataset/project} --path {path/to/COCO/Dataset/Folder}
The folder structure must look like this:
/ds_name
    /annotations
        /instances_train.json
        /instances_val.json
    /train
        /img1
        /img2
        ...
        /imgn
    /val
        /img1
        /img2
        ...
        /imgn
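The layout above can be checked before running the setup script. The following is a hypothetical helper (not part of this repo) that verifies a dataset folder matches the expected COCO-style layout:

```python
import os

# Hypothetical helper (not part of this repo): checks that a dataset folder
# matches the layout expected by `setup_ds.py --mode coco`.
def check_coco_layout(ds_root):
    """Return a list of missing paths; an empty list means the layout is OK."""
    required = [
        os.path.join(ds_root, "annotations", "instances_train.json"),
        os.path.join(ds_root, "annotations", "instances_val.json"),
        os.path.join(ds_root, "train"),
        os.path.join(ds_root, "val"),
    ]
    return [p for p in required if not os.path.exists(p)]
```

Running this on the dataset root before `setup_ds.py` makes missing folders or misnamed annotation files obvious up front.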
So far, only "deepfish" is supported.
python setup_ds.py --mode known --ds_name {ds_name}
This will use the project config I generated with --c 4, as well as bbox annotations I generated from the original per-pixel annotations.
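Deriving a bbox from a per-pixel annotation boils down to taking the extent of the mask. This is an illustrative sketch (not the repo's actual converter) of how a DeepFish-style binary segmentation mask maps to a COCO-style bbox:

```python
import numpy as np

# Illustrative sketch (not the repo's actual converter): derive a COCO-style
# [x, y, width, height] bbox from a binary per-pixel mask.
def mask_to_bbox(mask):
    """mask: 2D numpy array, nonzero where the object is."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # empty mask, no object
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    # COCO bboxes are [x, y, width, height]
    return [int(x_min), int(y_min),
            int(x_max - x_min + 1), int(y_max - y_min + 1)]
```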
For inference purposes.
python setup_ds.py --mode any --ds_name {ds_name} --path {path/to/imagefolder}
python setup_ds.py --mode lost --c {compound_coefficient} --ds_name {name of dataset/project} --path {path/to/LOST/Dataset/Folder} --anno_file {name of anno file. Must be .csv}
Directory structure must be as follows:
/ds_name
    /annos_out
        /anno_file
    /imgs
        /img1
        ...
        /imgn
python combine_ds.py --ds1 {name of first dataset/project} --ds2 {name of second dataset/project} --ds_name {name of new dataset} --c {compound_coefficient}
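Under the hood, combining two COCO-style datasets means re-assigning image and annotation IDs so they stay unique. The sketch below illustrates this (it is not the repo's combine_ds.py, and it assumes both datasets share the same categories):

```python
# Illustrative sketch (not the repo's combine_ds.py): merge two COCO-style
# annotation dicts by offsetting the image and annotation IDs of the second
# dataset so all IDs stay unique. Categories are assumed identical here.
def combine_coco(ds1, ds2):
    combined = {
        "images": list(ds1["images"]),
        "annotations": list(ds1["annotations"]),
        "categories": ds1["categories"],
    }
    img_offset = max((img["id"] for img in ds1["images"]), default=0)
    ann_offset = max((a["id"] for a in ds1["annotations"]), default=0)
    for img in ds2["images"]:
        combined["images"].append({**img, "id": img["id"] + img_offset})
    for ann in ds2["annotations"]:
        combined["annotations"].append({
            **ann,
            "id": ann["id"] + ann_offset,
            # image references must follow the shifted image IDs
            "image_id": ann["image_id"] + img_offset,
        })
    return combined
```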
python Interface.py --do train --project {name of project/dataset} --c {compound_coefficient} --load_weights {weights file in Yet-another-EfficientDet/weights; if omitted, training starts from randomly initialized weights} --detector EfficientDet --batch_size {batch_size} --lr {learning rate} --num_epochs {number of epochs} --head_only (if True, only the regression/classification heads are trained; if False, the whole network is trained)
The best weights will be the newest .pth file in Yet-another-EfficientDet/logs/{ds_name}.
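"Newest .pth file" means the most recently modified checkpoint, which can be picked programmatically. A hypothetical snippet (not part of the repo):

```python
import glob
import os

# Hypothetical snippet (not part of the repo): pick the most recently
# modified .pth checkpoint from the log directory, which per this README
# holds the best weights after training.
def newest_checkpoint(log_dir):
    paths = glob.glob(os.path.join(log_dir, "*.pth"))
    return max(paths, key=os.path.getmtime) if paths else None
```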
Performs COCO-style evaluation on the dataset.
python Interface.py --do eval --project {name of project/ds} --load_weights {.pth file in Yet-another-EfficientDet/weights/} --detector EfficientDet --c {compound_coefficient}
python Interface.py --do infer --project {name of project/dataset} --c {compound_coefficient} --load_weights {weights in Yet-another-EfficientDet/weights; if omitted, an error is thrown} --detector EfficientDet --infer_mode {one of ["lost", "coco", "viz", "all"]; "lost" creates LOST-style annotations, "coco" creates COCO-style annotations, "viz" saves the images with bboxes drawn in them, "all" does everything} --path {path/to/images to run inference on} --conf_threshold {confidence threshold used to filter bboxes}
Inference results can be found in Yet-another-EfficientDet/inference/{timestamp of session}. Note that there is still a bug in "viz", so bboxes may be visualized incorrectly.
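The effect of --conf_threshold is simple score filtering. A minimal sketch, assuming detections are dicts with a "score" key (the repo's internal format may differ):

```python
# Illustrative sketch of what --conf_threshold does: keep only detections
# whose confidence score meets the threshold. Detections are assumed to be
# dicts with a "score" key; the repo's internal format may differ.
def filter_by_confidence(detections, conf_threshold):
    return [d for d in detections if d["score"] >= conf_threshold]
```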
python lost_pipeline.py --path {path/to/lost} --c 4
The script should print helpful tips and instructions. If you have any problems running it, contact me, or run the OS commands used in the script one after another by hand. You will probably have to set the dataset names and paths yourself, though.