- Verifying mAP for the custom dataset with results generated from AlexeyAB/darknet
$ ./darknet detector test baby.data baby.cfg baby.weights -thresh 0.001 -dont_show -ext_output < test.txt > result.txt
- Removing lines containing specific words from the text file
$ grep -vE "(Detection|Enter)" result.txt > result2.txt
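The same filtering can also be done without grep; a minimal Python sketch (the `filter_result` helper and file names are illustrative, not part of the repo):

```python
def filter_result(src_path, dst_path, banned=('Detection', 'Enter')):
    """Drop every line containing any banned word; mirrors
    grep -vE "(Detection|Enter)" result.txt > result2.txt"""
    with open(src_path) as src, open(dst_path, 'w') as dst:
        for line in src:
            if not any(word in line for word in banned):
                dst.write(line)
```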
- Modify separator_key='...' in inpred_yolo2json.py
- Run the code demo-baby.py and the Yolo Darknet detection/ground-truth files will be converted to pycocotools json format
- It might be a good idea to rename the image files sequentially, e.g. train1.jpg, train2.jpg, ..., train5011.jpg
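Sequential renaming can be scripted; a hedged sketch (`rename_sequential` is a hypothetical helper — it assumes the originals do not already use the target names, and the paired Yolo .txt label files would need the same treatment):

```python
import os

def rename_sequential(folder, prefix='train', ext='.jpg'):
    """Rename all `ext` files in `folder` to prefix1.jpg, prefix2.jpg, ...
    in sorted order of their original names."""
    images = sorted(f for f in os.listdir(folder) if f.endswith(ext))
    for i, name in enumerate(images, start=1):
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, f'{prefix}{i}{ext}'))
```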
- Run the code demo-mAP.py and the mAP will be shown on screen
- Reset maxDets if needed
mAP with COCO API
- mAP with pycocotools (baby-v4) train
- Reset maxDets
cocoEval.params.maxDets = [1, 100, 1000]
- mAP@[IoU=0.50] with Darknet 97.61 %
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.704
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.971
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.850
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.580
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.739
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.805
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.388
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.751
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.755
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.645
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.788
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.848
- mAP with pycocotools (baby-v4) validation
- mAP@[IoU=0.50] with Darknet 96.64 %
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.685
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.962
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.821
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.554
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.708
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.783
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.380
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.707
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.745
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.626
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.765
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.827
- mAP with pycocotools (car-v4-tiny) train
- mAP@[IoU=0.50] with Darknet 95.95 %
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.753
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.956
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.913
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.636
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.810
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.850
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.702
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.796
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.796
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.696
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.848
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.875
- mAP with pycocotools (car-v4-tiny) validation
- mAP@[IoU=0.50] with Darknet 99.81 %
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.625
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.997
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.705
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.464
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.628
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.717
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.645
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.692
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.692
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.578
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.695
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.717
- mAP with pycocotools (emotion-v4-tiny) validation
- mAP@[IoU=0.50] with Darknet 68.22 %
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.484
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.683
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.635
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.568
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.464
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.479
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.761
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.761
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.761
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.746
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.709
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.770
V6baby - mAP with COCO API
- mAP with pycocotools images291
- yolov4
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.855
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.984
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.954
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.649
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.845
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.918
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.407
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.885
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.887
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.711
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.881
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.943
AP@[ IoU=0.50 ] (%)
***********************
Category : AH : 97.17
Category : BH : 99.56
AP@[ IoU=0.50:0.95 ] (%)
***********************
Category : AH : 80.14
Category : BH : 90.88
- mAP with pycocotools images291
- yolov5x
Class Images Labels P R mAP@.5 mAP@.5:.95:
all 291 951 0.991 0.962 0.986 0.866
AdultHead 291 678 0.991 0.931 0.977 0.808
RealBabyHead 291 273 0.991 0.993 0.996 0.923
- mAP with pycocotools validation
- yolov4
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.685
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.962
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.821
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.554
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.707
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.783
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.380
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.707
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.745
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.626
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.765
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.827
AP@[ IoU=0.50 ] (%)
***********************
Category : B : 96.94
Category : Y : 94.88
Category : W : 96.33
Category : R : 93.79
Category : AH : 96.45
Category : BH : 98.92
AP@[ IoU=0.50:0.95 ] (%)
***********************
Category : B : 66.45
Category : Y : 62.23
Category : W : 64.93
Category : R : 62.45
Category : AH : 70.47
Category : BH : 84.64
- mAP with pycocotools validation
- yolov5x
Class Images Labels P R mAP@.5 mAP@.5:.95:
all 3401 7804 0.961 0.923 0.963 0.725
blue 3401 559 0.967 0.938 0.974 0.693
yellow 3401 813 0.952 0.902 0.945 0.651
white 3401 786 0.956 0.907 0.957 0.672
red 3401 749 0.933 0.878 0.934 0.647
AdultHead 3401 2349 0.964 0.923 0.969 0.764
RealBabyHead 3401 2548 0.996 0.991 0.996 0.922
V7baby - mAP with COCO API
- mAP with pycocotools images291
- yolov4
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.880
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.994
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.983
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.716
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.883
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.923
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.406
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.907
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.908
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.760
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.910
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.949
AP@[ IoU=0.50 ] (%)
***********************
Category : AH : 98.86
Category : BH : 100.00
AP@[ IoU=0.50:0.95 ] (%)
***********************
Category : AH : 83.22
Category : BH : 92.80
- mAP with pycocotools images291
- yolov5x
Class Images Labels P R mAP@.5 mAP@.5:.95:
all 291 951 0.998 0.998 0.997 0.92
AdultHead 291 678 0.999 0.996 0.997 0.888
RealBabyHead 291 273 0.998 1 0.996 0.952
- mAP with pycocotools images291
- yolov5m (640)
Class Images Labels P R mAP@.5 mAP@.5:.95:
all 291 951 1 0.997 0.997 0.92
AdultHead 291 678 1 0.994 0.997 0.887
RealBabyHead 291 273 0.999 1 0.997 0.953
- mAP with pycocotools validation
- yolov4
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.686
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.963
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.822
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.556
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.709
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.781
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.379
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.706
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.745
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.625
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.766
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.827
AP@[ IoU=0.50 ] (%)
***********************
Category : B : 97.19
Category : Y : 94.91
Category : W : 95.98
Category : R : 94.06
Category : AH : 96.48
Category : BH : 98.92
AP@[ IoU=0.50:0.95 ] (%)
***********************
Category : B : 66.65
Category : Y : 62.23
Category : W : 64.49
Category : R : 62.97
Category : AH : 70.71
Category : BH : 84.40
- mAP with pycocotools validation
- yolov5x
Class Images Labels P R mAP@.5 mAP@.5:.95:
all 3401 7804 0.961 0.924 0.963 0.726
blue 3401 559 0.963 0.937 0.971 0.689
yellow 3401 813 0.943 0.894 0.944 0.653
white 3401 786 0.957 0.91 0.957 0.672
red 3401 749 0.938 0.888 0.942 0.654
AdultHead 3401 2349 0.967 0.927 0.967 0.766
RealBabyHead 3401 2548 0.997 0.991 0.996 0.918
- mAP with pycocotools validation
- yolov5m (640)
Class Images Labels P R mAP@.5 mAP@.5:.95:
all 3401 7804 0.951 0.936 0.964 0.727
blue 3401 559 0.952 0.957 0.973 0.7
yellow 3401 813 0.94 0.915 0.949 0.651
white 3401 786 0.947 0.917 0.962 0.68
red 3401 749 0.916 0.903 0.937 0.652
AdultHead 3401 2349 0.96 0.93 0.969 0.766
RealBabyHead 3401 2548 0.994 0.992 0.995 0.912
V7baby - Comparison Darknet v.s. tkDNN-TensorRT (FPS)
- Inference FPS of Yolov4 with Darknet and tkDNN-TensorRT on custom trained model
- Platform: GeForce RTX 2080 Ti
- Video Dimensions: 848 x 480
Network Size | Darknet AVG_FPS | tkDNN-TensorRT FP32 (B=1) | tkDNN-TensorRT FP32 (B=4) | tkDNN-TensorRT FP16 (B=1) | tkDNN-TensorRT FP16 (B=4) |
---|---|---|---|---|---|
Yolov4 512 | 78.3 | 102.8 | 124.0 | 154.2 | 202.9 |
We can modify cocoapi/PythonAPI/pycocotools/cocoeval.py to calculate AP for each class (https://stackoverflow.com/questions/56247323/coco-api-evaluation-for-subset-of-classes). Add the following code between lines 458-464:
num_classes = 6
avg_ap = 0.0
if ap == 1:
    for i in range(0, num_classes):
        print('category : {0} : {1}'.format(i, np.mean(s[:, :, i, :])))
        avg_ap += np.mean(s[:, :, i, :])
    print('(all categories) mAP : {}'.format(avg_ap / num_classes))
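Alternatively, the per-class AP can be read from the accumulated precision array without patching the library; a hedged sketch (`per_class_ap` is a hypothetical helper operating on an already-accumulated COCOeval instance):

```python
import numpy as np

def per_class_ap(coco_eval, iou_thr=None):
    """AP per category from COCOeval's accumulated results.

    The precision array has shape [T (IoU), R (recall), K (class),
    A (area range), M (maxDets)]; we take area='all' (index 0) and the
    largest maxDets (index -1), averaging only valid entries (> -1).
    """
    p = coco_eval.eval['precision']
    if iou_thr is not None:
        t = np.where(np.isclose(coco_eval.params.iouThrs, iou_thr))[0]
        p = p[t]
    out = {}
    for k, cat_id in enumerate(coco_eval.params.catIds):
        s = p[:, :, k, 0, -1]
        out[cat_id] = np.mean(s[s > -1]) if (s > -1).any() else float('nan')
    return out
```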
Issues related to COCO API
Goals:
- Convert Yolo Darknet Ground Truth Files to pycocotools json (Done)
- Convert Yolo Darknet Detection Files to pycocotools json (Done)
- Convert Yolo Darknet Ground Truth/Detection Files to /groundtruths /detections folder usable by rafaelpadilla/Object-Detection-Metrics
- Customizable Ground Truth/Detection format for custom datasets
Current state:
Verifying mAP for the 5k validation dataset with results generated from AlexeyAB/darknet.
./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -thresh 0.005 -dont_show -ext_output < /5k.txt > result.txt
Refer to demo.ipynb for details