discrepancy between two ways of measuring the map@IoU=0.5 #5643
I don't know the reason. For checking the accuracy of MSCOCO models, I use pycocotools or the CodaLab evaluation server.
I thought of some potential reasons:
On the pycocotools side: …
On the darknet side, there are some discrepancies between using …
I have tried not ignoring iscrowd boxes and not filtering by maxDets on pycocotools, while using the same … Do you have any further thoughts @AlexeyAB? Thanks.
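For reference, a minimal sketch of how those two pycocotools tweaks could be applied; the file paths are placeholders, and zeroing the iscrowd flag in the ground truth is just one way to stop crowd boxes from being treated as ignore regions, not necessarily how it was done above:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths for the val2017 ground truth and the Darknet results.
coco_gt = COCO("instances_val2017.json")
coco_dt = coco_gt.loadRes("coco_results.json")

# "Not ignoring iscrowd boxes": zero the flag so crowd ground-truth boxes
# are matched like normal boxes instead of being ignored during evaluation.
for ann in coco_gt.dataset["annotations"]:
    ann["iscrowd"] = 0

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
# "Not filtering using maxDets": raise the per-image detection cap so the
# default 100-detections limit effectively never truncates anything.
ev.params.maxDets = [1, 10, 10000]
ev.evaluate()
ev.accumulate()
ev.summarize()
```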
@ZhengRui I don't know. Line 1281 in 0ef5052
Try to use the same … Also try to set 11 PR-points instead of 101 points in both Darknet and pycocotools, for easier debugging. Then compare Precision and Recall for one of the classes between Darknet and pycocotools (but not for the person class, to avoid the crowd issue).
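A minimal sketch of that suggestion on the pycocotools side; the paths and the example category id are placeholders, and the matching Darknet run would presumably use -points 11:

```python
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths; category id 3 ("car") is just an example class other
# than "person", to stay away from crowd annotations.
coco_gt = COCO("instances_val2017.json")
coco_dt = coco_gt.loadRes("coco_results.json")

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.params.recThrs = np.linspace(0.0, 1.0, 11)  # 11 PR points instead of 101
ev.params.iouThrs = np.array([0.5])            # only IoU=0.5, as in "detector map"
ev.params.catIds = [3]                         # a single class for comparison
ev.evaluate()
ev.accumulate()

# precision is indexed [iou, recall, category, area range, maxDets];
# take IoU=0.5, all 11 recall points, the single class, area="all", maxDets=100.
precision = ev.eval["precision"][0, :, 0, 0, 2]
recall = ev.eval["recall"][0, 0, 0, 2]
print("precision at the 11 recall points:", precision)
print("recall:", recall)
```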
@ZhengRui …
@tand826 Unfortunately I didn't get time to look into this further.
@AlexeyAB Thanks for the great work! I followed #2145 (comment) to get the map@IoU=0.5 of the yolov4.weights model on the COCO2017 validation set.

./darknet detector map ~/Work/Datasets/yolo_data/coco2017/coco.data cfg/yolov4.cfg weights/yolov4.weights -iou_thresh 0.50 -points 101

gives me map@IoU=0.5 = 73.54; the end of the log looks like this: …

./darknet detector valid ~/Work/Datasets/yolo_data/coco2017/coco.data cfg/yolov4.cfg weights/yolov4.weights

generates coco_results.json inside the results folder.
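For context, coco_results.json should follow the standard COCO detection-results format that pycocotools' loadRes expects: one dict per detection with image_id, category_id, bbox as [x, y, width, height], and score. A small illustration, with made-up values:

```python
import json

# Made-up example entries in the COCO detection-results format;
# bbox is [x, y, width, height] in pixels, score is the detection confidence.
example = [
    {"image_id": 397133, "category_id": 1, "bbox": [100.0, 50.0, 30.0, 60.0], "score": 0.87},
    {"image_id": 397133, "category_id": 3, "bbox": [200.0, 80.0, 45.0, 35.0], "score": 0.42},
]

# Quick sanity check that the generated file parses and has the expected keys.
with open("results/coco_results.json") as f:
    dets = json.load(f)
print(len(dets), "detections; keys of first entry:", sorted(dets[0]))
```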
coco_eval.py then runs the evaluation:

python coco2017_data/coco_eval.py ../../Datasets/coco/annotations/instances_val2017.json ./results/coco_results.json bbox
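The contents of coco_eval.py are not shown in this issue; assuming it is a thin wrapper around pycocotools' COCOeval, it would look roughly like this sketch:

```python
import sys
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Usage: python coco_eval.py <instances_val2017.json> <coco_results.json> bbox
ann_file, res_file, iou_type = sys.argv[1], sys.argv[2], sys.argv[3]

coco_gt = COCO(ann_file)              # COCO2017 validation ground truth
coco_dt = coco_gt.loadRes(res_file)   # detections produced by "detector valid"

ev = COCOeval(coco_gt, coco_dt, iouType=iou_type)
ev.evaluate()
ev.accumulate()
ev.summarize()   # prints the AP table, including the "AP @[ IoU=0.50 ]" line
```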
That run gives map@IoU=0.5 = 74.9, and the log is: …

Do you know why these two methods give different map@IoU=0.5? Maybe I misunderstood something?