CrowdPose example
Based on the xtcocoapi demo: https://github.com/jin-s13/xtcocoapi/blob/master/demos/demo_crowdpose.py
[1]:
import logging
import numpy as np
import faster_coco_eval
from faster_coco_eval import COCO, COCOeval_faster
print(f"{faster_coco_eval.__version__=}")
logging.root.setLevel("INFO")
logging.debug("Запись.")
faster_coco_eval.__version__='1.6.4'
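If the import fails, the package is available from PyPI (a standard pip environment is assumed):

pip install faster-coco-eval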
[2]:
gt_file = '../tests/dataset/example_crowdpose_val.json'    # ground-truth annotations
preds = '../tests/dataset/example_crowdpose_preds.json'    # predicted keypoints (COCO results format)
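Before evaluating, the ground-truth file can be sanity-checked through the COCO API. The snippet below is a small sketch (the printed fields assume the standard COCO annotation layout):

gt_preview = COCO(gt_file)
print(gt_preview.loadCats(gt_preview.getCatIds()))  # category metadata, incl. keypoint names
print(len(gt_preview.getImgIds()), 'images,', len(gt_preview.getAnnIds()), 'annotations')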
[3]:
# Per-keypoint OKS sigmas for the 14 CrowdPose keypoints
sigmas = np.array([
    .79, .79, .72, .72, .62, .62, 1.07,
    1.07, .87, .87, .89, .89, .79, .79,
]) / 10.0
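These sigmas are the per-keypoint tolerances in the object keypoint similarity (OKS) score, which averages exp(-d_i^2 / (2 * area * (2 * sigma_i)^2)) over the visible keypoints. For reference, a minimal NumPy sketch of that formula (function and argument names are illustrative, not part of the library API; with use_area=False the evaluator derives the object scale from the ground-truth bounding box rather than an 'area' field):

def oks(gt_kpts, dt_kpts, area, sigmas):
    # gt_kpts, dt_kpts: (K, 3) arrays of (x, y, visibility); area: object scale
    var = (2.0 * sigmas) ** 2
    vis = gt_kpts[:, 2] > 0
    d2 = (gt_kpts[:, 0] - dt_kpts[:, 0]) ** 2 + (gt_kpts[:, 1] - dt_kpts[:, 1]) ** 2
    e = d2 / var / (area + np.spacing(1)) / 2.0
    return float(np.exp(-e[vis]).mean()) if vis.any() else 0.0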
[4]:
cocoGt = COCO(gt_file)          # load ground truth
cocoDt = cocoGt.loadRes(preds)  # load predictions against it
cocoEval = COCOeval_faster(
    cocoGt, cocoDt, 'keypoints_crowd',
    kpt_oks_sigmas=sigmas, use_area=False,
)
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
cocoEval.stats_as_dict
INFO:faster_coco_eval.core.cocoeval:Evaluate annotation type *keypoints_crowd*
INFO:faster_coco_eval.core.cocoeval:COCOeval_opt.evaluate() finished...
INFO:faster_coco_eval.core.cocoeval:DONE (t=0.00s).
INFO:faster_coco_eval.core.cocoeval:Accumulating evaluation results...
INFO:faster_coco_eval.core.cocoeval:COCOeval_opt.accumulate() finished...
INFO:faster_coco_eval.core.cocoeval:DONE (t=0.00s).
INFO:faster_coco_eval.core.cocoeval: Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.788
INFO:faster_coco_eval.core.cocoeval: Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.988
INFO:faster_coco_eval.core.cocoeval: Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.731
INFO:faster_coco_eval.core.cocoeval: Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.822
INFO:faster_coco_eval.core.cocoeval: Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 1.000
INFO:faster_coco_eval.core.cocoeval: Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.778
INFO:faster_coco_eval.core.cocoeval: Average Precision (AP) @[ IoU=0.50:0.95 | type= easy | maxDets= 20 ] = 1.000
INFO:faster_coco_eval.core.cocoeval: Average Precision (AP) @[ IoU=0.50:0.95 | type=medium | maxDets= 20 ] = 0.980
INFO:faster_coco_eval.core.cocoeval: Average Precision (AP) @[ IoU=0.50:0.95 | type= hard | maxDets= 20 ] = 0.412
[4]:
{'AP_all': 0.7877215935879303,
'AP_50': 0.9881188118811886,
'AP_75': 0.7314356435643564,
'AR_all': 0.8222222222222223,
'AR_50': 1.0,
'AR_75': 0.7777777777777778,
'AP_easy': 1.0,
'AP_medium': 0.9802,
'AP_hard': 0.4116}
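Individual metrics can then be read straight from this dictionary, e.g.:

print(cocoEval.stats_as_dict['AP_hard'])  # 0.4116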
Original code (xtcocotools)
from xtcocotools.coco import COCO
from xtcocotools.cocoeval import COCOeval
cocoGt = COCO(gt_file)
cocoDt = cocoGt.loadRes(preds)
cocoEval = COCOeval(cocoGt, cocoDt, 'keypoints_crowd', sigmas, use_area=False)
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
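On this toy dataset both runs finish in about 0.00 s, so a meaningful speed comparison needs a larger prediction set. A sketch of such a side-by-side timing (each evaluator gets objects loaded by its own COCO class; helper names are illustrative):

import time

def timed_run(make_eval):
    # time the full evaluate/accumulate/summarize pipeline
    ev = make_eval()
    start = time.perf_counter()
    ev.evaluate()
    ev.accumulate()
    ev.summarize()
    return time.perf_counter() - start

fast_gt = faster_coco_eval.COCO(gt_file)
fast_dt = fast_gt.loadRes(preds)
t_fast = timed_run(lambda: COCOeval_faster(
    fast_gt, fast_dt, 'keypoints_crowd', kpt_oks_sigmas=sigmas, use_area=False))

xt_gt = COCO(gt_file)  # COCO now refers to xtcocotools' class, per the cell above
xt_dt = xt_gt.loadRes(preds)
t_orig = timed_run(lambda: COCOeval(
    xt_gt, xt_dt, 'keypoints_crowd', sigmas, use_area=False))

print(f'faster_coco_eval: {t_fast:.3f} s  vs  xtcocotools: {t_orig:.3f} s')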
Original result
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints_crowd*
DONE (t=0.00s).
Accumulating evaluation results...
DONE (t=0.00s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.788
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.988
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.731
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.822
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 1.000
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.778
Average Precision (AP) @[ IoU=0.50:0.95 | type= easy | maxDets= 20 ] = 1.000
Average Precision (AP) @[ IoU=0.50:0.95 | type=medium | maxDets= 20 ] = 0.980
Average Precision (AP) @[ IoU=0.50:0.95 | type= hard | maxDets= 20 ] = 0.412
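The faster-coco-eval summary above matches this original xtcocotools output line for line.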