COCOeval_faster
- class faster_coco_eval.core.COCOeval_faster(cocoGt=None, cocoDt=None, iouType='segm', print_function=logger.info, extra_calc=False, kpt_oks_sigmas=None, use_area=True, lvis_style=False, separate_eval=False, boundary_dilation_ratio=0.02, boundary_cpu_count=4)
Bases: COCOeval
This is a slightly modified version of the original COCO API, where the functions evaluateImg() and accumulate() are implemented in C++ to speed up evaluation.
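A minimal usage sketch, assuming COCO-format ground-truth and detection files; the file paths are placeholders and the top-level imports assume the package re-exports COCO and COCOeval_faster:

```python
from faster_coco_eval import COCO, COCOeval_faster

cocoGt = COCO("instances_val2017.json")     # ground-truth annotations (placeholder path)
cocoDt = cocoGt.loadRes("detections.json")  # detections in COCO results format (placeholder path)

cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType="bbox")
cocoEval.evaluate()    # per-image evaluation (C++ backend)
cocoEval.accumulate()  # accumulation (C++ backend)
cocoEval.summarize()   # prints the standard COCO metric table
print(cocoEval.stats_as_dict)
```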
- accumulate()
Accumulate per-image evaluation results and store the result in self.eval.
Does not support changing parameter settings from those used by self.evaluate().
- static calc_auc(recall_list, precision_list, method='c++')
Calculate the area under the precision-recall curve.
- Parameters:
recall_list (Union[List[float], np.ndarray]) – List or array of recall values.
precision_list (Union[List[float], np.ndarray]) – List or array of precision values.
method (str, optional) – Method used to calculate the AUC. Defaults to “c++”.
- Returns:
Area under the precision-recall curve.
- Return type:
float
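A small sketch with synthetic recall/precision arrays; the values are illustrative only, not from a real evaluation:

```python
import numpy as np

from faster_coco_eval import COCOeval_faster

recall = np.linspace(0.0, 1.0, 101)     # monotonically increasing recall values
precision = np.linspace(1.0, 0.5, 101)  # illustrative precision values

auc = COCOeval_faster.calc_auc(recall, precision)  # method defaults to "c++"
print(f"AUC: {auc:.4f}")
```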
- compute_mAUC()
Compute the mean Area Under Curve (mAUC) metric (see the sketch after compute_mIoU() below).
- Returns:
Mean AUC across all categories and area ranges.
- Return type:
float
- compute_mIoU()
Compute the mean Intersection over Union (mIoU) metric.
- Returns:
Mean IoU across all matched detections and ground truths.
- Return type:
float
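A sketch that computes both compute_mAUC() and compute_mIoU() after a full evaluation run; the paths are placeholders, and passing extra_calc=True is assumed to be needed for these extra metrics:

```python
from faster_coco_eval import COCO, COCOeval_faster

cocoGt = COCO("instances_val2017.json")     # placeholder ground-truth file
cocoDt = cocoGt.loadRes("detections.json")  # placeholder detections file

cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType="bbox", extra_calc=True)
cocoEval.run()  # evaluate() + accumulate() + summarize()

print("mIoU:", cocoEval.compute_mIoU())
print("mAUC:", cocoEval.compute_mAUC())
```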
- evaluate()
Run per-image evaluation on the given images and store the results in self.evalImgs_cpp, a data structure that isn’t readable from Python but is used by the C++ implementation of accumulate().
Unlike the original COCO PythonAPI, we don’t populate the data structure self.evalImgs, because it is a computational bottleneck.
- property extended_metrics
Computes extended evaluation metrics for object detection results.
Calculates per-class and overall (macro) metrics such as mean average precision (mAP) at IoU thresholds, precision, recall, and F1-score. Results are computed from the evaluation results stored in the object. When categories are used, metrics are reported separately for each class as well as for the overall dataset (see the sketch after the notes below).
- Returns:
- A dictionary with the following keys:
- 'class_map' (list of dict): List of per-class and overall metric entries, each a dictionary containing:
- 'map' (float): Mean average precision at IoU 0.50.
- 'precision' (float): Macro-averaged precision at the best F1-score.
- 'recall' (float): Macro-averaged recall at the best F1-score.
- Return type:
dict
Notes
- Uses COCO-style evaluation results (precision and scores arrays).
- Filters out classes with NaN results in any metric.
- The best F1-score across recall thresholds is used to select macro precision and recall.
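A sketch reading the property after a full run; the paths are placeholders, and extra_calc=True is an assumption here:

```python
from faster_coco_eval import COCO, COCOeval_faster

cocoGt = COCO("instances_val2017.json")     # placeholder ground-truth file
cocoDt = cocoGt.loadRes("detections.json")  # placeholder detections file

cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType="bbox", extra_calc=True)
cocoEval.run()

metrics = cocoEval.extended_metrics  # property access, no parentheses
for entry in metrics["class_map"]:   # per-class entries plus an overall entry
    print(entry)
```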
- math_matches()
Analyze matched detections and ground truths to assign true-positive, false-positive, and false-negative flags, and update the detection and ground-truth annotations in place.
- Returns:
None
- run()
Wrapper function that runs the evaluation.
Calls evaluate(), accumulate(), and summarize() in sequence.
- Returns:
None
- property stats_as_dict
Return the evaluation statistics as a dictionary with descriptive labels.
- Returns:
Dictionary mapping metric names to their values.
- Return type:
dict[str, float]
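A sketch printing the labeled statistics after a run (placeholder paths):

```python
from faster_coco_eval import COCO, COCOeval_faster

cocoGt = COCO("instances_val2017.json")     # placeholder ground-truth file
cocoDt = cocoGt.loadRes("detections.json")  # placeholder detections file

cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType="bbox")
cocoEval.run()

for name, value in cocoEval.stats_as_dict.items():
    print(f"{name}: {value:.3f}")  # descriptive metric label -> value
```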
- summarize()
Summarize and finalize the statistics of the evaluation.
- Returns:
None