# Faster-COCO-Eval

## Why should you replace pycocotools with faster-coco-eval?
| Aspect | pycocotools | faster-coco-eval |
|---|---|---|
| Support & Development | Outdated and not actively maintained. Issues and incompatibilities arise with new releases. | Actively maintained, continuously evolving, and regularly updated with new features and bug fixes. |
| Transparency & Reliability | Lacks comprehensive testing, making updates risky and results less predictable. | Emphasizes extensive test coverage and code quality, ensuring trustworthy and reliable results. |
| Performance | Significantly slower, especially on large datasets or distributed workloads. | Several times faster due to C++ optimizations and modern algorithms. |
| Functionality | Limited to basic COCO-format evaluation. | Offers extended metrics, support for new IoU types, compatibility with more datasets (e.g., CrowdPose, LVIS), advanced visualizations, and integration with PyTorch/TorchVision (see the sketch under Usage below). |
By choosing faster-coco-eval, you benefit from:

- Reliability and confidence in your results
- High processing speed
- Modern functionality and support for new tasks
- An active community and prompt responses to your requests

Switch to faster-coco-eval and experience a new standard for working with COCO annotations!
## Install

### Basic installation, identical to pycocotools

```bash
pip install faster-coco-eval
```

### Additional visualization options

Only one additional package is needed: opencv-python-headless.

```bash
pip install faster-coco-eval[extra]
```

### Conda install

```bash
conda install conda-forge::faster-coco-eval
```
## Basic usage

```python
import faster_coco_eval

# Replace pycocotools with faster_coco_eval
faster_coco_eval.init_as_pycocotools()

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno_json = "annotations.json"  # path to your ground-truth annotations
pred_json = "predictions.json"  # path to your detection results

anno = COCO(str(anno_json))  # init annotations api
pred = anno.loadRes(str(pred_json))  # init predictions api (must pass string, not Path)

val = COCOeval(anno, pred, "bbox")
val.evaluate()
val.accumulate()
val.summarize()
```
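The shim above makes existing pycocotools code work unchanged. The library can also be used through its own classes; the following is a minimal sketch assuming the package's native `COCO` and `COCOeval_faster` interface (the `stats_as_dict` attribute is taken from the project's documentation and may differ between versions):

```python
from faster_coco_eval import COCO, COCOeval_faster

anno = COCO("annotations.json")          # placeholder path to ground truth
pred = anno.loadRes("predictions.json")  # placeholder path to predictions

val = COCOeval_faster(anno, pred, "bbox")
val.evaluate()
val.accumulate()
val.summarize()

# Assumed convenience attribute: metrics keyed by name instead of a bare array.
print(val.stats_as_dict)
```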
## Faster-COCO-Eval base

This package wraps Facebook's C++ implementation of the COCO-eval operations found in the pycocotools package. The C++ backend greatly speeds up evaluation of the COCO AP metrics, especially when dealing with a high number of instances per image.
## Comparison

For our use case we evaluated a test dataset of 5000 images from the COCO val set. Testing was carried out using the mmdetection framework and its eval_metric.py script; the results are presented below. A visualization of the testing is available in colab_example.ipynb in the examples/comparison directory.
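If you want to reproduce a comparison like this on your own data, a rough approach is to time the full evaluate/accumulate/summarize cycle, once with the faster-coco-eval shim enabled and once with stock pycocotools (file paths below are placeholders):

```python
import time

import faster_coco_eval

# Comment out the next line to benchmark stock pycocotools instead.
faster_coco_eval.init_as_pycocotools()

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno = COCO("annotations.json")          # placeholder path
pred = anno.loadRes("predictions.json")  # placeholder path

for iou_type in ("bbox", "segm"):  # "segm" requires masks in the predictions
    val = COCOeval(anno, pred, iou_type)
    start = time.perf_counter()
    val.evaluate()
    val.accumulate()
    val.summarize()
    print(f"{iou_type}: {time.perf_counter() - start:.3f} s")
```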
### Summary for 5000 images

| Type | faster-coco-eval (s) | pycocotools (s) | Speedup (×) |
|---|---|---|---|
| bbox | 5.812 | 22.72 | 3.909 |
| segm | 7.413 | 24.434 | 3.296 |
## Features

The library provides not only validation functions but also error-visualization functions, including visualization of errors directly on the image. See the examples and the Wiki for more detail; a sketch follows.
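As a hedged illustration only: the class and argument names below (`PreviewResults`, `iou_tresh`, `display_tp_fp_fn`) are drawn from the project's example notebooks and may change between versions, so treat this as a sketch and check the Wiki for the current API. It requires the `[extra]` install for image rendering.

```python
from faster_coco_eval import COCO
from faster_coco_eval.extra import PreviewResults  # names assumed from project examples

cocoGt = COCO("annotations.json")            # placeholder path to ground truth
cocoDt = cocoGt.loadRes("predictions.json")  # placeholder path to predictions

# Match predictions to ground truth at IoU 0.5 (argument names assumed; see Wiki).
preview = PreviewResults(cocoGt, cocoDt, iouType="bbox", iou_tresh=0.5)

# Draw true positives, false positives, and false negatives on a few images.
preview.display_tp_fp_fn(data_folder="images/", image_ids=cocoGt.getImgIds()[:4])
```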
## Usage

Code examples for using the library are available on the Wiki and in the examples directory.
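To illustrate the PyTorch integration mentioned in the comparison table: recent torchmetrics releases can select faster-coco-eval as the backend for mean average precision. A minimal sketch, assuming a torchmetrics version that supports the `backend` argument:

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# backend="faster_coco_eval" is supported in recent torchmetrics releases;
# drop the argument to fall back to the default backend.
metric = MeanAveragePrecision(iou_type="bbox", backend="faster_coco_eval")

preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),  # xyxy format (the default)
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
}]
target = [{
    "boxes": torch.tensor([[12.0, 11.0, 48.0, 52.0]]),
    "labels": torch.tensor([1]),
}]

metric.update(preds, target)
print(metric.compute()["map"])  # overall mAP on the toy example
```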
## Update history

Available in history.md.
## License

The original module was licensed under Apache 2.0, and this project keeps the same license. Distributed under the Apache License 2.0; see LICENSE for more information.
## Citation

If you use this benchmark in your research, please cite this project.

```bibtex
@article{faster-coco-eval,
  title  = {{Faster-COCO-Eval}: Faster interpretation of the original COCOEval},
  author = {MiXaiLL76},
  year   = {2024}
}
```