feat(metrics): ✨ initial mean average recall metrics added #1656
Conversation
Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>
Force-pushed from c77b625 to 87994dc
Thanks for matching the structure of prior metrics - it'll be much easier to review.
- Add max_detections parameter [1, 10, 100] for AR@k evaluation
- Update recall calculation to use standard object detection metrics
- Calculate maximum recall across different k values
- Use standard area thresholds for object size categories (32², 96²)
Force-pushed from d3af692 to 6274ea9
Son of a gun, you actually did it from first principles 😄
I made a handful of docs-related changes and removed a bit of dead code. A few things feel slightly off about the algorithm, and I left comments - happy to have a call or discussion to figure out how it should work (I may well be wrong).
```python
pred_mask = (
    predictions.class_id == class_id
    if not self._class_agnostic
    else slice(None)
)
```
I learnt something today, thank you! This might be a key to a more elegant `class_agnostic` implementation. However, I think this isn't needed, as when we're in class-agnostic mode we're setting `class_id` to `-1`.
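As a side note, the two approaches discussed above really do select the same thing: indexing a NumPy array with `slice(None)` keeps every column, just like an all-`True` boolean mask. A minimal sketch (the array and class IDs here are made up for illustration, not taken from the PR):

```python
import numpy as np

# Toy IoU matrix: rows are targets, columns are predictions.
iou = np.array([[0.9, 0.2, 0.4],
                [0.1, 0.8, 0.3]])
pred_class_id = np.array([0, 1, 0])

# Class-agnostic: slice(None) selects every column, same as iou[:, :].
assert np.array_equal(iou[:, slice(None)], iou)

# Class-aware: a boolean mask keeps only columns for the chosen class.
pred_mask = pred_class_id == 0
assert iou[:, pred_mask].shape == (2, 2)
```

So `slice(None)` and the `class_id == -1` sentinel (where every prediction carries `-1`, making the comparison all-`True`) are interchangeable ways to keep all predictions.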
```python
class_iou = iou[target_mask]
if any(pred_mask):
    class_iou = class_iou[:, pred_mask]
```
General comment: I don't see the sorting of predictions by confidence anywhere. I believe we should evaluate the top-1, top-10, top-100 most confident predictions.
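To illustrate the point: the usual AR@k convention is to sort predictions by descending confidence and keep only the first k before matching. A minimal sketch (function name and arrays are illustrative, not the library's API):

```python
import numpy as np

def top_k_by_confidence(confidence: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k most confident predictions, highest first."""
    order = np.argsort(-confidence)  # negate for descending sort
    return order[:k]

confidence = np.array([0.3, 0.95, 0.6, 0.8])
print(top_k_by_confidence(confidence, 2))  # -> [1 3]
```

The same index array can then be used to reorder the IoU matrix's prediction columns before thresholding at each max_detections value.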
I got an implementation out, and it's very similar to Torchmetrics, typically diverging by ~0.02 or so, up to 0.1. In hindsight, my thinking about what max_detections is was incorrect yesterday. mAR@1 only considers 1 detection but does NOT drop any targets, so the recall is almost always very small. Getting mAR > 0.5 is pretty tough - I bet the values we saw yesterday indicate that mAR@1 was too high.
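The point about mAR@1 can be sketched in a few lines: the denominator stays at the full target count even when only the single most confident prediction is considered. This toy version ignores one-to-one matching for brevity (real COCO-style evaluation matches each prediction to at most one target), and all names and data here are illustrative:

```python
import numpy as np

def recall_at_k(iou: np.ndarray, k: int, iou_threshold: float = 0.5) -> float:
    """iou: (num_targets, num_predictions), columns sorted by confidence."""
    kept = iou[:, :k]                          # consider only the top-k predictions
    matched = (kept >= iou_threshold).any(axis=1)
    return matched.sum() / iou.shape[0]        # targets are never dropped

# 3 targets, 3 predictions (columns already ordered by confidence):
iou = np.array([[0.9, 0.1, 0.0],
                [0.1, 0.8, 0.0],
                [0.0, 0.0, 0.7]])
print(recall_at_k(iou, k=1))  # -> 0.333..., at most one target can be recalled
print(recall_at_k(iou, k=3))  # -> 1.0
```

With k=1 only one target can ever be matched, so recall is capped at 1/num_targets regardless of how good the detector is.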
My filtering is implemented in #1661, in mean_average_precision.py.
Description
Initial mean average recall metrics added. Fixes #1583.
How has this change been tested? Please provide a test case or example of how you tested the change.
Docs