Intersection F1 calculation code #80
Hi, this is not an "obscure" clone but the official one, with the people who developed it still answering questions in their free time. It is simply a question of ownership, since the package was developed at Audio Analytic. They transferred it to the DCASE-REPO so the community could still access it after the acquisition (without having to go to an 'obscure' repo, as you said). As for your question about the True Positive Ratio, could you please move it to the other repo? I remember conversations about this problem since we ran into the same one; I don't recall what the final word was (I'll ask around). Best,
Hi, it also turns out that what I was trying to say in the previous post was closer to "stale" than "obscure". I'm not a native speaker, so excuse my English. Regarding the issue I was facing, replacing the line
with
seems to resolve the issue. Best.
Hi, if we do that, we should probably add a comment explaining why we skip the last value (the one added by PSDS). Maybe writing an explicit `if` check is cleaner? Could you please open the issue on the other repo so we can discuss it there and open a pull request?
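For illustration, something along these lines could work. This is a minimal sketch, not code from either repo: `drop_psds_endpoint` is a hypothetical helper, and it assumes (as discussed above) that the last element of the curve is the synthetic point appended by PSDS rather than a real operating point.

```python
import numpy as np

def drop_psds_endpoint(curve: np.ndarray) -> np.ndarray:
    """Drop the final operating point that psds_eval appends to a curve.

    Hypothetical helper: it makes the intent of the ``[:-1]`` slice
    explicit with a guard and a comment, as suggested above.
    """
    if curve.size > 1:
        # Skip the last value: it is the synthetic endpoint added by
        # PSDS, not an operating point computed from the predictions.
        return curve[:-1]
    return curve
```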
I just created the issue on the psds_eval repo, where I also mentioned the logic behind the [:-1] slicing. While we're on the subject of intersection F1 calculation, another issue that actually concerns this repo is the following line:
I don't really understand why the F1 scores calculated at different threshold values are being averaged here. What would that even mean? Is there, for example, a reference in the literature for this measure?
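To make the question concrete, here is a minimal sketch of what that line appears to compute, if I read it correctly. All names and numbers below are purely illustrative, not taken from the repo:

```python
import numpy as np

# Hypothetical illustration of the questioned computation: an F1 score
# is computed at each decision threshold, and the scores are then
# averaged across thresholds. The counts below are made up.
thresholds = [0.1, 0.3, 0.5, 0.7, 0.9]
counts = {  # threshold -> (tp, fp, fn), purely illustrative numbers
    0.1: (90, 60, 10), 0.3: (80, 30, 20), 0.5: (70, 15, 30),
    0.7: (50, 5, 50), 0.9: (20, 1, 80),
}

f1_per_threshold = [
    2 * tp / (2 * tp + fp + fn)
    for tp, fp, fn in (counts[t] for t in thresholds)
]

# Averaging mixes F1 values from different operating points, which is
# exactly what the question above is about.
mean_f1 = float(np.mean(f1_per_threshold))
print(mean_f1)
```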
Since the PSDS Eval package has been removed from GitHub, along with support for it, is there a plan to have separate standalone evaluation code for this metric in the repo, without having to import from a somewhat obscure psds_eval package that is no longer on GitHub?
I was getting NaNs in my per-class F1 scores, so I had to dig through the psds_eval package, only to discover that they are due to the line:
where num_gts is computed in a rather roundabout way that assumes tp_ratio is never zero(!), so the false negatives and the F1 of every class with zero TPs become NaN.
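Here is a minimal sketch of the failure mode, as I understand it. The variable names are illustrative rather than the actual psds_eval identifiers, but the arithmetic is the one described above: num_gts is recovered by dividing tp by tp_ratio, which is 0/0 for any class with zero TPs.

```python
import numpy as np

tp = np.array([10.0, 0.0])        # second class: a rare class, zero TPs
tp_ratio = np.array([0.5, 0.0])   # tp / num_gts, hence also zero for it

with np.errstate(invalid="ignore"):
    num_gts = tp / tp_ratio       # 0 / 0 -> nan for the zero-TP class
    fn = num_gts - tp             # nan propagates into false negatives
    f1 = 2 * tp / (2 * tp + fn)   # ... and into the per-class F1

print(f1)  # [0.66666667        nan]
```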
This behaviour, by the way, can easily lead to a significant overestimation of the macro intersection F1 computed with this code: if the model's output for a rare class yields zero TPs, the macro F1 in this package ignores that class and averages over the remaining ones.
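The averaging consequence in a toy example (again with made-up numbers, not output from the package): if the NaN of the zero-TP class is silently skipped, the macro average is taken over the surviving classes only.

```python
import numpy as np

per_class_f1 = np.array([0.8, 0.9, np.nan])  # nan from the zero-TP class

# Skipping the nan averages over two classes only -> 0.85, overestimated.
print(np.nanmean(per_class_f1))

# Counting the rare class as F1 = 0 gives the arguably intended value.
print(np.mean(np.nan_to_num(per_class_f1)))  # ~0.567
```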
So I think it would be helpful to have more transparent, clean, standalone code for the intersection-based F1 in the repo.
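As a starting point for discussion, here is a rough single-class sketch of what such standalone code might look like. It is a simplification of the PSDS-style intersection criteria (DTC/GTC), not the psds_eval implementation, and the function name, parameters, and counting convention (TPs counted on the ground-truth side) are all assumptions:

```python
def intersection_f1(det, gt, dtc=0.5, gtc=0.5):
    """Single-class intersection-based F1 sketch (not psds_eval).

    det, gt: lists of (onset, offset) tuples in seconds.
    A detection is kept if at least ``dtc`` of its duration intersects
    ground truth; a ground-truth event counts as detected if at least
    ``gtc`` of its duration is covered by detections.
    """
    def overlap(a, b):
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    # Detection tolerance: fraction of each detection inside any GT event.
    kept_dets = sum(
        1 for d in det
        if sum(overlap(d, g) for g in gt) >= dtc * (d[1] - d[0])
    )
    fp = len(det) - kept_dets

    # Ground-truth coverage: fraction of each GT event covered by detections.
    tp = sum(
        1 for g in gt
        if sum(overlap(d, g) for d in det) >= gtc * (g[1] - g[0])
    )
    fn = len(gt) - tp  # num_gts comes directly from the annotations,
                       # so zero-TP classes yield F1 = 0, never NaN

    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0


det = [(0.0, 2.0), (5.0, 6.0)]
gt = [(0.5, 2.5), (10.0, 11.0)]
print(intersection_f1(det, gt))  # 0.5: one TP, one FP, one missed event
```

Note that num_gts here is taken directly from the annotations rather than reconstructed from a TP ratio, which avoids the NaN issue described above.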