
score of FBoW is always 1 #4

Closed
IaroslavS opened this issue Jul 17, 2021 · 8 comments
Labels
bug Something isn't working

@IaroslavS

Hi!
I've got a problem with computing the score via FBoW.
When I run FBoW on custom images (not a sequence), the BoW similarity score between two random images is 1.
To find out why, I compiled FBoW with the line

std::cout << "inter score = " << vi * wi << "\n";

at https://github.com/OpenVSLAM-Community/FBoW/blob/master/src/fbow.cpp#L432

Then I saw:

inter score = 26.5767
inter score = 30.6879
inter score = 92.7026
inter score = 99.7655
inter score = 39.6566
inter score = 34.9797
inter score = 24.8903
inter score = 62.8143

This means the accumulated similarity score goes well above 100. But at the end of the score calculation it is clamped from above to 1:
https://github.com/OpenVSLAM-Community/FBoW/blob/master/src/fbow.cpp#L458-L459

Is it correct to cap the score at 1?

@ymd-stella
Contributor

Is there a set of images that you have tried and gotten results that don't match your intuition? Can you provide those?

In addition, I think we need to do some benchmarking. Do you know of any open data sets that could be used for evaluation?

@IaroslavS
Author

IaroslavS commented Jul 19, 2021

Is there a set of images that you have tried and gotten results that don't match your intuition? Can you provide those?

In addition, I think we need to do some benchmarking. Do you know of any open data sets that could be used for evaluation?

Yes, I've run it on a dataset, but that dataset won't be public for a few months. I ran DBoW2 on pairs of images and got scores from 0.003428 to 0.154669. Even the most similar images have a DBoW2 score of only 0.1 to 0.15, rather than 0.9 or so.
With FBoW, however, the similarity score is always 1, telling us that every pair of images is absolutely similar, i.e. identical, which is incorrect.

@ymd-stella
Contributor

@IaroslavS
I think this has been resolved by 560c339. What do you think?

@ymd-stella ymd-stella added the bug Something isn't working label Jul 20, 2021
@IaroslavS
Author

@IaroslavS
I think this has been resolved by 560c339. What do you think?

Now that is much better. The similarity score ranges from 0.002500 to 0.126358 on the same dataset (DBoW2 scored from 0.003428 to 0.154669). The quality of FBoW is almost the same as DBoW2.
By the way, the score value itself doesn't matter to me; I take the pair of images with the biggest score among all pairs.

Thanks for resolving the problem. In a month or two (maybe sooner) I'll post a link to our paper, which compares the quality and runtime of different approaches to visual localization, image retrieval, and so on, including DBoW2 and FBoW.

@ymd-stella
Contributor

In a month or two (maybe sooner) I'll post a link to our paper, which compares the quality and runtime of different approaches to visual localization, image retrieval, and so on, including DBoW2 and FBoW.

The benchmarking issue will be addressed in #5. Thank you for your contributions.

@decamargo10

Hey @IaroslavS, I would be really interested to read the paper. If you could provide me a link, that would be great. Thank you!

@IaroslavS
Author

Hey @IaroslavS, I would be really interested to read the paper. If you could provide me a link, that would be great. Thank you!

Hi! I'm sorry, but our paper was rejected and we are looking for other conferences and journals. You can have a look at some localization results on our HPointLoc dataset here: https://github.com/cds-mipt/HPointLoc.

@decamargo10

Hey @IaroslavS, I would be really interested to read the paper. If you could provide me a link, that would be great. Thank you!

Hi! I'm sorry, but our paper was rejected and we are looking for other conferences and journals. You can have a look at some localization results on our HPointLoc dataset here: https://github.com/cds-mipt/HPointLoc.

That looks very interesting, thank you! Good luck with your paper!
