Contradiction between paper results and evaluation results for KITTI #44
Hi Jena, did you manage to reproduce the results on the KITTI dataset? Zan
Nope. Still stuck where I was, unfortunately.
I have tried retraining the whole model on FlyingThings3D, and I get 0.196 m on KITTI with the ground removed, which is approximately the same as with the provided pretrained model.
I see. Thanks for letting me know. I get 0.17 m on KITTI with the ground removed. However, according to the paper, the EPE is 0.211 with ground points and 0.122 without ground points. So since both of us evaluated without ground points, both of our results are worse than those reported in the paper. Or am I missing something?
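For reference, here is a minimal sketch of how the average 3D EPE we are comparing is typically computed (mean Euclidean distance between predicted and ground-truth flow vectors). The function and array names are assumptions for illustration, not code from the repository's evaluation script:

```python
import numpy as np

def epe_3d(pred_flow, gt_flow):
    # Average 3D end-point error: mean Euclidean norm of the
    # per-point difference between predicted and ground-truth flow.
    # pred_flow, gt_flow: (N, 3) arrays of scene flow in meters.
    return np.mean(np.linalg.norm(pred_flow - gt_flow, axis=1))
```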
Hi,
For the KITTI evaluation, you state: "Note that the model used for evaluation is in model_concat_upsa_eval_kitti.py instead of the model used for training. The average 3D EPE result is approximately 0.175m, better than what was reported in the paper." However, this holds only for the dataset with ground points, whereas your preprocessed dataset here is without ground points, for which the EPE reported in the paper is 0.122, much lower (better) than the EPE the KITTI evaluation script produces. Please help me resolve this discrepancy: does the dataset actually include ground points, or is there an issue with the KITTI evaluation script?
Best Regards