-
by @LawVP-SP
Replies: 16 comments
-
Hello @LawVP-SP, for the visdrone results, COCO-pretrained models are fine-tuned on the Visdrone dataset with all 10 classes, ignoring the `ignore` and `other` classes. Check this script for more detail on the visdrone preprocessing: https://github.com/fcakyon/sahi-benchmark/blob/main/visdrone/visdrone_to_coco.py. Fine-tunings are performed for 24 epochs. All other training details can be found in the config files under https://github.com/fcakyon/sahi-benchmark/tree/main/mmdet_configs
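For orientation, a fine-tuning config along those lines would roughly look like the sketch below (mmdetection 2.x conventions; the base file, checkpoint path, and field values here are illustrative — the actual configs are in the linked `mmdet_configs` directory):

```python
# Sketch of an mmdetection 2.x fine-tuning config for the 10 Visdrone classes.
# File names and paths here are illustrative; see the repo's mmdet_configs for
# the real training details.
_base_ = ["./tood_r50_fpn_1x_coco.py"]  # hypothetical COCO base config

classes = ("pedestrian", "people", "bicycle", "car", "van",
           "truck", "tricycle", "awning-tricycle", "bus", "motor")

# the detection head must predict 10 classes instead of COCO's 80
model = dict(bbox_head=dict(num_classes=10))

data = dict(
    train=dict(classes=classes),
    val=dict(classes=classes),
    test=dict(classes=classes),
)

# fine-tune for 24 epochs starting from a COCO-pretrained checkpoint
runner = dict(type="EpochBasedRunner", max_epochs=24)
load_from = "checkpoints/tood_r50_fpn_1x_coco.pth"  # hypothetical path
```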
-
@fcakyon Thank you very much for the clarifications. Could you publish the json files used to obtain these results (mAP)? As far as I can see, you use the following classes: CLASSES = ("pedestrian", "people", "bicycle", "car", "van", "truck", "tricycle", "awning-tricycle", "bus", "motor"), but these classes are not included in COCO; see the following list: https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
-
@LawVP-SP do you mean the dataset json files or the results json files?
-
@LawVP-SP we did not provide any results on COCO classes in the paper. We provide the results for the 10 Visdrone classes listed above.
-
@fcakyon The results json files. It would be interesting to have them in order to filter the mAP obtained by each class.
-
@LawVP-SP if you tell me which model and dataset you are interested in, I can upload the class-wise evaluation results for it 👍 (together with the results json if you need it)
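For anyone who wants the per-class breakdown themselves once the results json is up: a COCO-format results file is just a JSON list of detections, each with a `category_id`, so grouping by category takes a few lines of stdlib Python (the sample detections below are made up for illustration):

```python
import json
from collections import defaultdict

# A COCO-format results file is a JSON list of detections like these
# (the entries below are made-up examples, not real benchmark output).
results_json = json.dumps([
    {"image_id": 1, "category_id": 4, "bbox": [10, 20, 50, 30], "score": 0.91},
    {"image_id": 1, "category_id": 1, "bbox": [5, 5, 12, 30], "score": 0.75},
    {"image_id": 2, "category_id": 4, "bbox": [40, 60, 45, 25], "score": 0.83},
])

detections = json.loads(results_json)

# group detections by class so each class can be inspected or evaluated separately
by_class = defaultdict(list)
for det in detections:
    by_class[det["category_id"]].append(det)

for cat_id, dets in sorted(by_class.items()):
    print(f"category {cat_id}: {len(dets)} detections")
```

With `pycocotools`, loading the full results file via `COCO.loadRes` against the GT json and running `COCOeval` should also give per-class precision arrays directly, without manual filtering.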
-
Thank you very much @fcakyon. I would greatly appreciate it if you could upload to the repository both the json of the GTs from the Visdrone dataset and the json of the annotations obtained by the models (FCOS, VFNet and TOOD) after using your proposal.
-
@LawVP-SP I will try my best to upload all the results json files within a week (a bit busy with deadlines currently)
-
@IvanGarcia7 you can find the fine-tuning configs here: https://github.com/fcakyon/sahi-benchmark/tree/main/mmdet_configs. Before fine-tuning, you can slice your COCO-formatted dataset using sahi slicing utilities (https://github.com/obss/sahi/blob/main/docs/slicing.md) or use a crop augmentation during training (https://github.com/fcakyon/sahi-benchmark/blob/9477519fec806cfee7d95d0c168cd00ca7a928c6/mmdet_configs/xview_tood/tood_crop_300_500_cls_60.py#L91). After fine-tuning, use sahi inference: https://github.com/obss/sahi#framework-agnostic-slicedstandard-prediction
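As a rough illustration of what the slicing step does (a simplified sketch, not sahi's actual implementation — use the `sahi.slicing` utilities from the linked docs for real work): each image is covered by fixed-size windows that overlap so that objects on slice borders are not lost.

```python
def compute_slice_boxes(img_w, img_h, slice_size=512, overlap_ratio=0.2):
    """Return [x_min, y_min, x_max, y_max] windows covering the image.

    Simplified sketch of slicing-aided preprocessing: consecutive windows
    overlap by `overlap_ratio`, and windows touching the right/bottom edge
    are shifted back inside the image so every slice has full size.
    """
    step = int(slice_size * (1 - overlap_ratio))
    boxes = []
    y = 0
    while True:
        y_max = min(y + slice_size, img_h)
        x = 0
        while True:
            x_max = min(x + slice_size, img_w)
            boxes.append([max(0, x_max - slice_size),
                          max(0, y_max - slice_size), x_max, y_max])
            if x_max >= img_w:
                break
            x += step
        if y_max >= img_h:
            break
        y += step
    return boxes

# a 1000x800 image with 512px slices and 20% overlap -> 3x2 grid of slices
slices = compute_slice_boxes(1000, 800, slice_size=512, overlap_ratio=0.2)
print(len(slices), slices[0])
```

In practice the sliced images plus their remapped COCO annotations are what gets fed to fine-tuning; sahi's `slice_coco` utility handles translating the annotation boxes into slice coordinates.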
-
@fcakyon
-
@IvanGarcia7 yes. You can check this script to see how we converted Visdrone into COCO format: https://github.com/fcakyon/sahi-benchmark/blob/main/visdrone/visdrone_to_coco.py
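For a feel of what that script does, here is a minimal sketch of the conversion (assuming the standard VisDrone-DET annotation line format `x,y,w,h,score,category,truncation,occlusion`, where category 0 is the ignored region and 11 is `others`, both of which get dropped — check the linked script for the authoritative logic):

```python
# Minimal sketch of VisDrone-DET -> COCO annotation conversion; see the
# linked visdrone_to_coco.py for the full script.
VISDRONE_CLASSES = ("pedestrian", "people", "bicycle", "car", "van",
                    "truck", "tricycle", "awning-tricycle", "bus", "motor")

def parse_visdrone_line(line, image_id, ann_id):
    """Convert one 'x,y,w,h,score,category,truncation,occlusion' line into a
    COCO annotation dict, or return None for ignored (0) / others (11)."""
    x, y, w, h, _score, category, _trunc, _occ = map(int, line.split(",")[:8])
    if category in (0, 11):  # skip 'ignored regions' and 'others'
        return None
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category,   # ids 1..10 map onto VISDRONE_CLASSES
        "bbox": [x, y, w, h],      # COCO uses [x, y, width, height]
        "area": w * h,
        "iscrowd": 0,
    }

lines = ["684,8,273,116,0,0,0,0",   # an ignored region -> dropped
         "406,119,61,35,1,4,0,0"]   # a 'car' box -> kept
anns = [parse_visdrone_line(l, image_id=1, ann_id=i) for i, l in enumerate(lines)]
anns = [a for a in anns if a is not None]
print(anns)
```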
-
Great, thank you very much!
-
@IvanGarcia7 @LawVP-SP @lightyear12 all results and evaluation files + class-wise error plots for the xview experiments are now publicly available at https://github.com/fcakyon/sahi-benchmark#xview-results. Will also upload all visdrone results soon 👍
-
The visdrone results files are also available now: https://github.com/fcakyon/sahi-benchmark/blob/main/README.md#visdrone-results
-
Did the inference results get better when you only did SF (slicing aided fine-tuning) and did not do SAHI?
-
@youngjae-avikus yes, it improves the results slightly, but not as much as with SAHI; check Table 1 and Table 2 in the official ICIP22 paper: https://arxiv.org/pdf/2202.06934.pdf