General Place Recognition papers list. Search among 170 papers!
- List of datasets and papers
| Topic | Name | Year | Image type | Environment | Illumination | Viewpoint | Ground Truth | Labels | Extra Information |
|---|---|---|---|---|---|---|---|---|---|
| Generic | New College and City Centre | 2008 | RGB | Outdoor | slight | ✔️ | ✔️ | ✔️ | GPS |
| | New College Vision and Laser | 2009 | Gray. | Outdoor | slight | ✔️ | ✔️ | | GPS, IMU, LiDAR |
| | Rawseeds | 2006 | RGB | Indoor/Outdoor | ✔️ | ✔️ | | | GPS, LiDAR |
| | Ford Campus | 2011 | RGB | Urban | slight | ✔️ | | | GPS, IMU, LiDAR |
| | Malaga Parking 6L | 2009 | RGB | Outdoor | ✔️ | | | | GPS, IMU, LiDAR |
| | KITTI Odometry | 2012 | Gray./RGB | Urban | slight | ✔️ | | | GPS, IMU, LiDAR |
| Long-term | St. Lucia | 2010 | RGB | Urban | ✔️ | slight | | | GPS |
| | COLD | 2009 | RGB | Indoor | ✔️ | ✔️ | ✔️ | ✔️ | LiDAR |
| | Oxford RobotCar | 2017 | RGB | Urban | ✔️ | ✔️ | | | GPS, IMU, LiDAR |
| | Gardens Point Walking | 2014 | RGB | Indoor/Outdoor | ✔️ | ✔️ | | | - |
| | MSLS | 2020 | RGB | Urban | ✔️ | ✔️ | ✔️ | | GPS |
| Across seasons | Nurburgring and Alderley | 2012 | RGB | Urban | ✔️ | ✔️ | ✔️ | | - |
| | Nordland | 2013 | RGB | Outdoor | ✔️ | ✔️ | | | GPS |
| | CMU | 2011 | RGB | Urban | ✔️ | ✔️ | ✔️ | | GPS |
| | Freiburg (FAS) | 2014 | RGB | Urban | ✔️ | ✔️ | ✔️ | | GPS |
| | VPRiCE | 2015 | RGB | Outdoor | ✔️ | ✔️ | | | - |
| RGB-D | TUM RGB-D | 2012 | RGB-D | Indoor | ✔️ | ✔️ | | | IMU |
| | Microsoft 7-Scenes | 2013 | RGB-D | Indoor | ✔️ | ✔️ | ✔️ | | - |
| | ICL-NUIM | 2014 | RGB-D | Indoor | ✔️ | ✔️ | | | - |
| Semantic | KITTI Semantic | 2019 | RGB | Urban | ✔️ | ✔️ | | | GPS, IMU, LiDAR |
| | Cityscapes | 2016 | RGB | Urban | ✔️ | ✔️ | | | GPS |
| | CSC | 2019 | RGB | Outdoor | ✔️ | ✔️ | | | LiDAR |
| Train networks | Cambridge Landmarks | 2015 | RGB | Outdoor | ✔️ | ✔️ | ✔️ | ✔️ | - |
| | Pittsburgh250k | 2013 | RGB | Urban | ✔️ | ✔️ | ✔️ | ✔️ | GPS |
| | Tokyo 24/7 | 2015 | RGB | Urban | ✔️ | ✔️ | ✔️ | | GPS |
| | SPED | 2017 | RGB | Outdoor | ✔️ | ✔️ | | | - |
| Omni-directional | New College Vision and Laser | 2009 | Gray. | Outdoor | slight | ✔️ | ✔️ | | GPS, IMU, LiDAR |
| | MOLP | 2018 | Gray./D | Outdoor | ✔️ | ✔️ | | | GPS |
| | NCLT | 2016 | RGB | Outdoor | ✔️ | ✔️ | ✔️ | | GPS, LiDAR |
| Aerial/UAV | Shopping Street 1/2 | 2018 | Gray. | Urban | slight | ✔️ | ✔️ | | - |
| | EuRoC | 2016 | Gray. | Indoor | ✔️ | ✔️ | | | IMU |
| Underwater | UWSim | 2016 | RGB | Underwater | ✔️ | | | | GPS |
| Range sensors | MulRan | 2020 | 3D point clouds | Urban | ✔️ | ✔️ | | | LiDAR, RADAR |
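The "search" promise above is easy to script. Below is a minimal, hypothetical Python sketch (the `DATASETS` rows and the `find` helper are illustrative, not part of any released tool) showing how rows of this table can be encoded and filtered by topic or sensor:

```python
# Minimal sketch (hypothetical helper, not an official toolbox):
# encode a few table rows as dicts and filter them by attribute.
DATASETS = [
    {"topic": "Generic", "name": "KITTI Odometry", "year": 2012,
     "environment": "Urban", "extra": ["GPS", "IMU", "LiDAR"]},
    {"topic": "Long-term", "name": "Oxford RobotCar", "year": 2017,
     "environment": "Urban", "extra": ["GPS", "IMU", "LiDAR"]},
    {"topic": "Range sensors", "name": "MulRan", "year": 2020,
     "environment": "Urban", "extra": ["LiDAR", "RADAR"]},
]

def find(topic=None, sensor=None):
    """Return dataset names matching an optional topic and sensor."""
    return [d["name"] for d in DATASETS
            if (topic is None or d["topic"] == topic)
            and (sensor is None or sensor in d["extra"])]

print(find(sensor="RADAR"))     # ['MulRan']
print(find(topic="Long-term"))  # ['Oxford RobotCar']
```

The same pattern extends to every column of the table (environment, illumination, viewpoint, ground truth), should you want a quick shortlist before reading the papers below.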
- S. Lowry, N. Sünderhauf, P. Newman, J. J. Leonard, D. Cox, P. Corke, and M. J. Milford, Visual place recognition: A survey, IEEE Transactions on Robotics, vol. 32, no. 1, pp. 1–19, 2015.
- X. Zhang, L. Wang, and Y. Su, Visual place recognition: A survey from deep learning perspective, Pattern Recognition, vol. 113, p. 107760, 2021.
- S. Garg, T. Fischer, and M. Milford, Where is your place, visual place recognition?, arXiv preprint arXiv:2103.06443, 2021.
- M. Zaffar, S. Garg, M. Milford, J. Kooij, D. Flynn, K. McDonald-Maier, and S. Ehsan, VPR-Bench: An open-source visual place recognition evaluation framework with quantifiable viewpoint and appearance change, International Journal of Computer Vision, vol. 129, no. 7, pp. 2136–2174, 2021.
- J. Miao, K. Jiang, T. Wen, Y. Wang, P. Jia, X. Zhao, Z. Xiao, J. Huang, Z. Zhong, and D. Yang, A survey on monocular re-localization: From the perspective of scene map representation, arXiv preprint arXiv:2311.15643, 2023.
- H. Yin, X. Xu, S. Lu, X. Chen, R. Xiong, S. Shen, C. Stachniss, and Y. Wang, A survey on global LiDAR localization: Challenges, advances and open problems, International Journal of Computer Vision, pp. 1–33, 2024.
- M. Zaffar, S. Ehsan, M. Milford, and K. McDonald-Maier, CoHOG: A light-weight, compute-efficient, and training-free visual place recognition technique for changing environments, IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1835–1842, 2020.
- D. Galvez-Lopez and J. D. Tardos, Bags of binary words for fast place recognition in image sequences, IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188–1197, 2012.
- D. Scaramuzza, Omnidirectional Camera. Boston, MA: Springer US, 2014, pp. 552–560.
- J. Jiao, H. Wei, T. Hu, X. Hu, Y. Zhu, Z. He, J. Wu, J. Yu, X. Xie, H. Huang et al., FusionPortable: A multi-sensor campus-scene dataset for evaluation of localization and mapping accuracy on diverse platforms, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 3851–3856.
- X. Chen, T. Läbe, A. Milioto, T. Röhling, O. Vysotska, A. Haag, J. Behley, and C. Stachniss, OverlapNet: Loop closing for LiDAR-based SLAM, CoRR, vol. abs/2105.11344, 2021.
- Z. Hong, Y. Petillot, A. Wallace, and S. Wang, RadarSLAM: A robust simultaneous localization and mapping system for all weather conditions, The International Journal of Robotics Research, vol. 41, no. 5, pp. 519–542, 2022.
- R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, NetVLAD: CNN architecture for weakly supervised place recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 5297–5307.
- G. Tolias, R. Sicre, and H. Jegou, Particular object retrieval with integral max-pooling of CNN activations, arXiv preprint arXiv:1511.05879, 2015.
- F. Radenovic, G. Tolias, and O. Chum, Fine-tuning CNN image retrieval with no human annotation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 7, pp. 1655–1668, 2018.
- G. Berton, C. Masone, and B. Caputo, Rethinking visual geo-localization for large-scale applications, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4878–4888.
- R. Wang, Y. Shen, W. Zuo, S. Zhou, and N. Zheng, TransVPR: Transformer-based place recognition with multi-level attention aggregation, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 13648–13657.
- M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby et al., DINOv2: Learning robust visual features without supervision, arXiv preprint arXiv:2304.07193, 2023.
- N. Keetha, A. Mishra, J. Karhade, K. M. Jatavallabhula, S. Scherer, M. Krishna, and S. Garg, AnyLoc: Towards universal visual place recognition, IEEE Robotics and Automation Letters, 2023.
- N. Piasco, D. Sidibe, V. Gouet-Brunet, and C. Demonceaux, Learning scene geometry for visual localization in challenging conditions, in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 9094–9100.
- G. Peng, Y. Yue, J. Zhang, Z. Wu, X. Tang, and D. Wang, Semantic reinforced attention learning for visual place recognition, in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 13415–13422.
- A. Oertel, T. Cieslewski, and D. Scaramuzza, Augmenting visual place recognition with structural cues, IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5534–5541, 2020.
- J. Komorowski, M. Wysoczanska, and T. Trzcinski, MinkLoc++: LiDAR and monocular image fusion for place recognition, in 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021, pp. 1–8.
- A. J. Lee and A. Kim, EventVLAD: Visual place recognition with reconstructed edges from event cameras, in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 2247–2252.
- R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas, PointNet: Deep learning on point sets for 3D classification and segmentation, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 77–85.
- C. Choy, J. Gwak, and S. Savarese, 4D spatio-temporal ConvNets: Minkowski convolutional neural networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3075–3084.
- G. Kim and A. Kim, Scan Context: Egocentric spatial descriptor for place recognition within 3D point cloud map, in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4802–4809.
- G. Kim, S. Choi, and A. Kim, Scan Context++: Structural place recognition robust to rotation and lateral variations in urban environments, IEEE Transactions on Robotics, vol. 38, no. 3, pp. 1856–1874, 2022.
- Y. Wang, Z. Sun, C.-Z. Xu, S. E. Sarma, J. Yang, and H. Kong, LiDAR Iris for loop-closure detection, in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 5769–5775.
- X. Xu, S. Lu, J. Wu, H. Lu, Q. Zhu, Y. Liao, R. Xiong, and Y. Wang, RING++: Roto-translation invariant gram for global localization on a sparse scan map, IEEE Transactions on Robotics, vol. 39, no. 6, pp. 4616–4635, 2023.
- C. Yuan, J. Lin, Z. Liu, H. Wei, X. Hong, and F. Zhang, BTC: A binary and triangle combined descriptor for 3-D place recognition, IEEE Transactions on Robotics, vol. 40, pp. 1580–1599, 2024.
- M. Jiang, Y. Wu, T. Zhao, Z. Zhao, and C. Lu, PointSIFT: A SIFT-like network module for 3D point cloud semantic segmentation, arXiv preprint arXiv:1807.00652, 2018.
- M. A. Uy and G. H. Lee, PointNetVLAD: Deep point cloud based retrieval for large-scale place recognition, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 4470–4479.
- Z. Liu, S. Zhou, C. Suo, P. Yin, W. Chen, H. Wang, H. Li, and Y. Liu, LPD-Net: 3D point cloud learning for large-scale place recognition and environment analysis, in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2831–2840.
- Y. Xia, Y. Xu, S. Li, R. Wang, J. Du, D. Cremers, and U. Stilla, SOE-Net: A self-attention and orientation encoding network for point cloud based place recognition, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11348–11357.
- Z. Fan, Z. Song, W. Zhang, H. Liu, J. He, and X. Du, RPR-Net: A point cloud-based rotation-aware large scale place recognition network, in European Conference on Computer Vision. Springer, 2022, pp. 709–725.
- Y. You, Y. Lou, R. Shi, Q. Liu, Y.-W. Tai, L. Ma, W. Wang, and C. Lu, PRIN/SPRIN: On extracting point-wise rotation invariant features, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 9489–9502, 2022.
- J. Komorowski, MinkLoc3D: Point cloud based large-scale place recognition, in 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1789–1798.
- K. Vidanapathirana, M. Ramezani, P. Moghadam, S. Sridharan, and C. Fookes, LoGG3D-Net: Locally guided global descriptor learning for 3D place recognition, in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 2215–2221.
- P. Yin, F. Wang, A. Egorov, J. Hou, Z. Jia, and J. Han, Fast sequence-matching enhanced viewpoint-invariant 3-D place recognition, IEEE Transactions on Industrial Electronics, vol. 69, no. 2, pp. 2127–2135, 2022.
- L. Luo, S. Zheng, Y. Li, Y. Fan, B. Yu, S.-Y. Cao, J. Li, and H.-L. Shen, BEVPlace: Learning LiDAR-based place recognition using bird's eye view images, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 8700–8709.
- K. Żywanowski, A. Banaszczyk, M. R. Nowicki, and J. Komorowski, MinkLoc3D-SI: 3D LiDAR place recognition with sparse convolutions, spherical coordinates, and intensity, IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1079–1086, 2022.
- J. Ma, J. Zhang, J. Xu, R. Ai, W. Gu, and X. Chen, OverlapTransformer: An efficient and yaw-angle-invariant transformer network for LiDAR-based place recognition, IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6958–6965, 2022.
- L. Li, X. Kong, X. Zhao, T. Huang, W. Li, F. Wen, H. Zhang, and Y. Liu, RINet: Efficient 3D LiDAR-based place recognition using rotation invariant neural network, IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4321–4328, 2022.
- P. Yin, L. Xu, J. Zhang, H. Choset, and S. Scherer, i3dLoc: Image-to-range cross-domain localization robust to inconsistent environmental conditions, in Proceedings of Robotics: Science and Systems (RSS), 2021.
- S. Zhao, P. Yin, G. Yi, and S. Scherer, SphereVLAD++: Attention-based and signal-enhanced viewpoint invariant descriptor, 2022.
- X. Xu, H. Yin, Z. Chen, Y. Li, Y. Wang, and R. Xiong, DiSCO: Differentiable scan context with orientation, IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2791–2798, 2021.
- S. Saftescu, M. Gadd, D. De Martini, D. Barnes, and P. Newman, Kidnapped radar: Topological radar localisation using rotationally-invariant metric learning, in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 4358–4364.
- K. Cai, B. Wang, and C. X. Lu, AutoPlace: Robust place recognition with single-chip automotive radar, in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 2222–2228.
- C. Meng, Y. Duan, C. He, D. Wang, X. Fan, and Y. Zhang, mmPlace: Robust place recognition with intermediate frequency signal of low-cost single-chip millimeter wave radar, IEEE Robotics and Automation Letters, 2024.
- N. Hughes, Y. Chang, and L. Carlone, Hydra: A real-time spatial perception system for 3D scene graph construction and optimization, arXiv preprint arXiv:2201.13360, 2022.
- M. Gadd, D. De Martini, and P. Newman, Look around you: Sequence-based radar place recognition with learned rotational invariance, in 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), 2020, pp. 270–276.
- T. Y. Tang, D. De Martini, S. Wu, and P. Newman, Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization, The International Journal of Robotics Research, vol. 40, no. 12-14, pp. 1488–1509, 2021.
- M. Gadd, D. De Martini, and P. Newman, Contrastive learning for unsupervised radar place recognition, in 2021 20th International Conference on Advanced Robotics (ICAR), 2021, pp. 344–349.
- E. Stumm, C. Mei, S. Lacroix, J. Nieto, M. Hutter, and R. Siegwart, Robust visual place recognition with graph kernels, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4535–4544.
- N. Kim, O. Kwon, H. Yoo, Y. Choi, J. Park, and S. Oh, Topological semantic graph memory for image-goal navigation, in Conference on Robot Learning. PMLR, 2023, pp. 393–402.
- X. Kong, X. Yang, G. Zhai, X. Zhao, X. Zeng, M. Wang, Y. Liu, W. Li, and F. Wen, Semantic graph based place recognition for 3D point clouds, in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 8216–8223.
- K. Vidanapathirana, P. Moghadam, B. Harwood, M. Zhao, S. Sridharan, and C. Fookes, Locus: LiDAR-based place recognition using spatiotemporal higher-order pooling, in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 5075–5081.
- O. Kwon, J. Park, and S. Oh, Renderable neural radiance map for visual navigation, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 9099–9108.
- A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., Learning transferable visual models from natural language supervision, in International Conference on Machine Learning. PMLR, 2021, pp. 8748–8763.
- C. Kassab, M. Mattamala, L. Zhang, and M. Fallon, Language-extended indoor SLAM (LEXIS): A versatile system for real-time visual scene understanding, arXiv preprint arXiv:2309.15065, 2023.
- J. Chen, D. Barath, I. Armeni, M. Pollefeys, and H. Blum, "Where am I?" Scene retrieval with language, arXiv preprint arXiv:2404.14565, 2024.
- Z. Hong, Y. Petillot, D. Lane, Y. Miao, and S. Wang, TextPlace: Visual place recognition and topological localization through reading scene texts, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2861–2870.
- S. Garg and M. Milford, SeqNet: Learning descriptors for sequence-based hierarchical place recognition, IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 4305–4312, 2021.
- N. Merrill and G. Huang, CALC2.0: Combining appearance, semantic and geometric information for robust and efficient visual loop closure, in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 4554–4561.
- S. Hausler, S. Garg, M. Xu, M. Milford, and T. Fischer, Patch-NetVLAD: Multi-scale fusion of locally-global descriptors for place recognition, in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 14136–14147.
- P. Yin, L. Xu, X. Li, C. Yin, Y. Li, R. A. Srivatsan, L. Li, J. Ji, and Y. He, A multi-domain feature learning method for visual place recognition, in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 319–324.
- P. Yin, I. Cisneros, S. Zhao, J. Zhang, H. Choset, and S. Scherer, iSimLoc: Visual global localization for previously unseen environments with simulated images, IEEE Transactions on Robotics, 2023.
- M. J. Milford and G. F. Wyeth, SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights, in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 1643–1649.
- F. Lu, B. Chen, X.-D. Zhou, and D. Song, STA-VPR: Spatio-temporal alignment for visual place recognition, IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 4297–4304, 2021.
- S. M. Siam and H. Zhang, Fast-SeqSLAM: A fast appearance based place recognition algorithm, in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 5702–5708.
- P. Yin, F. Wang, A. Egorov, J. Hou, J. Zhang, and H. Choset, SeqSphereVLAD: Sequence matching enhanced orientation-invariant place recognition, in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 5024–5029.
- L. Bampis, A. Amanatiadis, and A. Gasteratos, Fast loop-closure detection using visual-word-vectors from image sequences, The International Journal of Robotics Research, vol. 37, no. 1, pp. 62–82, 2018.
- A. Ali-Bey, B. Chaib-Draa, and P. Giguere, MixVPR: Feature mixing for visual place recognition, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 2998–3007.
- P. Yin, S. Zhao, H. Lai, R. Ge, J. Zhang, H. Choset, and S. Scherer, AutoMerge: A framework for map assembling and smoothing in city-scale environments, IEEE Transactions on Robotics, 2023.
- J. Knights, P. Moghadam, M. Ramezani, S. Sridharan, and C. Fookes, InCloud: Incremental learning for point cloud place recognition, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 8559–8566.
- P. Yin, L. Xu, J. Zhang, and H. Choset, FusionVLAD: A multi-view deep fusion networks for viewpoint-free 3D place recognition, IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2304–2310, 2021.
- B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro, Exploring generalization in deep learning, Advances in Neural Information Processing Systems, vol. 30, 2017.
- K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015.
- P. Yin, L. Xu, Z. Feng, A. Egorov, and B. Li, PSE-Match: A viewpoint-free place recognition method with parallel semantic embedding, IEEE Transactions on Intelligent Transportation Systems, pp. 1–12, 2021.
- V. Paolicelli, A. Tavera, C. Masone, G. Berton, and B. Caputo, Learning semantics for visual place recognition through multi-scale attention, in Image Analysis and Processing – ICIAP 2022, S. Sclaroff, C. Distante, M. Leo, G. M. Farinella, and F. Tombari, Eds. Cham: Springer International Publishing, 2022, pp. 454–466.
- H. Lai, P. Yin, and S. Scherer, AdaFusion: Visual-LiDAR fusion with adaptive weights for place recognition, 2021.
- T. Barros, L. Garrote, R. Pereira, C. Premebida, and U. J. Nunes, AttDLNet: Attention-based DL network for 3D LiDAR place recognition, 2021.
- D. Gao, C. Wang, and S. Scherer, AirLoop: Lifelong loop closure detection, in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 10664–10671.
- A.-D. Doan, Y. Latif, T.-J. Chin, and I. Reid, HM4: Hidden Markov model with memory management for visual place recognition, IEEE Robotics and Automation Letters, vol. 6, no. 1, pp. 167–174, 2021.
- P. Yin, A. Abuduweili, S. Zhao, L. Xu, C. Liu, and S. Scherer, BioSLAM: A bioinspired lifelong memory system for general place recognition, IEEE Transactions on Robotics, 2023.
- H. Wang, C. Wang, and L. Xie, Intensity scan context: Coding intensity and geometry relations for loop closure detection, in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 2095–2101.
- M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, MobileNetV2: Inverted residuals and linear bottlenecks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
- B. Ferrarini, M. J. Milford, K. D. McDonald-Maier, and S. Ehsan, Binary neural networks for memory-efficient and effective visual place recognition in changing environments, IEEE Transactions on Robotics, vol. 38, no. 4, pp. 2617–2631, 2022.
- O. Grainge, M. Milford, I. Bodala, S. D. Ramchurn, and S. Ehsan, Design space exploration of low-bit quantized neural networks for visual place recognition, arXiv preprint arXiv:2312.09028, 2023.
- W. Maass, Networks of spiking neurons: The third generation of neural network models, Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
- A. D. Hines, P. G. Stratton, M. Milford, and T. Fischer, VPRTempo: A fast temporally encoded spiking neural network for visual place recognition, arXiv preprint arXiv:2309.10225, 2023.
- S. Hussaini, M. Milford, and T. Fischer, Applications of spiking neural networks in visual place recognition, arXiv preprint arXiv:2311.13186, 2023.
- Y. Liu and H. Zhang, Towards improving the efficiency of sequence-based SLAM, in 2013 IEEE International Conference on Mechatronics and Automation, 2013, pp. 1261–1266.
- P. Hansen and B. Browning, Visual place recognition using HMM sequence matching, in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 4549–4555.
- P. Yin, R. A. Srivatsan, Y. Chen, X. Li, H. Zhang, L. Xu, L. Li, Z. Jia, J. Ji, and Y. He, MRS-VPR: A multi-resolution sampling based global visual place recognition method, in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 7137–7142.
- L. Carlone, G. C. Calafiore, C. Tommolillo, and F. Dellaert, Planar pose graph optimization: Duality, optimal solutions, and verification, IEEE Transactions on Robotics, vol. 32, no. 3, pp. 545–565, 2016.
- X. Hu, L. Zheng, J. Wu, R. Geng, Y. Yu, H. Wei, X. Tang, L. Wang, J. Jiao, and M. Liu, PALoc: Advancing SLAM benchmarking with prior-assisted 6-DoF trajectory generation and uncertainty estimation, arXiv preprint arXiv:2401.17826, 2024.
- Y. Gal and Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, in International Conference on Machine Learning, 2016, pp. 1050–1059.
- B. Lakshminarayanan, A. Pritzel, and C. Blundell, Simple and scalable predictive uncertainty estimation using deep ensembles, Advances in Neural Information Processing Systems, vol. 30, 2017.
- P. Yun and M. Liu, Laplace approximation based epistemic uncertainty estimation in 3D object detection, in Conference on Robot Learning. PMLR, 2023, pp. 1125–1135.
- A. Kendall and Y. Gal, What uncertainties do we need in Bayesian deep learning for computer vision?, Advances in Neural Information Processing Systems, vol. 30, 2017.
- K. Cai, C. X. Lu, and X. Huang, STUN: Self-teaching uncertainty estimation for place recognition, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 6614–6621.
- M. Sensoy, L. Kaplan, and M. Kandemir, Evidential deep learning to quantify classification uncertainty, Advances in Neural Information Processing Systems, vol. 31, 2018.
- K. Mason, J. Knights, M. Ramezani, P. Moghadam, and D. Miller, Uncertainty-aware LiDAR place recognition in novel environments, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023, pp. 3366–3373.
- Y. Shi and A. K. Jain, Probabilistic face embeddings, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6902–6911.
- L. Chen, P. Wu, K. Chitta, B. Jaeger, A. Geiger, and H. Li, End-to-end autonomous driving: Challenges and frontiers, arXiv preprint arXiv:2306.16927, 2023.
- M. Tranzatto, T. Miki, M. Dharmadhikari, L. Bernreiter, M. Kulkarni, F. Mascarich, O. Andersson, S. Khattak, M. Hutter, R. Siegwart et al., CERBERUS in the DARPA Subterranean Challenge, Science Robotics, vol. 7, no. 66, p. eabp9742, 2022.
- C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardos, ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM, IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874–1890, 2021.
- T. Qin, P. Li, and S. Shen, VINS-Mono: A robust and versatile monocular visual-inertial state estimator, IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
- T. Shan, B. Englot, C. Ratti, and D. Rus, LVI-SAM: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping, in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 5692–5698.
- D. Adolfsson, M. Karlsson, V. Kubelka, M. Magnusson, and H. Andreasson, TBV Radar SLAM: Trust but verify loop candidates, IEEE Robotics and Automation Letters, 2023.
- W. Chen, L. Zhu, Y. Guan, C. R. Kube, and H. Zhang, Submap-based pose-graph visual SLAM: A robust visual exploration and localization system, in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 6851–6856.
- M. Kuse and S. Shen, Learning whole-image descriptors for real-time loop detection and kidnap recovery under large viewpoint difference, Robotics and Autonomous Systems, vol. 143, p. 103813, 2021.
- D. DeTone, T. Malisiewicz, and A. Rabinovich, SuperPoint: Self-supervised interest point detection and description, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 224–236.
- P.-E. Sarlin, C. Cadena, R. Siegwart, and M. Dymczyk, From coarse to fine: Robust hierarchical localization at large scale, in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12708–12717.
- L. Liu and H. Li, Lending orientation to neural networks for cross-view geo-localization, in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 5617–5626.
- F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, and S. Shen, Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments, IEEE Transactions on Robotics, vol. 36, no. 5, pp. 1526–1545, 2020.
- M. Mattamala, M. Ramezani, M. Camurri, and M. Fallon, Learning camera performance models for active multi-camera visual teach and repeat, in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 14346–14352.
- Y. Chen and T. D. Barfoot, Self-supervised feature learning for long-term metric visual localization, IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 472–479, 2022.
- L. Suomela, J. Kalliola, A. Dag, H. Edelman, and J.-K. Kämäräinen, PlaceNav: Topological navigation through place recognition, arXiv preprint arXiv:2309.17260, 2023.
- O. Michel, A. Bhattad, E. VanderBilt, R. Krishna, A. Kembhavi, and T. Gupta, Object 3DIT: Language-guided 3D-aware image editing, Advances in Neural Information Processing Systems, vol. 36, 2024.
- A. T. Fragoso, C. T. Lee, A. S. McCoy, and S.-J. Chung, A seasonally invariant deep transform for visual terrain-relative navigation, Science Robotics, vol. 6, no. 55, p. eabf3320, 2021.
- M. Bianchi and T. D. Barfoot, UAV localization using autoencoded satellite images, IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1761–1768, 2021.
- B. Patel, T. D. Barfoot, and A. P. Schoellig, Visual localization with Google Earth images for robust global pose estimation of UAVs, in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 6491–6497.
- P.-E. Sarlin, E. Trulls, M. Pollefeys, J. Hosang, and S. Lynen, SNAP: Self-supervised neural maps for visual positioning and semantic understanding, Advances in Neural Information Processing Systems, vol. 36, 2024.
- T. Y. Tang, D. De Martini, and P. Newman, Get to the point: Learning lidar place recognition and metric localisation using overhead imagery, in Proceedings of Robotics: Science and Systems, 2021.
- Y. Shi, F. Wu, A. Perincherry, A. Vora, and H. Li, Boosting 3-DoF ground-to-satellite camera localization accuracy via geometry-guided cross-view transformer, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 21516–21526.
- A. Witze et al., NASA has launched the most ambitious Mars rover ever built: Here's what happens next, Nature, vol. 584, no. 7819, pp. 15–16, 2020.
- L. Ding, R. Zhou, Y. Yuan, H. Yang, J. Li, T. Yu, C. Liu, J. Wang, S. Li, H. Gao, Z. Deng, N. Li, Z. Wang, Z. Gong, G. Liu, J. Xie, S. Wang, Z. Rong, D. Deng, X. Wang, S. Han, W. Wan, L. Richter, L. Huang, S. Gou, Z. Liu, H. Yu, Y. Jia, B. Chen, Z. Dang, K. Zhang, L. Li, X. He, S. Liu, and K. Di, A 2-year locomotive exploration and scientific investigation of the lunar farside by the Yutu-2 rover, Science Robotics, vol. 7, no. 62, p. eabj6660, 2022.
- I. D. Miller, F. Cladera, T. Smith, C. J. Taylor, and V. Kumar, Stronger together: Air-ground robotic collaboration using semantics, IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 9643–9650, 2022.
- J. Yan, X. Lin, Z. Ren, S. Zhao, J. Yu, C. Cao, P. Yin, J. Zhang, and S. Scherer, MUI-TARE: Multi-agent cooperative exploration with unknown initial position, arXiv preprint arXiv:2209.10775, 2022.
- Y. Tian, Y. Chang, F. H. Arias, C. Nieto-Granda, J. P. How, and L. Carlone, Kimera-Multi: Robust, distributed, dense metric-semantic SLAM for multi-robot systems, IEEE Transactions on Robotics, pp. 1–17, 2022.
- D. Van Opdenbosch and E. Steinbach, Collaborative visual SLAM using compressed feature exchange, IEEE Robotics and Automation Letters, vol. 4, no. 1, pp. 57–64, 2019.
- T. Sasaki, K. Otsu, R. Thakker, S. Haesaert, and A.-a. Agha-mohammadi, Where to map? Iterative rover-copter path planning for Mars exploration, IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2123–2130, 2020.
- K. Ebadi, Y. Chang, M. Palieri, A. Stephens, A. Hatteland, E. Heiden, A. Thakur, N. Funabiki, B. Morrell, S. Wood, L. Carlone, and A.-a. Agha-mohammadi, LAMP: Large-scale autonomous mapping and positioning for exploration of perceptually-degraded subterranean environments, in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 80–86.
- Y. Chang, N. Hughes, A. Ray, and L. Carlone, Hydra-Multi: Collaborative online construction of 3D scene graphs with multi-robot teams, arXiv preprint arXiv:2304.13487, 2023.
- M. Labbe and F. Michaud, RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation, Journal of Field Robotics, vol. 36, no. 2, pp. 416–446, 2019.
- H. Xu, Y. Zhang, B. Zhou, L. Wang, X. Yao, G. Meng, and S. Shen, Omni-Swarm: A decentralized omnidirectional visual-inertial-UWB state estimation system for aerial swarms, IEEE Transactions on Robotics, vol. 38, no. 6, pp. 3374–3394, 2022.
- B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM, vol. 65, no. 1, pp. 99–106, 2021.
- B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, 3D Gaussian splatting for real-time radiance field rendering, ACM Transactions on Graphics, vol. 42, no. 4, pp. 1–14, 2023.
- Tesla, Inc., Autopilot support, 2023, accessed 2023-03-30.
- G. D. Tipaldi, D. Meyer-Delius, and W. Burgard, Lifelong localization in changing environments, The International Journal of Robotics Research, vol. 32, no. 14, pp. 1662–1678, 2013.
- M. Zhao, X. Guo, L. Song, B. Qin, X. Shi, G. H. Lee, and G. Sun, A general framework for lifelong localization and mapping in changing environment, in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 3305–3312.
- C. Chow and C. Liu, Approximating discrete probability distributions with dependence trees, IEEE Transactions on Information Theory, vol. 14, no. 3, pp. 462–467, 1968.
- S. Zhu, X. Zhang, S. Guo, J. Li, and H. Liu, Lifelong localization in semi-dynamic environment, in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 14389–14395.
- K. MacTavish, M. Paton, and T. D. Barfoot, Visual triage: A bag-of-words experience selector for long-term visual route following, in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 2065–2072.
- N. Sünderhauf, P. Neubert, and P. Protzel, Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons, in Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA), 2013.
- A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, Vision meets robotics: The KITTI dataset, The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
- W. Maddern, G. Pascoe, C. Linegar, and P. Newman, 1 year, 1000 km: The Oxford RobotCar dataset, The International Journal of Robotics Research, vol. 36, no. 1, pp. 3–15, 2017.
- F. Warburg, S. Hauberg, M. Lopez-Antequera, P. Gargallo, Y. Kuang, and J. Civera, Mapillary street-level sequences: A dataset for lifelong place recognition, in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2623–2632.
- Y. Liao, J. Xie, and A. Geiger, KITTI-360: A novel dataset and benchmarks for urban scene understanding in 2D and 3D, CoRR, vol. abs/2109.13410, 2021.
- I. Cisneros, P. Yin, J. Zhang, H. Choset, and S. Scherer, ALTO: A large-scale dataset for UAV visual place recognition and localization, arXiv preprint arXiv:2205.30737, 2022.
- P. Yin, S. Zhao, R. Ge, I. Cisneros, R. Fu, J. Zhang, H. Choset, and S. Scherer, ALITA: A large-scale incremental dataset for long-term autonomy, 2022.
- D. Barnes, M. Gadd, P. Murcutt, P. Newman, and I. Posner, The Oxford Radar RobotCar dataset: A radar extension to the Oxford RobotCar dataset, in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 6433–6438.
- K. Somasundaram, J. Dong, H. Tang, J. Straub, M. Yan, M. Goesele, J. J. Engel, R. De Nardi, and R. Newcombe, Project Aria: A new tool for egocentric multi-modal AI research, arXiv preprint arXiv:2308.13561, 2023.
- M. Ramezani, Y. Wang, M. Camurri, D. Wisth, M. Mattamala, and M. Fallon, The Newer College dataset: Handheld LiDAR, inertial and vision with ground truth, in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 4353–4360.
- K. Burnett, D. J. Yoon, Y. Wu, A. Z. Li, H. Zhang, S. Lu, J. Qian, W.-K. Tseng, A. Lambert, K. Y. Leung et al., Boreas: A multi-season autonomous driving dataset, The International Journal of Robotics Research, vol. 42, no. 1-2, pp. 33–42, 2023.
- B. Talbot, S. Garg, and M. Milford, OpenSeqSLAM2.0: An open source toolbox for visual place recognition under changing conditions, in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 7758–7765.
- M. Humenberger, Y. Cabon, N. Guerin, J. Morat, V. Leroy, J. Revaud, P. Rerole, N. Pion, C. de Souza, and G. Csurka, Robust image retrieval-based visual localization using Kapture, 2020.
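The evaluation toolboxes at the end of this list (OpenSeqSLAM2.0, VPR-Bench, Kapture) score retrieval-based place recognition primarily with Recall@N: the fraction of queries whose N nearest database descriptors contain at least one ground-truth match. A minimal sketch of that metric is given below, assuming L2-comparable global descriptors; the function name and array layout are illustrative, not any benchmark's actual API:

```python
import numpy as np

def recall_at_n(query_desc, db_desc, gt_matches, n=1):
    """Recall@N for descriptor-based place recognition (illustrative).

    query_desc: (Q, D) array of query descriptors
    db_desc:    (M, D) array of database descriptors
    gt_matches: length-Q list of sets of correct database indices
    """
    # Pairwise Euclidean distances between every query and database entry.
    dists = np.linalg.norm(query_desc[:, None, :] - db_desc[None, :, :], axis=-1)
    # Indices of the n nearest database entries per query.
    topn = np.argsort(dists, axis=1)[:, :n]
    # A query counts as a hit if any of its top-n indices is a true match.
    hits = sum(bool(set(row) & gt) for row, gt in zip(topn, gt_matches))
    return hits / len(gt_matches)

# Toy example: 3 queries against a 10-entry database of 8-D descriptors.
rng = np.random.default_rng(0)
queries, database = rng.normal(size=(3, 8)), rng.normal(size=(10, 8))
print(recall_at_n(queries, database, [{0}, {4}, {7}], n=5))
```

Published results typically sweep N (e.g., Recall@1/5/10) and, for sequence or LiDAR methods, additionally report precision-recall curves over a distance threshold that defines a true match.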