[1] LYU M, ZHAO Y, HUANG C, et al. Unmanned aerial vehicles for search and rescue: a survey[J]. Remote Sensing, 2023, 15(13): 3266.
[2] 冷佳旭, 莫梦竟成, 周应华, 等. 无人机视角下的目标检测研究进展[J]. 中国图象图形学报, 2024, 28(9): 2563-2586.
LENG J X, MO M J C, ZHOU Y H, et al. Recent advances in drone-view object detection[J]. Journal of Image and Graphics, 2024, 28(9): 2563-2586. (in Chinese)
[3] CANDIAGO S, REMONDINO F, DE GIGLIO M, et al. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images[J]. Remote Sensing, 2015, 7(4): 4026-4047.
[4] MENOUAR H, GUVENC I, AKKAYA K, et al. UAV-enabled intelligent transportation systems for the smart city: applications and challenges[J]. IEEE Communications Magazine, 2017, 55(3): 22-28.
[5] GYAGENDA N, HATILIMA J V, ROTH H, et al. A review of GNSS-independent UAV navigation techniques[J]. Robotics and Autonomous Systems, 2022, 152: 104069.
[6] 吴成一. GNSS拒止条件下的无人机视觉导航研究[D]. 西安电子科技大学, 2021.
WU C Y. GNSS-denied UAV visual navigation research[D]. Xidian University, 2021. (in Chinese)
[7] ARAFAT M Y, ALAM M M, MOH S. Vision-based navigation techniques for unmanned aerial vehicles: review and challenges[J]. Drones, 2023, 7(2): 89.
[8] GUPTA A, FERNANDO X. Simultaneous localization and mapping (SLAM) and data fusion in unmanned aerial vehicles: recent advances and challenges[J]. Drones, 2022, 6(4): 85.
[9] 袁媛, 孙柏, 刘赶超. 景象匹配无人机视觉定位[J]. 自动化学报, 2025, 51(2): 287-311.
YUAN Y, SUN B, LIU G C. Drone-based scene matching visual geo-localization[J]. Acta Automatica Sinica, 2025, 51(2): 287-311. (in Chinese)
[10] VAN DALEN G J, MAGREE D P, JOHNSON E N. Absolute localization using image alignment and particle filtering[C]//AIAA Guidance, Navigation, and Control Conference. 2016: 0647.
[11] MANTELLI M, PITTOL D, NEULAND R, et al. A novel measurement model based on abBRIEF for global localization of a UAV over satellite images[J]. Robotics and Autonomous Systems, 2019, 112: 304-319.
[12] COUTURIER A, AKHLOUFI M A. UAV navigation in GPS-denied environment using particle filtered RVL[C]//Situation Awareness in Degraded Environments 2019: Vol. 11019. SPIE, 2019: 188-198.
[13] MOSKALENKO I, KORNILOVA A, FERRER G. Visual place recognition for aerial imagery: a survey[J]. Robotics and Autonomous Systems, 2025, 183: 104837.
[14] XU W, YAO Y, CAO J, et al. UAV-VisLoc: a large-scale dataset for UAV visual localization[DB/OL]. arXiv preprint: 2405.11936, 2024.
[15] CISNEROS I, YIN P, ZHANG J, et al. ALTO: a large-scale dataset for UAV visual place recognition and localization[DB/OL]. arXiv preprint: 2207.12317, 2022.
[16] ZHU S, YANG T, CHEN C. VIGOR: cross-view image geo-localization beyond one-to-one retrieval[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 3640-3649.
[17] ZHENG Z, WEI Y, YANG Y. University-1652: a multi-view multi-source benchmark for drone-based geo-localization[C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 1395-1403.
[18] BERMAN M, JÉGOU H, VEDALDI A, et al. MultiGrain: a unified image embedding for classes and instances[DB/OL]. arXiv preprint: 1902.05509, 2019.
[19] ARANDJELOVIC R, GRONAT P, TORII A, et al. NetVLAD: CNN architecture for weakly supervised place recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 5297-5307.
[20] BERTON G, MASONE C, CAPUTO B. Rethinking visual geo-localization for large-scale applications[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 4878-4888.
[21] BERTON G, TRIVIGNO G, CAPUTO B, et al. EigenPlaces: training viewpoint robust models for visual place recognition[C]//2023 IEEE/CVF International Conference on Computer Vision (ICCV). 2023: 11046-11056.
[22] ALI-BEY A, CHAIB-DRAA B, GIGUERE P. MixVPR: feature mixing for visual place recognition[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 2998-3007.
[23] LU F, ZHANG L, LAN X, et al. Towards seamless adaptation of pre-trained models for visual place recognition[C]//The Twelfth International Conference on Learning Representations. 2023.
[24] OQUAB M, DARCET T, MOUTAKANNI T, et al. DINOv2: learning robust visual features without supervision[DB/OL]. arXiv preprint: 2304.07193, 2024.
[25] LIU X, ZHANG C, ZHANG L. Vision Mamba: a comprehensive survey and taxonomy[DB/OL]. arXiv preprint: 2405.04404, 2024.
[26] TOLSTIKHIN I, HOULSBY N, KOLESNIKOV A, et al. MLP-Mixer: an all-MLP architecture for vision[C]//Proceedings of the 35th International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2021: 24261-24272.
[27] KEETHA N, MISHRA A, KARHADE J, et al. AnyLoc: towards universal visual place recognition[J]. IEEE Robotics and Automation Letters, 2023.
[28] CHEN S, GE C, TONG Z, et al. AdaptFormer: adapting vision transformers for scalable visual recognition[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2022: 16664-16678.
[29] HERMANS A, BEYER L, LEIBE B. In defense of the triplet loss for person re-identification[DB/OL]. arXiv preprint: 1703.07737, 2017.
[30] LIU Z, MAO H, WU C Y, et al. A ConvNet for the 2020s[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022: 11966-11976.
[31] SHI Y, LIU L, YU X, et al. Spatial-aware feature aggregation for cross-view image based geo-localization[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2019: 10090-10100.
[32] ZHU S, SHAH M, CHEN C. TransGeo: Transformer is all you need for cross-view image geo-localization[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022: 1152-1161.
[33] DEUSER F, HABEL K, OSWALD N. Sample4Geo: hard negative sampling for cross-view geo-localisation[C]//2023 IEEE/CVF International Conference on Computer Vision (ICCV). 2023: 16801-16810.
[34] WANG Z, SHI D, QIU C, et al. Sequence matching for image-based UAV-to-satellite geolocalization[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 1-15.