参考文献
[1] 吴付杰,王博文,齐静雅,等. 机载多孔径全景图像合成技术研究进展[J]. 航空学报, 2024: 1-24.
F J WU, B W WANG, J Y QI, et al. A Review of Airborne Multi-aperture Panoramic Image Compositing[J]. Acta Aeronautica et Astronautica Sinica, 2024: 1-24 (in Chinese).
[2] 张洲宇,曹云峰,范彦铭. 低空小型无人机空域冲突视觉感知技术研究进展[J]. 航空学报, 2022, 43(08): 197-220.
Z Y ZHANG, Y F CAO, Y M FAN. Research progress of vision based airspace conflict sensing technologies for small unmanned aerial vehicle in low altitude[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(08): 197-220 (in Chinese).
[3] 唐霖峰,张浩,徐涵,等. 基于深度学习的图像融合方法综述[J]. 中国图象图形学报, 2023, 28(01): 3-36.
L F TANG, H ZHANG, H XU, et al. Deep learning-based image fusion: a survey[J]. Journal of Image and Graphics, 2023, 28(01): 3-36 (in Chinese).
[4] ZHANG H, XU H, TIAN X, et al. Image fusion meets deep learning: A survey and perspective[J]. Information Fusion, 2021, 76: 323-336.
[5] KARIM S, TONG G, LI J, et al. Current advances and future perspectives of image fusion: A comprehensive review[J]. Information Fusion, 2023, 90: 185-217.
[6] MA J, MA Y, LI C. Infrared and visible image fusion methods and applications: A survey[J]. Information Fusion, 2019, 45: 153-178.
[7] ZHANG X, DEMIRIS Y. Visible and Infrared Image Fusion Using Deep Learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 10535-10554.
[8] 李霖,王红梅,李辰凯. 红外与可见光图像深度学习融合方法综述[J]. 红外与激光工程, 2022, 51(12): 337-356.
L LI, H M WANG, C K LI. A review of deep learning fusion methods for infrared and visible images[J]. Infrared and Laser Engineering, 2022, 51(12): 337-356 (in Chinese).
[9] ZHANG P, ZHAO J, WANG D, et al. Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2022: 8876-8885.
[10] SUN Y, CAO B, ZHU P, et al. Drone-Based RGB-Infrared Cross-Modality Vehicle Detection Via Uncertainty-Aware Learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6700-6713.
[11] SPETH S, GONÇALVES A, RIGAULT B, et al. Deep learning with RGB and thermal images onboard a drone for monitoring operations[J]. Journal of Field Robotics, 2022, 39(6): 840-868.
[12] BROYLES D, HAYNER C R, LEUNG K. WiSARD: A Labeled Visual and Thermal Image Dataset for Wilderness Search and Rescue[C]//2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE Press, 2022: 9467-9474.
[13] JAMES G, EDWARD O. RGB-LWIR Labeled Dataset for Air-based platforms[Z].
[14] UAV Cross-Modal Image Registration: Large-Scale Dataset and Transformer-Based Approach[C]//International Conference on Brain Inspired Cognitive Systems. Singapore: Springer Nature Singapore, 2023: 166-176.
[15] FENG H, ZHANG L, ZHANG S, et al. RTDOD: A large-scale RGB-thermal domain-incremental object detection dataset for UAVs[J]. Image and Vision Computing, 2023, 140: 104856.
[16] ZHU Y, WANG Q, LI C, et al. Visible-Thermal Multiple Object Tracking: Large-scale Video Dataset and Progressive Fusion Approach[Z]. 2024.
[17] MEI L, HU X, YE Z, et al. GTMFuse: Group-attention transformer-driven multiscale dense feature-enhanced network for infrared and visible image fusion[J]. Knowledge-Based Systems, 2024, 293: 111658.
[18] 肖云,曹丹,李成龙,等. 基于高空无人机平台的多模态跟踪数据集[J]. 中国图象图形学报, 2024.
Y XIAO, D CAO, C L LI, et al. A benchmark dataset for high altitude UAV multimodal tracking[J]. Journal of Image and Graphics, 2024 (in Chinese).
[19] 樊一江,李卫波. 我国低空经济阶段特征及应用场景研究[J]. 中国物价, 2024(04): 98-103.
Y J FAN, W B LI. The development stage and application scenarios of China's low-altitude economy[J]. China Price Journal, 2024(04): 98-103 (in Chinese).
[20] SHAMSOSHOARA A, AFGHAH F, RAZI A, et al. Aerial imagery pile burn detection using deep learning: The FLAME dataset[J]. Computer Networks, 2021, 193: 108001.
[21] Y L, C Z, X L, et al. Forest Fire Monitoring Method Based on UAV Visual and Infrared Image Fusion[J]. Remote Sensing, 2023, 15(12): 3173.
[22] R L, Z W, H S, et al. Automatic Identification of Earth Rock Embankment Piping Hazards in Small and Medium Rivers Based on UAV Thermal Infrared and Visible Images[J]. Remote Sensing, 2023, 15(18): 4492.
[23] RUI X, LI Z, ZHANG X, et al. A RGB-Thermal based adaptive modality learning network for day–night wildfire identification[J]. International Journal of Applied Earth Observation and Geoinformation, 2023, 125: 103554.
[24] KULARATNE W, et al. Fireman-uav-rgbt: 1.0.0-beta[Z].
[25] XU C, LI Q, SHEN Y, et al. Edge-guided two-stage feature matching for infrared and visible image registration in electric power scenes[J]. Infrared Physics & Technology, 2024, 136: 104999.
[26] YETGIN Ö E, GEREK Ö N. Powerline Image Dataset (Infrared-IR and Visible Light-VL)[Z].
[27] Y H, M C, R V, et al. An Approach to Semantically Segmenting Building Components and Outdoor Scenes Based on Multichannel Aerial Imagery Datasets[J]. Remote Sensing, 2021, 13(21): 4357.
[28] KAHN J, MAYER Z, HOU Y, et al. Hyperspectral (RGB + Thermal) drone images of Karlsruhe, Germany - Raw images for the Thermal Bridges on Building Rooftops (TBBR) dataset (v1.1)[Z].
[29] P N, G R, S P, et al. Texture Analysis to Enhance Drone-Based Multi-Modal Inspection of Structures[J]. Drones, 2022, 6(12): 407.
[30] C. X, Q. L, X. J, et al. Dual-Space Graph-Based Interaction Network for RGB-Thermal Semantic Segmentation in Electric Power Scene[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(4): 1577-1592.
[31] X. X, G. L, D. P B, et al. Fast Detection Fusion Network (FDFnet): An End to End Object Detection Framework Based on Heterogeneous Image Fusion for Power Facility Inspection[J]. IEEE Transactions on Power Delivery, 2022, 37(6): 4496-4505.
[32] H. C, J. P Y, B. J K, et al. Attention-Based Multimodal Image Feature Fusion Module for Transmission Line Detection[J]. IEEE Transactions on Industrial Informatics, 2022, 18(11): 7686-7695.
[33] ZHANG C, ZOU Y, DIMYADI J, et al. Thermal-textured BIM generation for building energy audit with UAV image fusion and histogram-based enhancement[J]. Energy and Buildings, 2023, 301: 113710.
[34] WANG P, XIAO J, QIANG X, et al. An automatic building façade deterioration detection system using infrared-visible image fusion and deep learning[J]. Journal of Building Engineering, 2024, 95: 110122.
[35] ZHAO Z, BAI H, ZHU Y, et al. DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion[C]//2023 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2023: 8048-8059.
[36] LI H, WU X J. DenseFuse: A Fusion Approach to Infrared and Visible Images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[37] LI H, WU X J, DURRANI T. NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.
[38] LI H, WU X, KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86.
[39] LIN T, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common Objects in Context[C]//Proceedings of the 13th European Conference on Computer Vision. Cham: Springer, 2014: 740-755.
[40] XU H, ZHANG H, MA J. Classification Saliency-Based Rule for Visible and Infrared Image Fusion[J]. IEEE Transactions on Computational Imaging, 2021, 7: 824-836.
[41] JIAN L, YANG X, LIU Z, et al. SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-15.
[42] LIU J, FAN X, JIANG J, et al. Learning a Deep Multi-Scale Feature Ensemble and an Edge-Attention Guidance for Image Fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105-119.
[43] ZHAO Z, XU S, ZHANG J, et al. Efficient and Model-Based Infrared and Visible Image Fusion via Algorithm Unrolling[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(3): 1186-1196.
[44] ZHAO Z, XU S, ZHANG C, et al. DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. Freiburg: IJCAI, 2020: 970-976.
[45] ZHAO Z, BAI H, ZHANG J, et al. CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2023: 5906-5916.
[46] XU H, GONG M, TIAN X, et al. CUFD: An encoder–decoder network for visible and infrared image fusion based on common and unique feature decomposition[J]. Computer Vision and Image Understanding, 2022, 218: 103407.
[47] 徐涵,梅晓光,樊凡,等. 信息分离和质量引导的红外与可见光图像融合[J]. 中国图象图形学报, 2022, 27(11): 3316-3330.
H XU, X G MEI, F FAN, et al. Information decomposition and quality guided infrared and visible image fusion[J]. Journal of Image and Graphics, 2022, 27(11): 3316-3330 (in Chinese).
[48] XU H, WANG X, MA J. DRF: Disentangled Representation for Visible and Infrared Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-13.
[49] LUO X, GAO Y, WANG A, et al. IFSepR: A General Framework for Image Fusion Based on Separate Representation Learning[J]. IEEE Transactions on Multimedia, 2023, 25: 608-623.
[50] WANG Z, WANG J, WU Y, et al. UNFusion: A Unified Multi-Scale Densely Connected Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3360-3374.
[51] XU H, MA J, JIANG J, et al. U2Fusion: A Unified Unsupervised Image Fusion Network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.
[52] LONG Y, JIA H, ZHONG Y, et al. RXDNFuse: A aggregated residual dense network for infrared and visible image fusion[J]. Information Fusion, 2021, 69: 128-141.
[53] HOU R, ZHOU D, NIE R, et al. VIF-Net: An Unsupervised Framework for Infrared and Visible Image Fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-651.
[54] ZHAO F, ZHAO W, YAO L, et al. Self-supervised feature adaption for infrared and visible image fusion[J]. Information Fusion, 2021, 76: 189-203.
[55] LIANG P, JIANG J, LIU X. Fusion from decomposition: A self-supervised decomposition approach for image fusion[C]//European Conference on Computer Vision. Cham: Springer, 2022: 719-735.
[56] LI J, NIE R, CAO J, et al. LRFE-CL: A self-supervised fusion network for infrared and visible image via low redundancy feature extraction and contrastive learning[J]. Expert Systems with Applications, 2024, 251: 124125.
[57] WANG X, GUAN Z, QIAN W, et al. CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map[J]. Information Fusion, 2024, 102: 102039.
[58] MA J, TANG L, XU M, et al. STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-13.
[59] D. Z, W. Z, Y. J, et al. IPLF: A Novel Image Pair Learning Fusion Network for Infrared and Visible Image[J]. IEEE Sensors Journal, 2022, 22(9): 8808-8817.
[60] X. W, Z. G, S. Y, et al. Infrared and Visible Image Fusion via Decoupling Network[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-13.
[61] TANG L, YUAN J, MA J. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network[J]. Information Fusion, 2022, 82: 28-42.
[62] SUN Y, CAO B, ZHU P. DetFusion: A Detection-driven Infrared and Visible Image Fusion Network[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York: ACM, 2022: 4003-4011.
[63] TANG L, ZHANG H, XU H, et al. Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity[J]. Information Fusion, 2023, 99: 101870.
[64] LIU X, HUO H, LI J, et al. A semantic-driven coupled network for infrared and visible image fusion[J]. Information Fusion, 2024, 108: 102352.
[65] ZHAO W, XIE S, ZHAO F, et al. MetaFusion: Infrared and Visible Image Fusion via Meta-Feature Embedding from Object Detection[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2023: 13955-13965.
[66] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[Z]. 2020.
[67] GUO J, HAN K, WU H, et al. CMT: Convolutional Neural Networks Meet Vision Transformers[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2022: 12165-12175.
[68] YUAN K, GUO S, LIU Z, et al. Incorporating Convolution Designs into Visual Transformers[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2021: 559-568.
[69] A K, Z R, A S, et al. A survey of the vision transformers and their CNN-transformer based variants[J]. Artificial Intelligence Review, 2023, 56(Suppl 3): 2917-2970.
[70] Z. C, Z. F, S. Y, et al. AFT: Adaptive Fusion Transformer for Visible and Infrared Images[J]. IEEE Transactions on Image Processing, 2023, 32: 2077-2092.
[71] WANG Z, CHEN Y, SHAO W, et al. SwinFuse: A Residual Swin Transformer Fusion Network for Infrared and Visible Images[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12.
[72] J. L, B. Y, L. B, et al. TFIV: Multigrained Token Fusion for Infrared and Visible Image via Transformer[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 1-14.
[73] MA J, TANG L, FAN F, et al. SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 9(7): 1200-1217.
[74] CHEN J, DING J, YU Y, et al. THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor[J]. Neurocomputing, 2023, 527: 71-82.
[75] LI B, LU J, LIU Z, et al. SBIT-Fuse: Infrared and visible image fusion based on Symmetrical Bilateral interaction and Transformer[J]. Infrared Physics & Technology, 2024, 138: 105269.
[76] S. P, A. G V, C. L. Cross-Modal Transformers for Infrared and Visible Image Fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(2): 770-785.
[77] TANG W, HE F, LIU Y. TCCFusion: An infrared and visible image fusion method based on transformer and cross correlation[J]. Pattern Recognition, 2023, 137: 109295.
[78] H. J, Y. Y, H. S, et al. Low Light RGB and IR Image Fusion with Selective CNN-Transformer Network[C]//2023 IEEE International Conference on Image Processing (ICIP). Piscataway: IEEE Press, 2023: 1255-1259.
[79] Y. Z, Q. Z, P. Z, et al. TUFusion: A Transformer-Based Universal Fusion Algorithm for Multimodal Images[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(3): 1712-1725.
[80] J. L, J. Z, C. L, et al. CGTF: Convolution-Guided Transformer for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-14.
[81] TANG W, HE F, LIU Y, et al. DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(7): 3159-3172.
[82] YANG X, HUO H, WANG R, et al. DGLT-Fusion: A decoupled global–local infrared and visible image fusion transformer[J]. Infrared Physics & Technology, 2023, 128: 104522.
[83] WANG Z, YANG F, SUN J, et al. AITFuse: Infrared and visible image fusion via adaptive interactive transformer learning[J]. Knowledge-Based Systems, 2024, 299: 111949.
[84] MA J, YU W, LIANG P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[85] FU Y, WU X, DURRANI T. Image fusion based on generative adversarial network consistent with perception[J]. Information Fusion, 2021, 72: 110-125.
[86] MA J, ZHANG H, SHAO Z, et al. GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.
[87] RAO Y, WU D, HAN M, et al. AT-GAN: A generative adversarial network with attention and transition for infrared and visible image fusion[J]. Information Fusion, 2023, 92: 336-349.
[88] YANG Y, LIU J, HUANG S, et al. Infrared and Visible Image Fusion via Texture Conditional Generative Adversarial Network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(12): 4771-4783.
[89] CHANG L, HUANG Y, LI Q, et al. DUGAN: Infrared and visible image fusion based on dual fusion paths and a U-type discriminator[J]. Neurocomputing, 2024, 578: 127391.
[90] MA J, XU H, JIANG J, et al. DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.
[91] H. Z, W. W, Y. Z, et al. Semantic-Supervised Infrared and Visible Image Fusion Via a Dual-Discriminator Generative Adversarial Network[J]. IEEE Transactions on Multimedia, 2023, 25: 635-648.
[92] ZHANG H, YUAN J, TIAN X, et al. GAN-FM: Infrared and Visible Image Fusion Using GAN With Full-Scale Skip Connection and Dual Markovian Discriminators[J]. IEEE Transactions on Computational Imaging, 2021, 7: 1134-1147.
[93] LI J, HUO H, LI C, et al. AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks[J]. IEEE Transactions on Multimedia, 2021, 23: 1383-1396.
[94] SONG A, DUAN H, PEI H, et al. Triple-discriminator generative adversarial network for infrared and visible image fusion[J]. Neurocomputing, 2022, 483: 183-194.
[95] MA J, LIANG P, YU W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 2020, 54: 85-98.
[96] KARIM S, TONG G, LI J, et al. Current advances and future perspectives of image fusion: A comprehensive review[J]. Information Fusion, 2023, 90: 185-217.
[97] REN L, PAN Z, CAO J, et al. Infrared and visible image fusion based on variational auto-encoder and infrared feature compensation[J]. Infrared Physics & Technology, 2021, 117: 103839.
[98] DUFFHAUSS F, et al. FusionVAE: A Deep Hierarchical Variational Autoencoder for RGB Image Fusion[C]//European Conference on Computer Vision. Cham: Springer, 2022: 674-691.
[99] 闫志浩,周长兵,李小翠. 生成扩散模型研究综述[J]. 计算机科学, 2024, 51(1): 273-283.
Z H YAN, C B ZHOU, X C LI. Survey of generative diffusion models[J]. Computer Science, 2024, 51(1): 273-283 (in Chinese).
[100] CAO H, TAN C, GAO Z, et al. A Survey on Generative Diffusion Models[J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(7): 2814-2830.
[101] CROITORU F, HONDRU V, IONESCU R T, et al. Diffusion Models in Vision: A Survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(9): 10850-10869.
[102] YANG L, ZHANG Z, SONG Y, et al. Diffusion Models: A Comprehensive Survey of Methods and Applications[J]. ACM Computing Surveys, 2024, 56(4): 1-39.
[103] YI X, TANG L, ZHANG H, et al. Diff-IF: Multi-modality image fusion via diffusion model with fusion knowledge prior[J]. Information Fusion, 2024, 110: 102450.
[104] LI M, PEI R, ZHENG T, et al. FusionDiff: Multi-focus image fusion using denoising diffusion probabilistic models[J]. Expert Systems with Applications, 2024, 238: 121664.
[105] YUE J, FANG L, XIA S, et al. Dif-Fusion: Toward High Color Fidelity in Infrared and Visible Image Fusion With Diffusion Models[J]. IEEE Transactions on Image Processing, 2023, 32: 5705-5720.
[106] TANG L, DENG Y, YI X, et al. DRMF: Degradation-Robust Multi-Modal Image Fusion via Composable Diffusion Prior[C]//ACM Multimedia. 2024.
[107] YANG B, JIANG Z, PAN D, et al. LFDT-Fusion: A latent feature-guided diffusion Transformer model for general image fusion[J]. Information Fusion, 2025, 113: 102639.
[108] 孙彬,高云翔,诸葛吴为,等. 可见光与红外图像融合质量评价指标分析[J]. 中国图象图形学报, 2023, 28(01): 144-155.
B SUN, Y X GAO, W W ZHUGE, et al. Analysis of quality objective assessment metrics for visible and infrared image fusion[J]. Journal of Image and Graphics, 2023, 28(01): 144-155 (in Chinese).
[109] HAN Y, CAI Y, CAO Y, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013, 14(2): 127-135.