Reviews

Dual-band payload image fusion and its applications in low-altitude remote sensing

  • Bin SUN,
  • Hang YOU,
  • Wenbo LI,
  • Xiangrui LIU,
  • Jiayi MA
  • 1. School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China
    2. National Laboratory on Adaptive Optics, Chengdu 610209, China
    3. Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
    4. Electronic Information School, Wuhan University, Wuhan 430072, China

Received date: 2024-10-08

Revised date: 2024-11-20

Accepted date: 2024-12-06

Online published: 2024-12-23

Supported by

National Natural Science Foundation of China (U23B2050); Sichuan Science and Technology Program (2022YFG0050); Fundamental Research Funds for the Central Universities (ZYGX2020ZB032)

Abstract

Dual-band payload image fusion has broad application prospects in low-altitude remote sensing fields such as target monitoring, disaster warning, and professional inspection. Firstly, dual-band image datasets acquired by task payloads are summarized to provide data support for related research and applications. Secondly, tracking the latest technologies in deep learning, a systematic review of deep learning-based dual-band payload image fusion methods is conducted: these methods are categorized into generative and discriminative approaches, and representative algorithms along with their characteristics are detailed. Thirdly, an in-depth experimental comparison of the different types of image fusion methods is performed on multiple datasets, evaluating their performance from three aspects: qualitative analysis, fusion quality, and operational efficiency. Finally, the challenges facing the application of image fusion technology in low-altitude remote sensing are discussed, offering valuable insights for research in related fields.
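
As a hedged illustration of the "fusion quality" aspect mentioned above (this is not evaluation code from the paper), the Python sketch below computes two metrics widely used to score infrared-visible fusion results: the entropy (EN) of the fused image and the mutual information (MI) it shares with each source band. All function and variable names are illustrative assumptions.

    # Minimal sketch of two common fusion quality metrics: EN and MI.
    # Assumes 8-bit-range grayscale inputs; names are illustrative only.
    import numpy as np

    def entropy(img: np.ndarray, bins: int = 256) -> float:
        """Shannon entropy (EN) of a grayscale image in [0, 256)."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins to avoid log(0)
        return float(-np.sum(p * np.log2(p)))

    def mutual_information(src: np.ndarray, fused: np.ndarray,
                           bins: int = 256) -> float:
        """MI between one source band and the fused image."""
        joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(),
                                     bins=bins, range=[[0, 256], [0, 256]])
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of src
        py = pxy.sum(axis=0, keepdims=True)   # marginal of fused
        mask = pxy > 0
        return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

    # Usage: score a stand-in fused image against its two source bands.
    rng = np.random.default_rng(0)
    ir = rng.integers(0, 256, (256, 256)).astype(np.float64)
    vis = rng.integers(0, 256, (256, 256)).astype(np.float64)
    fused = 0.5 * ir + 0.5 * vis  # placeholder for any fusion method's output
    print(f"EN(fused) = {entropy(fused):.3f}")
    print(f"MI total = "
          f"{mutual_information(ir, fused) + mutual_information(vis, fused):.3f}")

Higher EN suggests the fused image retains more information overall, while a higher summed MI suggests more information was transferred from the two source bands; operational efficiency, the third aspect, is typically reported separately as per-image runtime.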

Cite this article

Bin SUN, Hang YOU, Wenbo LI, Xiangrui LIU, Jiayi MA. Dual-band payload image fusion and its applications in low-altitude remote sensing[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2025, 46(11): 531343-531343. DOI: 10.7527/S1000-6893.2024.31343
