ACTA AERONAUTICA ET ASTRONAUTICA SINICA
Dual-band payload image fusion and its applications in low-altitude remote sensing
Received date: 2024-10-08
Revised date: 2024-11-20
Accepted date: 2024-12-06
Online published: 2024-12-23
Supported by: National Natural Science Foundation of China (U23B2050); Sichuan Science and Technology Program (2022YFG0050); Fundamental Research Funds for the Central Universities (ZYGX2020ZB032)
Dual-band payload image fusion has broad application prospects in low-altitude remote sensing fields such as target monitoring, disaster warning, and professional inspection. First, dual-band image datasets collected by task payloads are summarized to provide data support for related research and applications. Second, tracking the latest advances in deep learning, a systematic review of deep learning-based dual-band payload image fusion methods is conducted: the methods are categorized into generative and discriminative approaches, and representative algorithms and their characteristics are detailed. Third, an in-depth experimental comparison of the different types of fusion methods is performed on various datasets, evaluating their performance from three aspects: qualitative analysis, fusion quality, and operational efficiency. Finally, the challenges facing image fusion technology in low-altitude remote sensing applications are discussed, offering valuable insights for research in related fields.
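As a minimal illustration of what the "fusion quality" aspect of the evaluation measures, the sketch below fuses a co-registered infrared/visible pair with a naive pixel-wise weighted average and scores the result with Shannon entropy, a common no-reference indicator of information content in fused images. This is a toy baseline for illustration only, not a method from the surveyed literature; the function names and the alpha weight are assumptions.

```python
import numpy as np

def weighted_average_fusion(ir, vis, alpha=0.5):
    """Naive pixel-wise weighted-average fusion of co-registered
    infrared (ir) and visible (vis) grayscale images in [0, 1].
    A toy baseline, not one of the surveyed deep methods."""
    assert ir.shape == vis.shape, "images must be co-registered"
    return alpha * ir + (1.0 - alpha) * vis

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image in [0, 1] -- a common
    no-reference fusion-quality indicator (higher = more information)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

# Toy co-registered pair standing in for a dual-band payload frame
rng = np.random.default_rng(0)
ir = rng.random((64, 64))
vis = rng.random((64, 64))
fused = weighted_average_fusion(ir, vis)
print(fused.shape, round(entropy(fused), 3))
```

Deep fusion networks replace the fixed weighted average with learned feature extraction and reconstruction, but are typically scored with the same family of no-reference metrics (entropy, mutual information, visual information fidelity as in [108]).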
Bin SUN, Hang YOU, Wenbo LI, Xiangrui LIU, Jiayi MA. Dual-band payload image fusion and its applications in low-altitude remote sensing[J]. Acta Aeronautica et Astronautica Sinica, 2025, 46(11): 531343. DOI: 10.7527/S1000-6893.2024.31343
[1] WU F J, WANG B W, QI J Y, et al. A review of airborne multi-aperture panoramic image compositing[J]. Acta Aeronautica et Astronautica Sinica, 2025, 46(3): 630505 (in Chinese).
[2] ZHANG Z Y, CAO Y F, FAN Y M. Research progress of vision based aerospace conflict sensing technologies for small unmanned aerial vehicle in low altitude[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(8): 025645 (in Chinese).
[3] TANG L F, ZHANG H, XU H, et al. Deep learning-based image fusion: A survey[J]. Journal of Image and Graphics, 2023, 28(1): 3-36 (in Chinese).
[4] ZHANG H, XU H, TIAN X, et al. Image fusion meets deep learning: A survey and perspective[J]. Information Fusion, 2021, 76: 323-336.
[5] KARIM S, TONG G, LI J Y, et al. Current advances and future perspectives of image fusion: A comprehensive review[J]. Information Fusion, 2023, 90: 185-217.
[6] MA J Y, MA Y, LI C. Infrared and visible image fusion methods and applications: A survey[J]. Information Fusion, 2019, 45: 153-178.
[7] ZHANG X C, DEMIRIS Y. Visible and infrared image fusion using deep learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 10535-10554.
[8] LI L, WANG H M, LI C K. A review of deep learning fusion methods for infrared and visible images[J]. Infrared and Laser Engineering, 2022, 51(12): 337-356 (in Chinese).
[9] ZHANG P Y, ZHAO J, WANG D, et al. Visible-thermal UAV tracking: A large-scale benchmark and new baseline[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2022: 8876-8885.
[10] SUN Y M, CAO B, ZHU P F, et al. Drone-based RGB-infrared cross-modality vehicle detection via uncertainty-aware learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6700-6713.
[11] SPETH S, GONÇALVES A, RIGAULT B, et al. Deep learning with RGB and thermal images onboard a drone for monitoring operations[J]. Journal of Field Robotics, 2022, 39(6): 840-868.
[12] BROYLES D, HAYNER C R, LEUNG K. WiSARD: A labeled visual and thermal image dataset for wilderness search and rescue[C]∥2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE Press, 2022: 9467-9474.
[13] JAMES G, EDWARD O. RGB-LWIR labeled dataset for air-based platforms[DB/OL]. (2022-12-20)[2024-09-29].
[14] XIAO Y, LIU F, ZHU Y B, et al. UAV cross-modal image registration: Large-scale dataset and transformer-based approach[C]∥Advances in Brain Inspired Cognitive Systems. Singapore: Springer Nature, 2023: 166-176.
[15] FENG H T, ZHANG L, ZHANG S Q, et al. RTDOD: A large-scale RGB-thermal domain-incremental object detection dataset for UAVs[J]. Image and Vision Computing, 2023, 140: 104856.
[16] ZHU Y B, WANG Q W, LI C L, et al. Visible-thermal multiple object tracking: Large-scale video dataset and progressive fusion approach[J]. Pattern Recognition, 2024, 161: 111330.
[17] MEI L Y, HU X L, YE Z Y, et al. GTMFuse: Group-attention transformer-driven multiscale dense feature-enhanced network for infrared and visible image fusion[J]. Knowledge-Based Systems, 2024, 293: 111658.
[18] XIAO Y, CAO D, LI C L, et al. A benchmark dataset for high-altitude UAV multi-modal tracking[J]. Journal of Image and Graphics, 2025, 30(2): 361-374 (in Chinese).
[19] SONG K C, XUE X T, WEN H W, et al. Misaligned visible-thermal object detection: A drone-based benchmark and baseline[J]. IEEE Transactions on Intelligent Vehicles, 2024: 1-12.
[20] FAN Y J, LI W B. The development stage and application scenarios of China's low-altitude economy[J]. China Price Journal, 2024(4): 98-103 (in Chinese).
[21] SHAMSOSHOARA A, AFGHAH F, RAZI A, et al. Aerial imagery pile burn detection using deep learning: The FLAME dataset[J]. Computer Networks, 2021, 193: 108001.
[22] LIU Y Q, ZHENG C G, LIU X D, et al. Forest fire monitoring method based on UAV visual and infrared image fusion[J]. Remote Sensing, 2023, 15(12): 3173.
[23] LI R Z, WANG Z G, SUN H Q, et al. Automatic identification of earth rock embankment piping hazards in small and medium rivers based on UAV thermal infrared and visible images[J]. Remote Sensing, 2023, 15(18): 4492.
[24] RUI X, LI Z Q, ZHANG X Y, et al. A RGB-Thermal based adaptive modality learning network for day-night wildfire identification[J]. International Journal of Applied Earth Observation and Geoinformation, 2023, 125: 103554.
[25] KULARATNE W, et al. FireMan-UAV-RGBT. 1.0.0-beta[DB/OL]. (2024-07-18)[2024-09-29].
[26] YETGIN Ö E, GEREK Ö N. Powerline image dataset (infrared-IR and visible light-VL)[DB/OL]. (2019-06-26)[2024-09-29].
[27] HOU Y, CHEN M D, VOLK R, et al. An approach to semantically segmenting building components and outdoor scenes based on multichannel aerial imagery datasets[J]. Remote Sensing, 2021, 13(21): 4357.
[28] KAHN J, MAYER Z, HOU Y, et al. Hyperspectral (RGB + Thermal) drone images of Karlsruhe, Germany-raw images for the Thermal Bridges on Building Rooftops (TBBR) dataset (v1.1)[DB/OL]. (2022-11-25)[2024-09-29].
[29] NOORALISHAHI P, RAMOS G, POZZER S, et al. Texture analysis to enhance drone-based multi-modal inspection of structures[J]. Drones, 2022, 6(12): 407.
[30] XU C, LI Q W, JIANG X B, et al. Dual-space graph-based interaction network for RGB-thermal semantic segmentation in electric power scene[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(4): 1577-1592.
[31] XU X, LIU G, BAVIRISETTI D P, et al. Fast detection fusion network (FDFnet): An end to end object detection framework based on heterogeneous image fusion for power facility inspection[J]. IEEE Transactions on Power Delivery, 2022, 37(6): 4496-4505.
[32] CHOI H, YUN J P, KIM B J, et al. Attention-based multimodal image feature fusion module for transmission line detection[J]. IEEE Transactions on Industrial Informatics, 2022, 18(11): 7686-7695.
[33] ZHANG C, ZOU Y, DIMYADI J, et al. Thermal-textured BIM generation for building energy audit with UAV image fusion and histogram-based enhancement[J]. Energy and Buildings, 2023, 301: 113710.
[34] WANG P J, XIAO J Z, QIANG X X, et al. An automatic building façade deterioration detection system using infrared-visible image fusion and deep learning[J]. Journal of Building Engineering, 2024, 95: 110122.
[35] ZHAO Z X, BAI H W, ZHU Y Z, et al. DDFM: Denoising diffusion model for multi-modality image fusion[C]∥2023 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2023: 8048-8059.
[36] LI H, WU X J. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[37] LI H, WU X J, DURRANI T. NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.
[38] LI H, WU X J, KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86.
[39] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]∥European Conference on Computer Vision. Cham: Springer, 2014: 740-755.
[40] XU H, ZHANG H, MA J Y. Classification saliency-based rule for visible and infrared image fusion[J]. IEEE Transactions on Computational Imaging, 2021, 7: 824-836.
[41] JIAN L H, YANG X M, LIU Z, et al. SEDRFuse: A symmetric encoder-decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 5002215.
[42] LIU J Y, FAN X, JIANG J, et al. Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105-119.
[43] ZHAO Z X, XU S, ZHANG J S, et al. Efficient and model-based infrared and visible image fusion via algorithm unrolling[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(3): 1186-1196.
[44] ZHAO Z X, XU S, ZHANG C X, et al. DIDFuse: Deep image decomposition for infrared and visible image fusion[DB/OL]. arXiv preprint: 2003.09210, 2020.
[45] ZHAO Z X, BAI H W, ZHANG J S, et al. CDDFuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2023: 5906-5916.
[46] XU H, GONG M Q, TIAN X, et al. CUFD: An encoder-decoder network for visible and infrared image fusion based on common and unique feature decomposition[J]. Computer Vision and Image Understanding, 2022, 218: 103407.
[47] XU H, MEI X G, FAN F, et al. Information decomposition and quality guided infrared and visible image fusion[J]. Journal of Image and Graphics, 2022, 27(11): 3316-3330 (in Chinese).
[48] XU H, WANG X Y, MA J Y. DRF: Disentangled representation for visible and infrared image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5006713.
[49] LUO X Q, GAO Y H, WANG A Q, et al. IFSepR: A general framework for image fusion based on separate representation learning[J]. IEEE Transactions on Multimedia, 2021, 25: 608-623.
[50] WANG Z S, WANG J Y, WU Y Y, et al. UNFusion: A unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3360-3374.
[51] XU H, MA J, JIANG J, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.
[52] LONG Y Z, JIA H T, ZHONG Y D, et al. RXDNFuse: A aggregated residual dense network for infrared and visible image fusion[J]. Information Fusion, 2021, 69: 128-141.
[53] HOU R C, ZHOU D M, NIE R C, et al. VIF-Net: An unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-651.
[54] ZHAO F, ZHAO W D, YAO L B, et al. Self-supervised feature adaption for infrared and visible image fusion[J]. Information Fusion, 2021, 76: 189-203.
[55] LIANG P, JIANG J, LIU X, et al. Fusion from decomposition: A self-supervised decomposition approach for image fusion[C]∥European Conference on Computer Vision. Cham: Springer, 2022: 719-735.
[56] LI J T, NIE R C, CAO J D, et al. LRFE-CL: A self-supervised fusion network for infrared and visible image via low redundancy feature extraction and contrastive learning[J]. Expert Systems with Applications, 2024, 251: 124125.
[57] WANG X, GUAN Z, QIAN W, et al. CS2Fusion: Contrastive learning for self-supervised infrared and visible image fusion by estimating feature compensation map[J]. Information Fusion, 2024, 102: 102039.
[58] MA J Y, TANG L F, XU M L, et al. STDFusionNet: An infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 5009513.
[59] ZHU D P, ZHAN W D, JIANG Y C, et al. IPLF: A novel image pair learning fusion network for infrared and visible image[J]. IEEE Sensors Journal, 2022, 22(9): 8808-8817.
[60] WANG X, GUAN Z, YU S S, et al. Infrared and visible image fusion via decoupling network[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 2521213.
[61] TANG L F, YUAN J T, MA J Y. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network[J]. Information Fusion, 2022, 82: 28-42.
[62] SUN Y M, CAO B, ZHU P F, et al. DetFusion: A detection-driven infrared and visible image fusion network[C]∥Proceedings of the 30th ACM International Conference on Multimedia. New York: ACM, 2022: 4003-4011.
[63] TANG L F, ZHANG H, XU H, et al. Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity[J]. Information Fusion, 2023, 99: 101870.
[64] LIU X W, HUO H T, LI J, et al. A semantic-driven coupled network for infrared and visible image fusion[J]. Information Fusion, 2024, 108: 102352.
[65] ZHAO W D, XIE S G, ZHAO F, et al. MetaFusion: Infrared and visible image fusion via meta-feature embedding from object detection[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2023: 13955-13965.
[66] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[DB/OL]. arXiv preprint: 2010.11929, 2020.
[67] WANG Z, CHEN Y, SHAO W, et al. SwinFuse: A residual swin transformer fusion network for infrared and visible images[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 5016412.
[68] GUO J Y, HAN K, WU H, et al. CMT: Convolutional neural networks meet vision transformers[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2022: 12165-12175.
[69] YUAN K, GUO S P, LIU Z W, et al. Incorporating convolution designs into visual transformers[C]∥2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2021: 559-568.
[70] KHAN A, RAUF Z, SOHAIL A, et al. A survey of the vision transformers and their CNN-transformer based variants[J]. Artificial Intelligence Review, 2023, 56(3): 2917-2970.
[71] CHANG Z H, FENG Z X, YANG S Y, et al. AFT: Adaptive fusion transformer for visible and infrared images[J]. IEEE Transactions on Image Processing, 2023, 32: 2077-2092.
[72] LI J, YANG B, BAI L, et al. TFIV: Multigrained token fusion for infrared and visible image via transformer[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 2526414.
[73] MA J Y, TANG L F, FAN F, et al. SwinFusion: Cross-domain long-range learning for general image fusion via swin transformer[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 9(7): 1200-1217.
[74] CHEN J, DING J F, YU Y, et al. THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor[J]. Neurocomputing, 2023, 527: 71-82.
[75] LI B C, LU J X, LIU Z F, et al. SBIT-Fuse: Infrared and visible image fusion based on symmetrical bilateral interaction and transformer[J]. Infrared Physics & Technology, 2024, 138: 105269.
[76] PARK S, VIEN A G, LEE C. Cross-modal transformers for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(2): 770-785.
[77] TANG W, HE F Z, LIU Y. TCCFusion: An infrared and visible image fusion method based on transformer and cross correlation[J]. Pattern Recognition, 2023, 137: 109295.
[78] JIN H Y, YANG Y, SU H N, et al. Low light RGB and IR image fusion with selective CNN-transformer network[C]∥2023 IEEE International Conference on Image Processing (ICIP). Piscataway: IEEE Press, 2023: 1255-1259.
[79] ZHAO Y Y, ZHENG Q C, ZHU P H, et al. TUFusion: A transformer-based universal fusion algorithm for multimodal images[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(3): 1712-1725.
[80] LI J, ZHU J M, LI C, et al. CGTF: Convolution-guided transformer for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 5012314.
[81] TANG W, HE F Z, LIU Y, et al. DATFuse: Infrared and visible image fusion via dual attention transformer[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(7): 3159-3172.
[82] YANG X, HUO H T, WANG R H, et al. DGLT-Fusion: A decoupled global-local infrared and visible image fusion transformer[J]. Infrared Physics & Technology, 2023, 128: 104522.
[83] WANG Z S, YANG F, SUN J, et al. AITFuse: Infrared and visible image fusion via adaptive interactive transformer learning[J]. Knowledge-Based Systems, 2024, 299: 111949.
[84] MA J Y, YU W, LIANG P W, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[85] FU Y, WU X J, DURRANI T. Image fusion based on generative adversarial network consistent with perception[J]. Information Fusion, 2021, 72: 110-125.
[86] MA J Y, ZHANG H, SHAO Z F, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 5005014.
[87] MA J Y, LIANG P W, YU W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 2020, 54: 85-98.
[88] RAO Y J, WU D, HAN M N, et al. AT-GAN: A generative adversarial network with attention and transition for infrared and visible image fusion[J]. Information Fusion, 2023, 92: 336-349.
[89] CHANG L, HUANG Y D, LI Q F, et al. DUGAN: Infrared and visible image fusion based on dual fusion paths and a U-type discriminator[J]. Neurocomputing, 2024, 578: 127391.
[90] MA J Y, XU H, JIANG J J, et al. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.
[91] ZHOU H B, WU W, ZHANG Y D, et al. Semantic-supervised infrared and visible image fusion via a dual-discriminator generative adversarial network[J]. IEEE Transactions on Multimedia, 2021, 25: 635-648.
[92] ZHANG H, YUAN J T, TIAN X, et al. GAN-FM: Infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators[J]. IEEE Transactions on Computational Imaging, 2021, 7: 1134-1147.
[93] LI J, HUO H T, LI C, et al. AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks[J]. IEEE Transactions on Multimedia, 2020, 23: 1383-1396.
[94] LIU J Y, FAN X, HUANG Z B, et al. Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2022: 5792-5801.
[95] SONG A Y, DUAN H X, PEI H D, et al. Triple-discriminator generative adversarial network for infrared and visible image fusion[J]. Neurocomputing, 2022, 483: 183-194.
[96] REN L, PAN Z B, CAO J Z, et al. Infrared and visible image fusion based on variational auto-encoder and infrared feature compensation[J]. Infrared Physics & Technology, 2021, 117: 103839.
[97] DUFFHAUSS F, VIEN N A, ZIESCHE H, et al. FusionVAE: A deep hierarchical variational autoencoder for RGB image fusion[C]∥European Conference on Computer Vision. Cham: Springer, 2022: 674-691.
[98] YAN Z H, ZHOU Z B, LI X C. Survey on generative diffusion model[J]. Computer Science, 2024, 51(1): 273-283 (in Chinese).
[99] CAO H Q, TAN C, GAO Z Y, et al. A survey on generative diffusion models[J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(7): 2814-2830.
[100] CROITORU F A, HONDRU V, IONESCU R T, et al. Diffusion models in vision: A survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(9): 10850-10869.
[101] YANG L, ZHANG Z L, SONG Y, et al. Diffusion models: A comprehensive survey of methods and applications[J]. ACM Computing Surveys, 2023, 56(4): 1-39.
[102] YI X P, TANG L F, ZHANG H, et al. Diff-IF: Multi-modality image fusion via diffusion model with fusion knowledge prior[J]. Information Fusion, 2024, 110: 102450.
[103] LI M N, PEI R H, ZHENG T Y, et al. FusionDiff: Multi-focus image fusion using denoising diffusion probabilistic models[J]. Expert Systems with Applications, 2024, 238: 121664.
[104] YUE J, FANG L Y, XIA S B, et al. Dif-Fusion: Toward high color fidelity in infrared and visible image fusion with diffusion models[J]. IEEE Transactions on Image Processing, 2023, 32: 5705-5720.
[105] TANG L F, DENG Y X, YI X P, et al. DRMF: Degradation-robust multi-modal image fusion via composable diffusion prior[C]∥Proceedings of the 32nd ACM International Conference on Multimedia. New York: ACM, 2024: 8546-8555.
[106] YANG B, JIANG Z H, PAN D, et al. LFDT-Fusion: A latent feature-guided diffusion transformer model for general image fusion[J]. Information Fusion, 2025, 113: 102639.
[107] SUN B, GAO Y X, ZHUGE W W, et al. Analysis of quality objective assessment metrics for visible and infrared image fusion[J]. Journal of Image and Graphics, 2023, 28(1): 144-155 (in Chinese).
[108] HAN Y, CAI Y Z, CAO Y, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013, 14(2): 127-135.