Information Fusion

Infrared and visible image fusion based on unsupervised deep learning

  • SUN Xiuyi ,
  • HU Shaohai ,
  • MA Xiaole
  • 1. Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China;
    2. Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China

Received date: 2022-01-11

  Revised date: 2022-01-14

  Online published: 2022-04-06

Supported by

National Natural Science Foundation of China (62172030, 61771058); the Fundamental Research Funds for the Central Universities (2021JBM009)


Abstract

Most existing infrared and visible image fusion models based on convolutional neural networks do not fully exploit the hierarchical features of the visible source image, resulting in insufficient texture detail in the fused image. Inspired by residual and dense networks, an image fusion algorithm based on unsupervised deep learning is proposed to address this problem. The residual dense block used has a continuous memory mechanism that retains the feature information of each layer to the maximum extent, and the design of local and global residual fusion facilitates learning the structural texture of the image. In addition, to better preserve the detailed texture of the visible image, a generative adversarial network is introduced to train on the dataset without supervision. Subjective and objective experiments show that the algorithm not only achieves a good visual fusion effect, with the fused image containing more edge texture information, but also improves considerably over existing state-of-the-art algorithms on objective evaluation metrics.
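The residual dense block described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: 1×1 channel mixing stands in for the paper's convolutions, all weights are random placeholders, and the layer count and growth rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix(x, out_ch):
    # Hypothetical 1x1 convolution (pure channel mixing) with random
    # placeholder weights; a real model would learn these.
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return x @ w

def residual_dense_block(x, growth=16, n_layers=3):
    # "Continuous memory": every layer receives the concatenation of the
    # block input and all preceding layer outputs.
    feats = [x]
    for _ in range(n_layers):
        cat = np.concatenate(feats, axis=-1)
        feats.append(np.maximum(mix(cat, growth), 0.0))  # conv + ReLU
    # Local feature fusion (1x1 conv back to the input width),
    # then local residual learning.
    fused = mix(np.concatenate(feats, axis=-1), x.shape[-1])
    return fused + x

x = rng.standard_normal((8, 8, 16))   # H x W x C feature map
y = residual_dense_block(x)
out = y + x                           # global residual connection
print(out.shape)                      # (8, 8, 16)
```

Because both the local and global paths end in an additive skip, the block preserves the feature-map shape, so several blocks can be chained before the final fusion layer.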

Cite this article

SUN X Y, HU S H, MA X L. Infrared and visible image fusion based on unsupervised deep learning[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(S1): 726938 (in Chinese). DOI: 10.7527/S1000-6893.2022.26938

