Electronics and Electrical Engineering and Control

SAR image simulation method guided by geomorphic category information

  • Lingjie MENG,
  • Hongguang LI,
  • Xinjun LI
  • 1. School of Electronic Information Engineering, Beihang University, Beijing 100191, China
    2. Institute of Unmanned System, Beihang University, Beijing 100191, China

Received date: 2024-07-29

Revised date: 2024-09-07

Accepted date: 2024-10-09

Online published: 2024-10-29

Supported by

National Natural Science Foundation of China (62076019); National Key Research and Development Program of China (2022YFB3904303)

Cite this article

MENG Lingjie, LI Hongguang, LI Xinjun. SAR image simulation method guided by geomorphic category information[J]. Acta Aeronautica et Astronautica Sinica, 2025, 46(7): 331003-331003. DOI: 10.7527/S1000-6893.2024.31003

Abstract

Current deep learning SAR image simulation methods generally do not consider the feature differences between geomorphic categories in SAR images, which distorts the differentiation of landforms in the simulated images. To address this issue, this paper proposes a visible-to-SAR image translation algorithm guided by geomorphic category information. A geomorphic category extraction branch is designed, which uses an attention mechanism to collect geomorphic category information from multiple dimensions and guide SAR image simulation. An image content extraction branch is designed, which uses contrastive learning to enhance the network's ability to extract the content information shared by visible light and SAR images. An image generation module is designed to convert the content information into SAR images under the guidance of the geomorphic category information, so that the generated SAR images carry the features of the corresponding geomorphic categories; path regularization is used to subdivide the complete visible-to-SAR translation process, reducing the difficulty of implementation. A paired dataset of visible light and SAR images covering a variety of landforms is established. Experimental comparison on six evaluation metrics shows that the proposed algorithm outperforms other representative algorithms, improving structural similarity by at least 9.24%. In addition, the simulated SAR images are visually more realistic and effectively retain the features of the geomorphic categories.
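To make the three designed components concrete, the following is a minimal PyTorch sketch of how such a pipeline is commonly assembled: a channel-attention encoder for the geomorphic category branch, a PatchNCE-style contrastive loss (in the spirit of CUT) for the content information shared by the two modalities, and an AdaIN-style modulation block that injects the category code into the generator. All module names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' published architecture.

# A minimal sketch of the described pipeline; names and sizes are
# illustrative assumptions, not the authors' published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoryEncoder(nn.Module):
    """Geomorphic-category branch: squeeze-and-excitation style channel
    attention reweights pooled features into a category code."""
    def __init__(self, in_ch=3, dim=64, code_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(dim, dim * 2, 4, stride=2, padding=1), nn.ReLU(True),
        )
        self.attn = nn.Sequential(
            nn.Linear(dim * 2, dim // 2), nn.ReLU(True),
            nn.Linear(dim // 2, dim * 2), nn.Sigmoid(),
        )
        self.head = nn.Linear(dim * 2, code_dim)

    def forward(self, x):
        f = self.backbone(x)                     # B x C x H x W
        g = f.mean(dim=(2, 3))                   # global average pooling
        f = f * self.attn(g)[:, :, None, None]   # channel reweighting
        return self.head(f.mean(dim=(2, 3)))     # category code, B x code_dim

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """PatchNCE-style contrastive loss (as in CUT): each patch embedding
    of the generated SAR image (query) should match the embedding at the
    same location of the optical input (key); every other location in
    the same image serves as a negative."""
    q = F.normalize(feat_q.flatten(2).transpose(1, 2), dim=-1)  # B x N x D
    k = F.normalize(feat_k.flatten(2).transpose(1, 2), dim=-1)
    logits = torch.bmm(q, k.transpose(1, 2)) / tau              # B x N x N
    target = torch.arange(q.size(1), device=q.device).expand(q.size(0), -1)
    return F.cross_entropy(logits.reshape(-1, q.size(1)), target.reshape(-1))

class AdaINBlock(nn.Module):
    """Generator block: the category code modulates instance-normalized
    content features (AdaIN), so the synthesized texture follows the
    geomorphic category."""
    def __init__(self, ch, code_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.norm = nn.InstanceNorm2d(ch, affine=False)
        self.affine = nn.Linear(code_dim, 2 * ch)  # per-channel scale and shift

    def forward(self, h, code):
        gamma, beta = self.affine(code).chunk(2, dim=1)
        h = self.norm(self.conv(h))
        return F.relu(h * (1 + gamma[:, :, None, None]) + beta[:, :, None, None])

As a usage sketch, patch_nce_loss would be applied to encoder features of the input optical image and of the generated SAR image, while each AdaINBlock in the generator receives the code produced by CategoryEncoder; for evaluation, structural similarity between real and simulated SAR tiles can be computed with, e.g., skimage.metrics.structural_similarity.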
