Acta Aeronautica et Astronautica Sinica > 2022, Vol. 43 Issue (S1): 726948-726948   doi: 10.7527/S1000-6893.2022.26948

Image stitching enhancement method with symmetrical constraint of local feature points

ZHONG Mengfan, PEI Jihong

  1. College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518061, China
  • Received: 2022-01-13  Revised: 2022-01-13  Published: 2022-02-17
  • Corresponding author: PEI Jihong, E-mail: jhpei@szu.edu
  • Supported by:
    National Natural Science Foundation of China (62071303, 61871269); Guangdong Basic and Applied Basic Research Foundation (2019A1515011861); Shenzhen Science and Technology Program (JCYJ20190808151615540); China Postdoctoral Science Foundation (2021M702275)

Image stitching enhancement method with symmetrical constraint of local feature points

ZHONG Mengfan, PEI Jihong   

  1. College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518061, China
  • Received:2022-01-13 Revised:2022-01-13 Published:2022-02-17
  • Supported by:
    National Natural Science Foundation of China (62071303, 61871269); Guangdong Basic and Applied Basic Research Foundation (2019A1515011861); Shenzhen Science and Technology Program (JCYJ20190808151615540); China Postdoctoral Science Foundation (2021M702275)

Abstract: This paper proposes an image stitching enhancement method with a symmetrical constraint of local feature points. First, the forward local constraint region of each feature point is computed from the camera internal and external parameter matrices, yielding the set of forward matching point pairs. The reverse local constraint region is then computed to obtain the set of reverse matching point pairs. The intersection of the forward and reverse sets gives the final set of correct matching point pairs satisfying the symmetric constraint. This enhancement is combined with existing state-of-the-art stitching-model learning algorithms to solve for the optimal image transformation, and the stitched image is finally generated from that transformation model. The method removes the need to manually tune the RANSAC threshold when learning different models, thereby reducing the parameter sensitivity of the algorithm. Experimental results show that the method outperforms existing stitching algorithms both qualitatively and quantitatively.

Key words: computer vision, image stitching, local constraint, internal parameter matrix, feature point matching

Abstract: In this paper, an image stitching enhancement method with a symmetrical constraint of local feature points is proposed. Firstly, the forward local constraint region of each feature point is calculated from the internal and external parameter matrices of the camera, and the set of forward matching point pairs is obtained. Then, the reverse local constraint region is calculated to obtain the set of reverse matching point pairs. The intersection of the forward and reverse sets gives the final set of correct matching point pairs that satisfy the symmetric constraint. Combining this enhancement with existing state-of-the-art stitching-model learning algorithms yields the optimal image transformation, from which the stitched image is finally generated. The method eliminates the need to manually adjust the RANSAC threshold when learning different models, reducing the parameter sensitivity of the algorithm and improving the stitching accuracy. Experiments show that the proposed method is superior to existing stitching methods both qualitatively and quantitatively.
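The forward/reverse filtering and intersection step described above can be sketched in Python with OpenCV. This is only a minimal illustration, assuming SIFT-style keypoints and descriptors, known intrinsic matrices K1 and K2, a relative pose (R, t), and hypothetical parameters depth and radius standing in for the paper's construction of the local constraint region; it is not the authors' implementation.

import numpy as np
import cv2

def project_point(pt, K_src, K_dst, R, t, depth):
    """Map a pixel from the source view into the destination view using the
    known camera matrices, the relative pose (R, t) and a nominal scene depth.
    The nominal-depth projection is an illustrative assumption, not the
    paper's exact constraint-region construction."""
    ray = np.linalg.inv(K_src) @ np.array([pt[0], pt[1], 1.0])
    X = ray * depth                      # back-project to an assumed depth
    x = K_dst @ (R @ X + t)              # re-project into the other view
    return x[:2] / x[2]

def constrained_matches(kp_a, kp_b, des_a, des_b, K_a, K_b, R, t,
                        depth=10.0, radius=30.0):
    """One direction of local-constraint matching: keep a descriptor match
    (A -> B) only if the matched point in B falls inside the constraint
    region predicted from the camera parameters."""
    matches = cv2.BFMatcher(cv2.NORM_L2).match(des_a, des_b)
    kept = set()
    for m in matches:
        pred = project_point(kp_a[m.queryIdx].pt, K_a, K_b, R, t, depth)
        if np.linalg.norm(pred - np.asarray(kp_b[m.trainIdx].pt)) < radius:
            kept.add((m.queryIdx, m.trainIdx))
    return kept

def symmetric_matches(kp1, kp2, des1, des2, K1, K2, R, t):
    """Forward and reverse constrained matching; only pairs surviving both
    directions satisfy the symmetric constraint."""
    fwd = constrained_matches(kp1, kp2, des1, des2, K1, K2, R, t)
    rev = constrained_matches(kp2, kp1, des2, des1, K2, K1,
                              R.T, -R.T @ t)     # inverse relative pose
    rev = {(i, j) for (j, i) in rev}             # align index order with fwd
    return fwd & rev                             # intersection = final matches

Because the surviving pairs already satisfy the symmetric constraint, they could be passed directly to a model-learning step (for example, cv2.findHomography with a plain least-squares fit, method 0) without hand-tuning a RANSAC inlier threshold, which is the parameter-sensitivity issue the abstract targets.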

Key words: computer vision, image stitching, local constraint, internal parameter matrix, feature point matching

CLC number: