GUI Hao, ZHANG Qingyong, YUAN Yiqing. YOLACT network-based algorithm of visual SLAM of mobile robots[J]. Journal of Fujian University of Technology, 2024, 22(01): 65-73. [doi:10.3969/j.issn.2097-3853.2024.01.010]

YOLACT network-based algorithm of visual SLAM of mobile robots

Journal of Fujian University of Technology [ISSN: 2097-3853 / CN: 35-1351/Z]

Volume:
Vol. 22
Issue:
2024(01)
Pages:
65-73
Publication date:
2024-02-25

Article Info

Title:
YOLACT network-based algorithm of visual SLAM of mobile robots
Author(s):
GUI Hao, ZHANG Qingyong, YUAN Yiqing
Affiliation:
School of Mechanical and Automotive Engineering, Fujian University of Technology
Keywords:
instance segmentation network; SLAM; multi-view geometry; dynamic scenes; static dense maps
CLC number:
TP242.6
DOI:
10.3969/j.issn.2097-3853.2024.01.010
Document code:
A
Abstract:
A visual SLAM algorithm for indoor dynamic scenes is proposed. The instance segmentation network YOLACT is introduced to eliminate most of the dynamic points, and multi-view geometry is used to further filter out the dynamic feature points outside the segmentation mask that were not eliminated. The remaining static feature points are used for camera pose estimation. Meanwhile, a point cloud map is constructed and converted into an octree map, and background inpainting is used to restore the background behind the removed dynamic objects. To verify its effectiveness, the algorithm was tested on the TUM dataset and compared with ORB-SLAM2 and other SLAM algorithms that handle dynamic scenes. The results show that the proposed algorithm performs well on highly dynamic sequences: compared with ORB-SLAM2, its positioning accuracy in indoor dynamic scenes is improved by 93.06%, making it applicable to subsequent robot localization and navigation.
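The abstract describes a two-stage filtering of feature points: points falling inside YOLACT's segmentation masks of potentially dynamic objects are discarded first, and the remaining matches are then checked with multi-view geometry (the epipolar constraint) to catch dynamic points outside the masks. The sketch below is a hypothetical illustration of that idea, not the authors' code; the function name, the pixel-distance threshold, and the numpy data layout are all assumptions.

```python
import numpy as np

def filter_dynamic_points(pts1, pts2, dynamic_mask, F, epi_thresh=1.0):
    """Keep only matches judged static.

    pts1, pts2   : (N, 2) matched pixel coordinates in two frames
    dynamic_mask : (H, W) boolean array, True where the instance
                   segmentation labels a potentially dynamic object
    F            : (3, 3) fundamental matrix between the two frames
    epi_thresh   : max point-to-epipolar-line distance (pixels) for a
                   match to count as static
    """
    kept = []
    for p1, p2 in zip(pts1, pts2):
        x, y = int(round(p2[0])), int(round(p2[1]))
        # Stage 1: discard points inside a dynamic-object mask.
        if dynamic_mask[y, x]:
            continue
        # Stage 2: multi-view geometry check. The epipolar line of p1
        # in the second image is l = F @ p1_h; a static point should
        # lie close to it.
        p1_h = np.array([p1[0], p1[1], 1.0])
        p2_h = np.array([p2[0], p2[1], 1.0])
        l = F @ p1_h
        dist = abs(p2_h @ l) / np.hypot(l[0], l[1])
        if dist <= epi_thresh:
            kept.append((p1, p2))
    return kept
```

In a full pipeline the surviving matches would feed the pose solver, as in the paper's use of static points for camera pose estimation.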

References:

[1] CADENA C, CARLONE L, CARRILLO H, et al. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
[2] MUR-ARTAL R, MONTIEL J M, TARDOS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[3] DAI W C, ZHANG Y, LI P, et al. RGB-D SLAM in dynamic environments using point correlations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 373-389.
[4] ZHANG T W, ZHANG H Y, LI Y, et al. FlowFusion: dynamic dense RGB-D SLAM based on optical flow[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). Paris, France: IEEE, 2020: 7322-7328.
[5] BESCOS B, FACIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[6] CHANG Z Y, WU H L, SUN Y L, et al. RGB-D visual SLAM based on Yolov4-tiny in indoor dynamic environment[J]. Micromachines, 2022, 13(2): 230.
[7] WEN S H, LI P J, ZHAO Y J, et al. Semantic visual SLAM in dynamic environment[J]. Autonomous Robots, 2021, 45(4): 493-504.
[8] FANG Lijin, LIU Bo, WAN Yingcai. Dynamic scene semantic SLAM based on deep learning[J]. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, 48(1): 121-126.
[9] LI A, WANG J K, XU M, et al. DP-SLAM: a visual SLAM with moving probability towards dynamic environments[J]. Information Sciences, 2021, 556: 128-142.
[10] FAN Y C, ZHANG Q C, TANG Y L, et al. Blitz-SLAM: a semantic SLAM in dynamic environments[J]. Pattern Recognition, 2022, 121: 108225.
[11] BOLYA D, ZHOU C, XIAO F Y, et al. YOLACT: real-time instance segmentation[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE, 2019: 9157-9166.
[12] YU C, LIU Z X, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1168-1174.
[13] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vilamoura-Algarve, Portugal: IEEE, 2012: 573-580.
[14] GRUPP M. evo: Python package for the evaluation of odometry and SLAM[EB/OL]. [2022-01-20]. https://github.com/MichaelGrupp/evo.
[15] LIU Y B, MIURA J. RDS-SLAM: real-time dynamic SLAM using semantic segmentation methods[J]. IEEE Access, 2021, 9: 23772-23785.


Last Update: 2024-02-25