[1] LI Shaowei, ZHONG Yong, YANG Huashan, et al. Mapping by integrating LiDAR and RGB-D camera[J]. Journal of Fujian University of Technology, 2023, 21(06): 551-557. [doi:10.3969/j.issn.1672-4348.2023.06.007]

Mapping by integrating LiDAR and RGB-D camera

Journal of Fujian University of Technology [ISSN: 2097-3853 / CN: 35-1351/Z]

Volume:
Vol. 21
Issue:
No. 06, 2023
Pages:
551-557
Publication date:
2023-12-25

Article Info

Title:
Mapping by integrating LiDAR and RGB-D camera
Author(s):
LI Shaowei, ZHONG Yong, YANG Huashan, ZHANG Shu, FAN Zhouhui
Fujian Provincial Key Laboratory of Automotive Electronics and Electric Drive Technology (Fujian University of Technology)
Keywords:
intelligent vehicles; RTABMAP algorithm; SLAM; sensor integration
CLC number:
TP242.6
DOI:
10.3969/j.issn.1672-4348.2023.06.007
Document code:
A
Abstract:
To address the problem that simultaneous localization and mapping (SLAM) with a single sensor cannot accurately map complex unknown environments for intelligent vehicles, an RTABMAP algorithm was proposed for mapping by integrating LiDAR and an RGB-D camera. The algorithm collects data from the LiDAR, the RGB-D camera, and the odometer, stores them in nodes of a memory management mechanism, and extracts features from these nodes. Node weights are updated by matching the number of visual words between nodes, and discrete Bayesian filter estimation is used for loop-closure detection to optimize the local map and finally construct the global map. Experiments were carried out on an intelligent vehicle equipped with the open-source Robot Operating System (ROS). Results show that, compared with LiDAR-only mapping and RGB-D camera mapping, the proposed method improves the obstacle detection rate by 30.75% and 18.63%, reduces the map size error by 0.013 m and 0.150 m, and reduces the angle error by 3° and 1°, respectively.
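The discrete Bayesian filter loop-closure step summarized in the abstract can be sketched as follows. This is a minimal illustration in the spirit of RTABMAP, not the authors' implementation: the function names, and the mapping from visual-word match counts to a likelihood, are assumptions made for the example.

```python
def likelihood_from_matches(match_counts):
    """Map visual-word match counts per stored node to a (hypothetical)
    likelihood: each count is scored relative to the mean count and
    floored at 1.0 so unmatched nodes keep nonzero probability."""
    mean = sum(match_counts) / len(match_counts)
    if mean == 0:
        return [1.0] * len(match_counts)
    return [max(1.0, c / mean) for c in match_counts]

def bayes_filter_update(belief, likelihood):
    """One discrete Bayes filter step: posterior ∝ likelihood × prior,
    normalized so the belief over loop-closure hypotheses sums to 1."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    if total == 0:
        # Degenerate case: fall back to a uniform belief.
        return [1.0 / len(belief)] * len(belief)
    return [p / total for p in posterior]

# Usage: four stored map nodes, uniform prior; node 1 shares many
# visual words with the current frame, so it becomes the loop-closure
# candidate after the update.
lik = likelihood_from_matches([5, 40, 3, 2])
belief = bayes_filter_update([0.25, 0.25, 0.25, 0.25], lik)
```

In the full system a loop closure is accepted only when the posterior of some node exceeds a threshold, after which the local map is re-optimized.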


Last Update: 2023-12-25