DING Renji, CHEN Bingsan. Tomato recognition and detection algorithm based on improved YOLOv5[J]. Journal of Fujian University of Technology, 2023, 21(06): 585-591. [doi:10.3969/j.issn.1672-4348.2023.06.012]

Tomato recognition and detection algorithm based on improved YOLOv5

Journal of Fujian University of Technology [ISSN: 2097-3853 / CN: 35-1351/Z]

Volume: Vol. 21
Issue: No. 06, 2023
Pages: 585-591
Publication date: 2023-12-25

Article Info

Title: Tomato recognition and detection algorithm based on improved YOLOv5
Author(s): DING Renji, CHEN Bingsan (丁仁集, 陈丙三)
Affiliation: School of Mechanical and Automotive Engineering, Fujian University of Technology
Keywords: tomato detection; improved YOLOv5; SimAM; CARAFE
CLC number: TP391.4
DOI: 10.3969/j.issn.1672-4348.2023.06.012
Document code: A
Abstract:
To improve the working efficiency of automatic picking robots in orchards and the accuracy of tomato fruit recognition, a tomato recognition and detection algorithm based on an improved YOLOv5 is proposed. Building on YOLOv5, the algorithm improves the BottleneckCSP module in the neck network, adding a batch normalization layer and the SiLU activation function to strengthen the network's ability to extract deep semantic information about the target. It adopts the lightweight universal upsampling operator CARAFE to enlarge the receptive field, reducing missed detections while keeping the model lightweight. The lightweight attention mechanism SimAM provides three-dimensional attention weights for the network, filtering out redundant information and improving the model's accuracy and robustness. Replacing the CIoU loss function with SIoU effectively reduces redundant boxes while accelerating the convergence and regression of the predicted boxes. Experimental results show that the improved algorithm achieves a mean average precision of 96.5% for tomato detection, 3.4 percentage points higher than the original algorithm; the miss rate for small tomatoes and occluded tomatoes is also effectively reduced, and real-time requirements are met.
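The SimAM module mentioned in the abstract is parameter-free: it scores each neuron by an energy function and gates the feature map with a sigmoid of the inverse energy, yielding a full 3-D (channel × height × width) attention weight. A minimal NumPy sketch of that mechanism, following the formulation in the cited SimAM paper ([8]); the function name `simam` and the default `e_lambda = 1e-4` regularizer are illustrative, not from this article:

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM attention over a feature map.

    x: array of shape (N, C, H, W). Returns the features reweighted by a
    per-neuron 3-D attention weight (one weight per channel and position).
    """
    n = x.shape[2] * x.shape[3] - 1                # H*W - 1
    mu = x.mean(axis=(2, 3), keepdims=True)        # spatial mean per channel
    d = (x - mu) ** 2                              # squared deviation of each neuron
    v = d.sum(axis=(2, 3), keepdims=True) / n      # spatial variance estimate
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5       # inverse of the minimal energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))      # sigmoid-gated reweighting
```

Neurons that deviate strongly from their channel mean get a larger `e_inv` and hence a weight closer to 1, so distinctive activations are emphasized and near-mean (redundant) ones are attenuated; because the module has no learnable parameters, inserting it into the YOLOv5 neck adds essentially no model size.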

References:

[1] 黄彤镔,黄河清,李震,等. 基于YOLOv5改进模型的柑橘果实识别方法[J]. 华中农业大学学报, 2022, 41(4): 170-177.
[2] 王海楠,弋景刚,张秀花. 番茄采摘机器人识别与定位技术研究进展[J]. 中国农机化学报, 2020, 41(5): 188-196.
[3] 张文静,赵性祥,丁睿柔,等. 基于Faster R-CNN算法的番茄识别检测方法[J]. 山东农业大学学报(自然科学版), 2021, 52(4): 624-630.
[4] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[5] 成伟,张文爱,冯青春,等. 基于改进YOLOv3的温室番茄果实识别估产方法[J]. 中国农机化学报, 2021, 42(4): 176-182. DOI:10.13733/j.jcam.issn.2095-5553.2021.04.25.
[6] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL]. [2018-04-08]. https://arxiv.org/abs/1804.02767.
[7] WANG J Q, CHEN K, XU R, et al. CARAFE: content-aware ReAssembly of FEatures[C]∥2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE, 2019: 3007-3016.
[8] YANG L, ZHANG R Y, LI L, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]∥International Conference on Machine Learning, 2021: 11863-11874.
[9] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[EB/OL]. [2022]. https://arxiv.org/abs/2205.12740.
[10] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[M]∥Computer Vision - ECCV 2016. Cham: Springer International Publishing, 2016: 21-37.

Similar References:

[1] YU Lu, DAI Tianjie, YU Lihua. Automatic detection of pavement defect based on improved YOLOv5 algorithm[J]. Journal of Fujian University of Technology, 2023, 21(04): 332. [doi:10.3969/j.issn.1672-4348.2023.04.005]

Last Update: 2023-12-25