CAI Sijing, WANG Yanyu. Improved DDeepLabV3+ semantic segmentation network[J]. Journal of Fujian University of Technology, 2024, 22(01): 95-102. [doi:10.3969/j.issn.2097-3853.2024.01.014]

Improved DDeepLabV3+ semantic segmentation network

Journal of Fujian University of Technology [ISSN: 2097-3853 / CN: 35-1351/Z]

Volume:
Vol. 22
Issue:
No. 01, 2024
Pages:
95-102
Publication date:
2024-02-25

Article Info

Title:
Improved DDeepLabV3+ semantic segmentation network
Author(s):
CAI Sijing, WANG Yanyu
School of Electronics, Electrical and Physics, Fujian University of Technology, Fuzhou 350118, China
Keywords:
semantic segmentation; SE attention module; DeepLabV3+ network
CLC number:
TP391.41
DOI:
10.3969/j.issn.2097-3853.2024.01.014
Document code:
A
Abstract:
To address the large number of parameters and the insufficient segmentation accuracy of semantic segmentation networks on mobile intelligent terminals, an improved DDeepLabV3+ network is proposed. First, a depthwise-separable MobileNet structure is adopted as the network backbone, which reduces the parameter count and model complexity and thereby effectively shortens the running time. Second, low-level features of the network are introduced to achieve multi-scale information fusion and to reduce the spatial information loss caused by downsampling. Finally, the ASPP structure of the network is redesigned in combination with an attention mechanism to improve the utilization of the extracted features. The optimized network significantly reduces computation time while maintaining high classification accuracy; its mean intersection over union (mIoU) improves by 2.37% on the Cityscapes dataset and by 2.13% on the CamVid dataset.
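As a rough illustration of the two building blocks the abstract mentions, the following PyTorch sketch shows a depthwise-separable convolution in the MobileNet style and a squeeze-and-excitation (SE) gate applied to concatenated ASPP branch features. It is not the authors' implementation: the channel widths, the dilation rates (6, 12, 18), the SE reduction ratio, and the omission of the ASPP image-pooling branch are all assumptions made for brevity.

# Hedged sketch (assumptions noted above), not the paper's released code.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv followed by 1x1 pointwise conv (MobileNet-style block)."""
    def __init__(self, in_ch, out_ch, stride=1, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pool -> two FC layers -> sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise reweighting of the feature maps

class SEASPP(nn.Module):
    """ASPP with parallel atrous branches; the concatenated output is gated by SE.
    Dilation rates follow the common DeepLabV3+ choice (assumption)."""
    def __init__(self, in_ch, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)] +
            [DepthwiseSeparableConv(in_ch, out_ch, dilation=r) for r in rates]
        )
        self.se = SEBlock(out_ch * (len(rates) + 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(self.se(feats))

# Usage (320 input channels is only an example backbone width):
# x = torch.randn(1, 320, 32, 32)
# y = SEASPP(320)(x)   # -> shape (1, 256, 32, 32)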

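The reported gains are in mean intersection over union (mIoU). Below is a minimal NumPy sketch of how that metric is typically computed from predicted and ground-truth label maps; the paper's exact evaluation protocol (ignored labels, class list) is not specified here and is therefore an assumption.

# Hedged illustration of the mIoU metric, not the paper's evaluation script.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """pred and target are integer label maps of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Example: two 2x2 label maps with 2 classes.
p = np.array([[0, 1], [1, 1]])
t = np.array([[0, 1], [0, 1]])
print(mean_iou(p, t, num_classes=2))  # IoU(0)=0.5, IoU(1)=2/3 -> mIoU ~ 0.583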

Last Update: 2024-02-25