Article Abstract
ZHAO Huamin, LAWAL Olarewaju, XU Defang. Study on Target Detection Model and Spatial Location of Greenhouse Muskmelon Automatic Picking System[J]. Guangdong Agricultural Sciences, 2022, 49(3): 151-162.
Study on Target Detection Model and Spatial Location of Greenhouse Muskmelon Automatic Picking System
  
DOI:10.16768/j.issn.1004-874X.2022.03.017
Keywords: muskmelon  object detection  YOLOResNet70  target spatial positioning  automatic picking
Funding: Scientific and Technological Innovation Project of Higher Education Institutions in Shanxi Province (2019L0402); Scientific Research Project of the Award Fund for Outstanding Doctors Working in Shanxi (SXYBKY2018030); Doctoral Research Start-up Project of Shanxi Agricultural University (2018YJ43)
Authors: ZHAO Huamin, LAWAL Olarewaju, XU Defang
Abstract:
      【Objective】This study aimed to improve the detection accuracy of a greenhouse muskmelon picking robot under complex lighting changes and occlusion by branches and leaves, and to realize spatial coordinate positioning of the detected targets.【Method】Based on YOLOv3, the effects of different combinations of backbone networks, head and neck network structures, and bounding-box loss functions on detection performance were studied, and YOLOResNet70, a target detection network model for severely occluded muskmelons, was established. The model was then fused with an Intel RealSense D435i depth sensor for target spatial positioning.【Result】With ResNet70 as the backbone network, YOLOResNet70 performed best when combined with SPP (spatial pyramid pooling), the CIoU (complete intersection over union) loss, FPN (feature pyramid network) and greedy NMS (non-maximum suppression): its average precision (AP) reached 89.4%, exceeding the 83.3% of YOLOv3 and the 82% of YOLOv5, and its detection speed of 61.8 frames/s was 14% faster than that of YOLOv4 (54.1 frames/s).【Conclusion】Detection tests on occluded muskmelon images under different lighting conditions show that YOLOResNet70 is robust. Fused with the Intel RealSense D435i depth sensor, the model obtains spatial positioning coordinates of muskmelons that agree with manual measurements, providing theoretical and model support for target detection and spatial positioning of muskmelon picking robots.
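The abstract describes fusing detector output with an Intel RealSense D435i depth sensor to obtain spatial coordinates, but the paper's implementation is not reproduced on this page. The following is a minimal Python sketch, assuming the pyrealsense2 SDK and a detector that returns pixel-space bounding boxes; the function name locate_target and the example box are illustrative, not from the paper.

```python
# Minimal sketch (not the authors' code): deprojecting a detected bounding-box
# centre to a 3D camera-frame coordinate with an Intel RealSense D435i.
import pyrealsense2 as rs

def locate_target(pipeline, align, box):
    """box = (x1, y1, x2, y2) in colour-image pixels from the detector."""
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)            # align depth to the colour stream
    depth_frame = aligned.get_depth_frame()
    if not depth_frame:
        return None

    # Use the box centre as the query pixel (a simplifying assumption).
    u = int((box[0] + box[2]) / 2)
    v = int((box[1] + box[3]) / 2)

    depth_m = depth_frame.get_distance(u, v)   # depth in metres at (u, v)
    if depth_m <= 0:
        return None                            # no valid depth reading

    intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    # Back-project pixel + depth into camera coordinates (X, Y, Z) in metres.
    return rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_m)

if __name__ == "__main__":
    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(cfg)
    align = rs.align(rs.stream.color)
    # 'box' would come from the YOLOResNet70 detector; a fixed box is used
    # here purely for illustration.
    print(locate_target(pipeline, align, box=(300, 200, 380, 280)))
    pipeline.stop()
```

The returned (X, Y, Z) values are in the camera frame; a real picking system would additionally transform them into the robot arm's coordinate frame via a hand-eye calibration, which is outside the scope of this sketch.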