Design and Implementation of a Vision-Guided Soft Gripper Robotic System for Static and Dynamic Fruit Grasping
DOI: https://doi.org/10.56028/aetr.15.1.1488.2025

Keywords: soft robotics, monocular vision, pneumatic gripper, robotic grasping, multi-language control.

Abstract
This study develops a vision-guided robotic system for grasping both stationary and moving fruits using a custom-fabricated silicone soft pneumatic gripper. The design integrates monocular vision for object detection and localization, pixel-to-world calibration, and an “eye-in-hand” hand–eye transformation to enable accurate grasping. A multi-language modular architecture—Python for vision, MATLAB for calibration, and C++ for control—coordinates perception and actuation. Finite element analysis verified the gripper’s predictable deformation under pressure, and real-time edge-based detection achieved consistent fruit localization in semi-structured settings. Experimental results demonstrate reliable grasp execution in static and dynamic conditions. While current limitations include fixed-height depth assumptions and open-loop control, future enhancements such as stereo vision, visual servoing, and improved gripper geometry are proposed to increase adaptability and precision. This work highlights the potential of combining soft robotics and computer vision for adaptive manipulation in agricultural and industrial automation.
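The pixel-to-world calibration under the fixed-height assumption mentioned above amounts to a plane-to-plane mapping, which can be fit as a homography. The following is a minimal, numpy-only sketch (not the paper's MATLAB implementation); the four pixel/world correspondences are hypothetical checkerboard corners, and the function names are illustrative:

```python
import numpy as np

def fit_homography(px, wld):
    """Direct linear transform: fit a 3x3 homography H mapping pixel
    coordinates to world-plane coordinates from >= 4 correspondences.
    Valid only for a planar workspace (fixed table height)."""
    A = []
    for (u, v), (x, y) in zip(px, wld):
        # Each correspondence gives two linear constraints on H's entries.
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # Null-space solution via SVD (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pixel_to_world(H, u, v):
    """Apply H to a pixel and dehomogenize to world (x, y)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Hypothetical calibration data: four image corners of a known
# rectangle on the table, with world coordinates in millimetres.
px  = [(100, 100), (500, 100), (500, 400), (100, 400)]
wld = [(0, 0), (200, 0), (200, 150), (0, 150)]
H = fit_homography(px, wld)
print(pixel_to_world(H, 300, 250))  # world (x, y) in mm for this pixel
```

Because depth is fixed, this single 3x3 matrix replaces full 3D reconstruction; swapping in stereo vision, as the abstract proposes, would lift that restriction.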