Figure 3. IoT-based smart agriculture monitoring system.
3.3. Harvesting Robots
A harvesting robot is designed to pick fruit autonomously under specific climatic conditions. The development of vision-based harvesting robot mechanisms is still at an early stage. Agricultural robotic systems nevertheless share a comparable architecture: an autonomous mobile platform, a lightweight mechanical arm with multiple degrees of freedom, an adaptable end effector for a power response system, a multi-sensor machine vision system, a smart decision and drive management system, and supplementary hardware and software [33]. Kang et al., 2020 [34] developed a deep neural network to assist robotic apple harvesting, which detects and grasps
fruit in real time using a computer vision system. The proposed robotic harvesting system combines a customized soft end-effector with a main computer unit (a DELL-INSPIRATION machine with an Intel i7-6700 CPU and an NVIDIA GTX-1070 GPU), an Intel D-435 RGB-D camera, and a UR5 Universal Robots manipulator. The approach uses Mobile-DasNet, a computationally efficient, lightweight one-stage instance segmentation network, to perform fruit recognition and instance segmentation on the sensory input. An improved PointNet model was also developed to perform fruit modeling and grasp estimation on the point clouds from the RGB-D camera. These two components were integrated to
Agronomy 2022, 12, 127
develop and build a precise robotic system for autonomous fruit picking. The goal of the study was to improve the performance and robustness of the vision algorithm.
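The detection-plus-grasp-estimation pipeline can be illustrated with a minimal sketch: assuming pinhole camera intrinsics (the `FX`/`FY`/`CX`/`CY` values below are placeholders, not the calibration of the Intel D-435), masked depth pixels are deprojected to a point cloud, and a naive grasp pose is taken as the cloud centroid plus a PCA axis. This is a deliberately simple stand-in for the Mobile-DasNet and PointNet models of the cited work.

```python
import numpy as np

# Illustrative sketch only (not the authors' Mobile-DasNet/PointNet code):
# deproject the depth pixels under a detected fruit mask into a camera-frame
# point cloud, then estimate a naive grasp pose from its centroid and
# principal axis.

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # assumed pinhole intrinsics

def deproject(mask: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Convert masked depth pixels (meters) to an N x 3 camera-frame cloud."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack([x, y, z])

def grasp_pose(cloud: np.ndarray):
    """Return (centroid, approach_axis); the axis is the cloud's principal
    direction from PCA, a crude stand-in for a learned grasp estimate."""
    centroid = cloud.mean(axis=0)
    _, _, vt = np.linalg.svd(cloud - centroid, full_matrices=False)
    return centroid, vt[0]

# toy example: a rectangular mask over a flat depth plane 0.5 m away
mask = np.zeros((480, 640), dtype=bool)
mask[220:260, 300:340] = True
depth = np.full((480, 640), 0.5)
c, axis = grasp_pose(deproject(mask, depth))
```

In the real system, the mask would come from the instance segmentation network and the grasp pose from the improved PointNet model rather than from PCA.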
Furthermore, the proposed soft end-effector may improve the grasping success rate and effectiveness under various conditions. Ogorodnikova and Ali [35] devised a technique for recognizing ripe tomatoes in a greenhouse setting using a
machine vision system of a harvesting robot. The proposed image processing method takes RGB color images from a typical digital camera as input. These images are converted to HSV color space, which makes it easier to extract red tomatoes from the green backdrop. Segmentation, thresholding, and morphological operations then separate a red tomato from the green background of the color photograph. The algorithm is implemented with MATLAB functions and evaluated to verify that it produces favorable results. Because the procedure is simple and short, it can be translated into fast-acting code for the harvesting robot's controller. The research is limited to moving the gripper to the correct position once a tomato is detected; developing efficient algorithms with 3D gripper models remains necessary to turn the research system into an industrial robot.
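A minimal NumPy sketch of such an HSV-style red-tomato mask follows; the cited work used MATLAB, and the hue thresholds here are assumptions, not values from the study.

```python
import numpy as np

# Illustrative sketch of hue-based red/green separation: convert an RGB
# image to per-pixel hue and threshold the red band. Thresholds are
# assumed values, not taken from the cited MATLAB implementation.

def rgb_to_hue(img: np.ndarray) -> np.ndarray:
    """Per-pixel hue in degrees [0, 360) for a float RGB image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    diff = np.where(mx == mn, 1.0, mx - mn)  # avoid divide-by-zero
    return np.select(
        [mx == mn, mx == r, mx == g],
        [0.0, (60 * (g - b) / diff) % 360, 60 * (b - r) / diff + 120],
        default=60 * (r - g) / diff + 240,
    )

def red_mask(img: np.ndarray) -> np.ndarray:
    """True where hue falls in the red band (which wraps around 0 degrees)."""
    h = rgb_to_hue(img)
    return (h < 20) | (h > 340)

# toy scene: green backdrop with a red square standing in for a tomato
scene = np.zeros((50, 50, 3))
scene[..., 1] = 0.8                     # green backdrop
scene[10:30, 10:30] = [0.9, 0.1, 0.1]   # red "fruit"
mask = red_mask(scene)
```

In the paper's pipeline, morphological operations would follow the thresholding step to remove small specks before the tomato region is passed to the gripper controller.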
Only a few robotic devices that can successfully perform watering, planting, and weeding activities exist today. FaRo (Cultivating RObot), a new smart robot based on a CNC machine, has been proposed for automatic crop farming without human involvement. What sets FaRo apart from other farming platforms is its capability to complete the entire farming cycle, from sowing to harvesting; the study also presents the FaRo harvesting tool. FarmBot, by contrast, covers only part of the cycle, from sowing up to harvest, after which the robot's tool mount system must be exchanged for a crop-harvesting tool. In the reported example, the robot assumes the role of a tomato collector. Both the FaRo harvesting robot and the unique kinematics of its continuum manipulator design were thoroughly discussed. Due to implementation problems, the robot's design is still in the development stage. The objective of the proposed system is to build a model with an intelligent agricultural monitoring technique linked to a main database, so that the robot has sufficient information to plant and cultivate crops without human intervention [36].
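The CNC-style workflow of such gantry platforms can be sketched as a toy scheduler that turns crop records into tool-head waypoints; the `Crop` record, coordinates, and heights below are entirely hypothetical and are not taken from the FaRo or FarmBot systems.

```python
from dataclasses import dataclass

# Hypothetical sketch of a CNC-gantry harvesting pass: each ripe crop in
# the bed database yields a travel / descend / retract waypoint triple.
# All names and values are illustrative, not from the cited work.

@dataclass
class Crop:
    name: str
    x_mm: float
    y_mm: float
    ripe: bool

def harvest_plan(bed: list, z_pick_mm: float = -300.0) -> list:
    """Return gantry waypoints (x, y, z) visiting every ripe crop."""
    plan = []
    for crop in bed:
        if crop.ripe:
            plan.append((crop.x_mm, crop.y_mm, 0.0))        # travel height
            plan.append((crop.x_mm, crop.y_mm, z_pick_mm))  # descend to pick
            plan.append((crop.x_mm, crop.y_mm, 0.0))        # retract
    return plan

bed = [Crop("tomato", 100, 200, True), Crop("tomato", 400, 250, False)]
plan = harvest_plan(bed)
```

The point of the sketch is the database-driven design noted above: once crop positions and ripeness live in a central database, the gantry needs no human input to plan a pass over the bed.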
A depth vision-based approach for detecting and positioning truck containers, built on three coordinate systems, has been proposed for a joint harvesting system. The method includes data preprocessing, point cloud pose transformation using the SVD (singular value decomposition) algorithm, upper-edge segmentation and projection, edge-line extraction and corner-point positioning with the RANSAC (Random Sample Consensus) algorithm, and fusion and visualization of the results on the depth image. Field trials show that the suggested approach is effective in identifying and positioning vehicles. However, the study is restricted by its sensitivity to the appearance of truck containers and to cluttered sites in the agricultural area. Autonomous driving and path planning for the forage harvester's unloading system remain challenging [37].
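The RANSAC edge-line step can be illustrated with a minimal 2D sketch: fit a line to projected upper-edge points while rejecting outliers. The tolerance and iteration count below are assumptions, not values from the cited study.

```python
import random
import numpy as np

# Illustrative RANSAC line fit for container upper-edge extraction:
# repeatedly sample two points, hypothesize a line, and keep the line
# with the most inliers within a distance tolerance.

def ransac_line(pts: np.ndarray, n_iter: int = 200, tol: float = 0.05,
                seed: int = 0):
    """Return (point, direction, inlier_mask) of the best-supported line."""
    rng = random.Random(seed)
    best = (None, None, np.zeros(len(pts), dtype=bool))
    for _ in range(n_iter):
        i, j = rng.sample(range(len(pts)), 2)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d = d / norm
        # perpendicular distance of every point to the candidate line
        rel = pts - pts[i]
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = dist < tol
        if inliers.sum() > best[2].sum():
            best = (pts[i], d, inliers)
    return best

# synthetic upper edge: 30 collinear points plus two gross outliers
edge = np.array([[x, 2.0] for x in np.linspace(0, 5, 30)]
                + [[1.0, 4.0], [3.0, -1.0]])
p0, direction, inliers = ransac_line(edge)
```

In the described pipeline, the intersections of such fitted edge lines would give the container corner points, which are then fused back onto the depth image.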
Intelligent robots have become widely employed across sectors as the intelligent computing industry and automation expand. Currently, most domestic crops are still harvested by manual labor. However, owing to constant increases in worker pay, manual picking raises fruit farmers' costs, while applying robots in the farming business remains challenging. As a result, a smart mobile robot picker based on computer vision has been introduced. It integrates a robot arm, a selector, a flexible carrier, a track mechanism, and an intelligence unit, which together handle the robot picker's travel path coding and automatic judgment of ripe fruit; a vision-based binocular stereoscopic method is employed for recognition and placement. To begin with, precise segmentation, recognition, and maturity evaluation of the target fruit are required for proper picking; in this way, the robot picker may potentially replace human labor in manual picking. The most important part of the recognition process is gathering fruit image samples, which is performed with a CCD camera; the captured images are preprocessed to extract fruit features. A color model is then built, and segmentation technology separates the fruit from its surroundings before the fruit is recognized. Additionally, the system precisely traces
and moves to the fruit location using the three-dimensional coordinate information provided by the infrared source together with the fruit contours and image differences captured by the two cameras simultaneously. To complete the picking operation, the path must be programmed to recognize and avoid obstacles [38]. The overall architecture of IoT-based fruit identification for harvesting is shown in Figure 4.
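The disparity-to-depth relation underlying such binocular localization can be sketched as follows, assuming rectified cameras; the focal length, baseline, and principal-point values are illustrative only and are not taken from the cited system.

```python
import numpy as np

# Illustrative stereo triangulation: with rectified cameras, depth follows
# from disparity as Z = f * B / d, and lateral offsets follow from the
# pinhole model. Intrinsics and baseline below are assumed values.

F_PX = 800.0       # focal length in pixels (assumed)
BASELINE_M = 0.12  # distance between the two cameras (assumed)

def locate(u_left: float, u_right: float, v: float,
           cx: float = 320.0, cy: float = 240.0) -> np.ndarray:
    """Triangulate a fruit centroid seen at u_left/u_right on image row v."""
    d = u_left - u_right          # disparity in pixels
    z = F_PX * BASELINE_M / d     # depth along the optical axis
    x = (u_left - cx) * z / F_PX  # lateral offset
    y = (v - cy) * z / F_PX       # vertical offset
    return np.array([x, y, z])

# a fruit seen 48 px apart in the two views lies at depth 800*0.12/48 = 2 m
p = locate(u_left=400.0, u_right=352.0, v=240.0)
```

The resulting 3D coordinate is what the picker's path planner would consume when steering the arm toward the fruit while avoiding obstacles.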