
CN112673799B - Self-walking mowing system and outdoor walking equipment - Google Patents


Info

Publication number
CN112673799B
CN112673799B
Authority
CN
China
Prior art keywords
image
module
mowing
obstacle
walking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911409440.1A
Other languages
Chinese (zh)
Other versions
CN112673799A (en)
Inventor
陈伟鹏
杨德中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Chervon Industry Co Ltd
Original Assignee
Nanjing Chervon Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Chervon Industry Co Ltd filed Critical Nanjing Chervon Industry Co Ltd
Publication of CN112673799A
Application granted
Publication of CN112673799B
Legal status: Active

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 — … with means for defining a desired trajectory
    • G05D1/0221 — … involving a learning process
    • G05D1/0223 — … involving speed control of the vehicle
    • G05D1/0231 — … using optical position detecting means
    • G05D1/0238 — … using obstacle or wall sensors
    • G05D1/024 — … in combination with a laser
    • G05D1/0242 — … using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 — … using a video camera in combination with image processing means
    • G05D1/0255 — … using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257 — … using a radar
    • G05D1/0276 — … using signals provided by a source external to the vehicle
    • G05D1/0278 — … using satellite positioning signals, e.g. GPS
    • G05D1/0285 — … using signals transmitted via a public communication network, e.g. GSM network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Guiding Agricultural Machines (AREA)
  • Harvester Elements (AREA)

Abstract

The invention proposes a self-walking mowing system comprising: an execution mechanism, which includes a mowing assembly for realizing a mowing function and a walking assembly for realizing a walking function; a shell for supporting the execution mechanism; an image acquisition module capable of acquiring real-time images of the mowing area; a display module configured to display the real-time image or a simulated live-action image generated from it; a receiving module for receiving instructions input by a user; an obstacle generation module, which generates a first virtual obstacle identifier according to the user's instruction so as to form a first fusion image; and a control module, electrically or communicatively connected with the sending module, which controls the execution mechanism to avoid the first virtual obstacle identifier in the first fusion image. The invention also provides outdoor walking equipment. The self-walking mowing system and the outdoor walking equipment make it convenient for a user to add an obstacle identifier so that the machine bypasses the obstacle area, and allow the working condition of the self-walking mowing system to be observed intuitively.

Description

Self-walking mowing system and outdoor walking equipment
Technical Field
The invention relates to an outdoor electric tool, in particular to a self-walking mowing system and outdoor walking equipment.
Background
As an outdoor mowing tool, the self-walking mowing system requires little sustained operation by the user; it is intelligent and convenient, and is therefore favored by users. During mowing with a conventional self-walking mowing system, obstacles such as trees and stones are often present in the mowing area. These obstacles not only disturb the walking track of the system, but repeated collisions with them can also easily damage it. The mowing area may also contain regions where the user does not want mowing to take place, such as a planted flower bed; a conventional self-walking mowing system cannot detect such regions, so it mows them by mistake and fails to meet the user's mowing requirements. Other common outdoor walking equipment, such as snowploughs, has the same problems.
Disclosure of Invention
In order to remedy the defects in the prior art, the invention aims to provide a self-walking mowing system that displays a real-time image or a simulated live-action image of the execution mechanism, lets the user add an obstacle identifier on that image, controls the self-walking mowing system to bypass the obstacle area, and allows the working condition of the self-walking mowing system to be observed intuitively.
In order to achieve the above main object, a self-walking mowing system is provided, comprising: the actuating mechanism comprises a mowing assembly for realizing a mowing function and a walking assembly for realizing a walking function; the shell is used for supporting the executing mechanism; an image acquisition module capable of acquiring a real-time image comprising at least a portion of a mowing area and at least one obstacle located within the mowing area; the display module is electrically connected or in communication with the image acquisition module and is configured to display a real-time image or a simulated live-action image generated according to the real-time image; the obstacle generation module generates a virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated live-action image according to an instruction input by a user so as to form a first fusion image; the sending module is used for sending the information of the first fusion image; the control module is electrically connected or in communication connection with the sending module and controls the executing mechanism to avoid the obstacle corresponding to the virtual obstacle identifier in the first fusion image.
Optionally, the display module includes a projection device through which the simulated live-action image or the real-time image is projected, and the projection device includes one of a mobile phone screen, a hardware display screen, VR glasses, and AR glasses.
Optionally, the control module includes a data operation processor for processing data and an image processor for image generation and scene modeling; the data operation processor establishes a pixel coordinate system and an actuator coordinate system to convert the position information of the virtual obstacle identifier into the position information of the actual obstacle.
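The pixel-to-actuator coordinate conversion described above can be illustrated with a minimal sketch. The patent does not disclose the actual mapping; the sketch below assumes the ground is planar, so that a 3x3 homography (here placeholder matrices, not values from the patent) relates a pixel coordinate in the image to a ground coordinate in the actuator frame:

```python
def apply_homography(H, u, v):
    """Map pixel (u, v) to a ground-plane point (x, y) via 3x3 homography H.

    The homogeneous coordinate [u, v, 1] is multiplied by H, then
    de-homogenized by dividing through by the third component.
    """
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Sanity check: the identity homography maps pixels onto themselves.
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(H_identity, 320, 240))  # -> (320.0, 240.0)
```

In practice the homography would be obtained by calibrating the camera against known ground points; the identity matrix above is only a placeholder.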
Optionally, the obstacle generation module includes a preset obstacle model for adding the virtual obstacle identifier, where the preset obstacle model includes at least one of, or a combination of, a stone model, a tree model, and a flower model.
In order to achieve the above main object, a self-walking mowing system is provided, comprising: the actuating mechanism comprises a mowing assembly for realizing a mowing function and a walking assembly for realizing a walking function; the shell is used for supporting the executing mechanism; an image acquisition module capable of acquiring a real-time image comprising at least a portion of a mowing area and at least one obstacle located within the mowing area; the display module is electrically connected or in communication with the image acquisition module and is configured to display a real-time image or a simulated live-action image generated according to the real-time image; the obstacle generation module is used for generating a first virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated live-action image by calculating the characteristic parameters so as to form a first fusion image; the sending module is used for sending the information of the first fusion image; the control module is electrically connected or in communication connection with the sending module and controls the executing mechanism to avoid virtual obstacles in the first fusion image.
Optionally, the image acquisition module includes one or a combination of an image sensor, a lidar, an ultrasonic sensor, a camera, and a TOF sensor.
Optionally, the self-walking mowing system further comprises a boundary generation module, the boundary generation module generates a first virtual boundary according to the information of the first boundary of the mowing area acquired by the image acquisition module, and the control module controls the execution mechanism to walk in the first boundary corresponding to the first virtual boundary.
Optionally, the self-walking mowing system further comprises a path generating module, the path generating module automatically generates a walking path in the first virtual boundary, and the control module controls the executing mechanism to walk in the first boundary according to the walking path.
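As an illustration of how a path generating module might fill a bounded area, the hedged sketch below generates a simple back-and-forth (boustrophedon) coverage path over a rectangular region; the rectangular shape, the spacing parameter, and the function name are assumptions made for illustration and are not taken from the patent:

```python
def boustrophedon_path(width, height, spacing):
    """Generate back-and-forth waypoints covering a width x height rectangle.

    Rows are traversed alternately left-to-right and right-to-left,
    stepping `spacing` between rows, like a lawnmower stripe pattern.
    """
    path = []
    y = 0.0
    left_to_right = True
    while y <= height:
        row = [(0.0, y), (width, y)]
        path.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
        y += spacing
    return path
```

A real implementation would clip the rows against an arbitrary virtual boundary polygon rather than a rectangle; this sketch only shows the stripe pattern itself.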
In order to achieve the above main object, an outdoor self-walking device is provided, comprising: the executing mechanism comprises a walking component for realizing a walking function and a working component for realizing a preset function; the shell is used for supporting the executing mechanism; an image acquisition module capable of acquiring a real-time image including at least a portion of a work area and at least one obstacle located within the work area; the display module is electrically connected or in communication with the image acquisition module and is configured to display a real-time image or a simulated live-action image generated according to the real-time image; the obstacle generation module generates a virtual obstacle identifier corresponding to the obstacle in the real-time image according to an instruction input by a user so as to form a first fusion image; the sending module is used for sending the information of the first fusion image; the control module is electrically connected or in communication connection with the sending module and controls the executing mechanism to avoid virtual obstacles in the first fusion image.
In order to achieve the above main object, an outdoor self-walking device is provided, comprising: the executing mechanism comprises a walking component for realizing a walking function and a working component for realizing a preset function; the shell is used for supporting the executing mechanism; an image acquisition module capable of acquiring a real-time image including at least a portion of a work area and at least one obstacle located within the work area; the display module is electrically or communicatively connected with the image acquisition module and is configured to display the real-time image or a simulated live-action image generated from it; the obstacle generation module generates a first virtual obstacle identifier corresponding to the obstacle in the real-time image by calculating characteristic parameters so as to form a first fusion image; the sending module is used for sending the information of the first fusion image; the control module is electrically or communicatively connected with the sending module and controls the executing mechanism to avoid virtual obstacles in the first fusion image.
Drawings
FIG. 1 is a block diagram of an actuator of the self-propelled mowing system of the present invention.
Fig. 2 is a schematic illustration of the connection of the actuator and the projection device of fig. 1.
Fig. 3 is a schematic view of a part of the internal structure of the actuator in fig. 2.
Fig. 4 is a schematic diagram of the framework of the actuator of fig. 1.
Fig. 5 is a schematic frame view of the self-propelled mowing system in fig. 1.
Fig. 6 is a schematic view of a mowing area according to a first embodiment of the present invention.
Fig. 7 is a schematic diagram of an interactive interface according to a first embodiment of the present invention.
Fig. 8 is a schematic diagram of an interactive interface displaying real-time images according to a first embodiment of the present invention.
Fig. 9 is a schematic diagram of an interactive interface displaying a first fused image according to a first embodiment of the present invention.
Fig. 10 is a schematic diagram of a second fused image in an interactive interface according to a first embodiment of the present invention.
Fig. 11 is a schematic diagram of an actuator coordinate system according to a first embodiment of the present invention.
Fig. 12 is a schematic diagram of a pixel coordinate system according to a first embodiment of the present invention.
Fig. 13 is a schematic frame view of a self-walking mowing system in accordance with a second embodiment of the present invention.
Fig. 14 is a schematic view of a mowing area in accordance with a second embodiment of the present invention.
Fig. 15 is a schematic view of a first fused image according to a second embodiment of the present invention.
Fig. 16 is a schematic frame view of a self-walking mowing system in accordance with a third embodiment of the present invention.
Fig. 17 is a schematic view of a mowing area in accordance with a third embodiment of the present invention.
Fig. 18 is a schematic view of a first fused image according to a third embodiment of the present invention.
Fig. 19 is a schematic view of a first fused image according to a third embodiment of the present invention.
Fig. 20 is a schematic view of a second fused image according to a third embodiment of the present invention.
Fig. 21 is a schematic frame view of a self-walking mowing system in accordance with a fourth embodiment of the present invention.
Fig. 22 is a schematic view of a mowing area in accordance with a fourth embodiment of the present invention.
Fig. 23 is a schematic view of a first fused image of a fourth embodiment of the present invention.
Fig. 24 is a schematic view of a first fused image according to a fourth embodiment of the present invention.
Fig. 25 is a schematic diagram of a second fused image of a fourth embodiment of the present invention.
Fig. 26 is a schematic diagram of virtual boot channel identification setting according to a fourth embodiment of the present invention.
Fig. 27 is a schematic structural view of an outdoor self-walking device according to a fifth embodiment of the present invention.
Detailed Description
The present invention proposes a self-walking mowing system. Referring to fig. 1 to 3, the self-walking mowing system comprises an actuating mechanism 100 for trimming vegetation. The actuating mechanism 100 at least comprises a mowing assembly 120 for realizing the mowing function and a walking assembly 110 for realizing the walking function, and further comprises a main body 140 and a shell 130, wherein the shell 130 encloses and supports the main body 140, the mowing assembly 120, and the walking assembly 110. The mowing assembly 120 includes a mowing member 121 and an output motor 122; the output motor 122 drives the mowing member 121 to rotate to trim vegetation, and the mowing member 121 may be a blade or another member capable of cutting the lawn being trimmed. The walking assembly 110 includes at least one walking wheel 111 and a driving motor 112 that provides torque to drive the at least one walking wheel 111. Through the cooperation of the mowing assembly 120 and the walking assembly 110, the self-walking mowing system can control the actuating mechanism 100 to move and work on vegetation.
Referring to fig. 4, the self-walking mowing system further includes a receiving module 200, a computing assembly, and a power supply 170. The receiving module 200 is configured to receive control instructions input by the user for the self-walking mowing system. The computing assembly includes at least a control module 150 for controlling operation of the self-walking mowing system; the control module 150 controls the operation of the driving motor 112 and the output motor 122 according to the instructions and the operating parameters of the self-walking mowing system, so as to control the actuator 100 to walk and mow in the corresponding working area. The power supply 170 powers the walking assembly and the output assembly, and is preferably a pluggable battery pack mounted to the housing 130.
The self-walking mowing system comprises an image acquisition module 400 and a display module 500. The computing assembly includes the control module 150 for computing image information, and the display module 500 is electrically or communicatively connected with the image acquisition module 400. The image acquisition module 400 can acquire a real-time image 530 comprising at least a part of the mowing area and at least a part of the mowing boundary, and the display module 500 displays the real-time image 530 of the corresponding mowing area and mowing boundary. Referring to fig. 3 and 6, the image acquisition module 400 includes at least one or a combination of a camera 410, a laser radar 420, and a TOF sensor 430. The camera 410 acquires environmental images of the mowing area and mowing boundary to be worked, while the reflected laser information of the laser radar 420 yields parameter information such as the position, distance, slant range, and shape of objects in the mowing area and on the mowing boundary relative to the current actuator 100. The control module 150 receives the image information acquired by the image acquisition module 400 and combines the parameter information of the objects onto the image. The display module 500 then displays the real-time image 530 of the mowing area and mowing boundary to the user.
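As a hedged illustration of how reflected laser information yields object positions relative to the actuator, the sketch below converts a lidar's polar range readings into Cartesian points in the actuator frame; the scan parameters and function name are placeholder assumptions, not details from the patent:

```python
import math

def lidar_to_points(ranges, angle_min, angle_step):
    """Convert polar lidar returns to (x, y) points in the actuator frame.

    ranges     : list of measured distances (meters), one per beam
    angle_min  : bearing of the first beam (radians)
    angle_step : angular increment between consecutive beams (radians)
    """
    points = []
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_step
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points
```

Each resulting point could then be projected into the real-time image 530 so that object parameters are combined onto the image, as the text describes.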
Referring to fig. 3, to improve the accuracy of detecting the position of the actuator 100, the self-walking mowing system further includes a positioning module 300 for acquiring the position of the actuator 100; by analyzing the real-time positioning data of the actuator 100, control adjustments for its travel and mowing are obtained. The positioning module 300 includes one or a combination of a GPS positioning unit 310, an IMU inertial measurement unit 320, and a displacement sensor 330. The GPS positioning unit 310 acquires position information or a position estimate of the actuator 100, as well as the starting position of its movement. The IMU inertial measurement unit 320 includes accelerometers and gyroscopes for detecting deflection information of the actuator 100 during travel. The displacement sensor 330 may be provided on the driving motor 112 or the walking wheel 111 to acquire displacement data of the actuator 100. By combining and correcting the information acquired by these devices, more accurate position information is obtained, giving the real-time position and posture of the actuator 100.
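One simple way to "combine and correct" several position sources, offered here only as an illustrative sketch (the patent does not specify its fusion method, and the weighting scheme below is an assumption), is a complementary blend of a GPS fix with dead-reckoned odometry:

```python
def fuse_position(gps_xy, odom_xy, alpha=0.8):
    """Complementary blend of two position estimates.

    gps_xy  : (x, y) from the GPS positioning unit (absolute but noisy)
    odom_xy : (x, y) dead-reckoned from wheel displacement / IMU (smooth
              but drifting)
    alpha   : weight given to odometry; (1 - alpha) goes to GPS
    """
    return tuple(alpha * o + (1 - alpha) * g for g, o in zip(gps_xy, odom_xy))
```

A production system would more likely use a Kalman-style filter with per-sensor covariances; the fixed-weight blend above only shows the basic idea of correcting drift with an absolute fix.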
In another manner, the control module 150 generates a simulated live-action image 540 of the mowing area from the image information and data acquired by the image acquisition module 400. The simulated live-action image 540 reproduces the boundaries, areas, obstacles, and the like of the mowing area. The control module 150 also builds an actuator model 160 and displays it in the simulated live-action image 540 at a position corresponding to the position of the actuator 100 in the mowing area, so that the position and working state of the actuator model 160 stay synchronized with the actual actuator 100.
Referring to fig. 5, the display module 500 is used to project the simulated live-action image 540. Specifically, the display module 500 generates an interactive interface 520 by projecting through the projection device 510, and the interactive interface 520 displays the simulated live-action image 540 of the actuator 100. While the control module 150 controls the interactive interface 520 to display the simulated live-action image 540, it also generates a control panel 550 for the user to operate, so the user can control the self-walking mowing system either directly through the receiving module 200 or through the interactive interface 520. The projection device 510 may be a mobile phone screen or a hardware display screen, communicatively coupled to the computing component and configured to display the simulated live-action image 540 or the real-time image 530.
Referring to fig. 3, the control module 150 includes a data operation processor 310 for processing data and an image processor 320 for producing images and modeling scenes. The data operation processor 310 may be a CPU or a microcontroller with a high data processing speed, and the image processor 320 may be a separate GPU (Graphics Processing Unit) module. When the execution mechanism 100 operates, the data operation processor 310 analyzes its various operation data and environment data, the image processor 320 generates the corresponding virtual live-action image information from this data by modeling, and the projection device 510 renders the specific virtual live-action image. The displayed content is updated synchronously as the real-time operating state of the execution mechanism 100 changes, so that it matches the state of the actual execution mechanism 100. The control module 150 also includes a memory that stores the algorithms associated with the self-walking mowing system and the data generated during its operation.
In the first embodiment of the present invention, the computing component further includes a boundary generation module 700, the control module 150, and a sending module 600. Referring to fig. 7 and 8, a first virtual boundary 710 corresponding to the mowing boundary is generated in the real-time image 530 or the simulated live-action image 540 by computing characteristic parameters, so as to form a first fused image 720. The boundary generation module 700 carries a built-in boundary analysis algorithm that analyzes the color, grass height, and shape in the real-time image 530 or the simulated live-action image 540 to find the mowing boundary of the area to be mowed, and generates the first virtual boundary 710 at the corresponding boundary position in the image. The first virtual boundary 710 is fused with the real-time image 530 or the simulated live-action image 540 to produce the first fused image 720, which comprises the first virtual boundary 710 and a first virtual mowing area 760 bounded by it. The first virtual boundary 710 corresponds to the actual first boundary, i.e., the mowing boundary in the current environment detected by the boundary generation module 700, and the first virtual mowing area 760 corresponds to the physical object distribution and location of the first mowing area 770.
The sending module 600 is electrically or communicatively connected to the control module 150 and sends the information of the first fused image 720, which includes the position information of the first virtual boundary 710, to the control module 150. The control module controls the actuator to operate within the first virtual boundary: the first virtual boundary 710 defines the first virtual mowing area 760, and the control module 150 controls the actuator 100 to mow in the actual first mowing area 770 corresponding to the first virtual mowing area 760 according to the position information of the first virtual boundary 710, keeping the actuator 100, whose position is continuously detected, within the actual first boundary corresponding to the first virtual boundary.
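Keeping the actuator inside the first boundary ultimately requires testing whether its detected position lies within the virtual boundary polygon. A standard ray-casting point-in-polygon test, shown here as an assumed sketch rather than the patent's actual method, suffices for that check:

```python
def inside_boundary(point, boundary):
    """Ray-casting test: is `point` inside the polygon `boundary`?

    boundary is a list of (x, y) vertices in order. A horizontal ray is
    cast to the right from `point`; an odd number of edge crossings
    means the point is inside.
    """
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

The control loop would run this test on each new position fix and trigger a stop or turn when the actuator approaches the boundary from inside.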
The control module 150 is connected to and controls the driving motor 112 and the output motor 122, so that it can control the actuator 100 to travel and mow along the planned working path. There are two walking wheels 111, namely a first walking wheel 113 and a second walking wheel 114, and the driving motor 112 comprises a first driving motor 115 and a second driving motor 116. The control module 150 is connected to the first driving motor 115 and the second driving motor 116, and the control unit adjusts their rotating speeds through the driving controller so as to control the traveling state of the actuator 100. The computing component obtains the real-time position of the actuator 100 and derives the control instructions that keep the actuator 100 operating within the first boundary. The control module 150 includes an output controller for controlling the output motor and a driving controller for controlling the driving motor 112. The output controller is electrically connected to the output motor 122 and controls its operation, thereby controlling the cutting state of the cutting blade. The driving controller is communicatively connected to the driving motor 112; after the receiving module 200 receives a start instruction from the user, or a start is otherwise determined, the control module 150 analyzes the driving route of the executing mechanism 100 and, through the driving controller, controls the driving motor 112 to drive the travelling wheel 111.
The control module 150 obtains the position information corresponding to the first virtual boundary 710 and, from the position of the actuator 100 detected by the positioning module 300, computes the steering and speed required for the actuator 100 to operate within the preset first boundary; the driving controller then regulates the rotation speed of the driving motor 112 so that the actuator 100 travels at the preset speed. Driving the two walking wheels of the actuator 100 at different speeds causes the actuator 100 to steer. Through the receiving module 200, the user can command displacement of the actuator 100 and of the image acquisition module 400, thereby moving the corresponding real-time image 530 or simulated live-action image 540 so that the user can view the desired part of the mowing area and issue further control instructions.
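The patent describes steering by rotating the two walking wheels at different speeds but gives no formulas. As an illustrative sketch only (the function and parameter names are hypothetical, not from the patent), the standard differential-drive relation between a commanded forward speed, a turn rate, and the two wheel speeds looks like this:

```python
def wheel_speeds(v, omega, wheel_base):
    """Convert a desired linear velocity v (m/s) and angular velocity
    omega (rad/s) into (left, right) wheel speeds for a differential
    drive. Equal speeds drive straight; a speed difference steers."""
    v_left = v - omega * wheel_base / 2.0
    v_right = v + omega * wheel_base / 2.0
    return v_left, v_right

# Straight travel: both wheels at the commanded speed.
# wheel_speeds(0.5, 0.0, 0.3) -> (0.5, 0.5)
# A positive turn rate makes the right wheel faster than the left,
# turning the mower left while the average speed stays 0.5 m/s.
```

This is the generic kinematic model, not the patent's specific control law; the driving controller described above would output such per-motor speed targets.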
The receiving module 200 may be disposed in a peripheral device outside the actuator 100. The peripheral device is communicatively connected to the actuator 100; it receives a control instruction from the user and sends it to the computing component, which interprets the instruction to control the actuator 100. The peripheral device may be any one or more of a keyboard, a mouse, a microphone, a touch screen, a remote control or handle, the camera 410, the lidar 420, or a mobile device such as a cell phone. The user can input command information directly through hardware such as a mouse, keyboard, remote controller, or mobile phone, or through signals such as voice, gestures, or eye movements. The camera 410 can capture features of the user's eye or hand movements, from which the control instructions given by the user are interpreted.
In another embodiment, the projection device 510 employs virtual imaging techniques based on interference and diffraction principles, such as holographic projection, or projects within an AR device or VR glasses, and accordingly generates a virtual control panel 550; instructions can also be input through a communicatively coupled peripheral device 310, such as a remote control or a handle. Preferably, the interaction module 400 includes a motion capturing unit and an interaction positioning device. The motion capturing unit is configured as the camera 410 and/or an infrared sensing device and captures the motion of the user's hand or of a controller; the interaction positioning device obtains the position of the projection device 510 and, by analyzing the displacement of the user's hand relative to the projection device 510, determines the user's selection on the generated virtual control panel 550 and generates the corresponding control instruction.
In one embodiment, the projection device 510 is mounted on a peripheral device; for example, the peripheral device 310 is a mobile phone, a computer, or a VR device, and the projection device 510 is a mobile phone screen, a computer screen, a curtain, VR glasses, or the like.
The display module 500 has at least a projection device 510 and an interactive interface 520. The interactive interface 520 is displayed by the projection device 510, and the real-time image 530 or simulated live-action image 540 and the first fused image 720 are displayed in the interactive interface 520. The projection device 510 may be implemented as a hardware display screen, either on an electronic peripheral device such as a cell phone or computer, or mounted directly on the actuator 100; alternatively, the computing component may be communicatively coupled to several display screens, with the user selecting which one displays the corresponding real-time image 530 or simulated live-action image 540.
Referring to fig. 9, the receiving module 200 may also generate a control panel 550 on the interactive interface 520 and receive the user's control instructions through it, including the user's input on whether the first virtual boundary 710 in the first fused image 720 needs to be corrected. After the boundary generation module computes the first fused image 720, the display module 500 generates the interactive interface 520 through the projection device 510 to display the first fused image 720 and the first virtual boundary 710, and the receiving module 200 asks the user through the interactive interface 520 whether to correct the first virtual boundary 710. If the user chooses correction through the receiving module 200, the user manually modifies the first virtual boundary 710 in the displayed first fused image 720 through the control panel 550 to match the actually required mowing boundary, producing a user-designated second virtual boundary 730. The computing component accordingly includes a correction module 800: when the user inputs information indicating that the first virtual boundary 710 requires correction, the correction module 800 receives the user's instructions, corrects the first virtual boundary 710, and generates the second virtual boundary 730 in the real-time image 530 or the simulated live-action image 540 to form a second fused image 740.
The second fused image 740 includes the second virtual boundary 730 and a second virtual mowing area defined by it. The second virtual boundary 730 corresponds to an actual second boundary, the mowing boundary as corrected by the user, and the second virtual mowing area corresponds to the object distribution and positions of the actual second mowing area. The control module 150 controls the actuator 100 to operate within the second virtual boundary 730; that is, according to the position information of the second virtual boundary 730, it controls the actuator 100 to mow in the actual second mowing area corresponding to the second virtual mowing area and, according to the detected position of the actuator 100, restricts the actuator 100 to operate only within the actual second boundary corresponding to the second virtual boundary 730.
Referring to fig. 10 and 11, to translate the user's correction of the first fused image 720 into the second fused image 740, that is, to fuse the user's correction instructions into the real-time image 530 or the simulated live-action image 540, the data operation processor establishes an actuator coordinate system 750 from the first fused image 720 and the actuator position acquired by the positioning module 300 and the image acquisition module 400, and uses it to locate the actuator 100 in the environment to be mowed. The data operation processor also establishes a pixel coordinate system 760 for the generated first fused image 720, so that each pixel in the first fused image 720 has a corresponding pixel coordinate, from which the real-time image 530 or the simulated live-action image 540 is generated. When the user selects a line segment or region in the first fused image 720 through the interactive interface 520, the selection is essentially a set of pixels of the first fused image 720.
The correction module 800 computes the position of the actual second boundary from the real-time position of the actuator 100 in the actuator coordinate system 750, the rotation angle of the image acquisition module 400, and the set of pixel coordinates corresponding to the second virtual boundary 730 selected by the user. The second virtual boundary 730 that the user marked for correction on the first fused image 720 is thereby projected into the actual mowing area to obtain the user-designated second mowing area, and is fused into the real-time image 530 or the simulated live-action image 540 to generate the second fused image 740. The coordinates of the second virtual boundary 730 are fixed in the actuator coordinate system 750 and therefore move in the pixel coordinate system 760 as the user pans or transforms the real-time image 530 or simulated live-action image 540. User correction compensates for errors in the self-walking mowing system's automatic recognition of the mowing boundary, so the boundary of the mowing area can be set intuitively and accurately: the first virtual boundary 710 is generated by recognition devices such as the image sensor, and the user only needs to correct it into the second virtual boundary 730, which makes setting the mowing boundary convenient.
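The patent does not disclose the coordinate conversion itself. As a minimal illustrative sketch (assuming, hypothetically, a top-down view with a known metres-per-pixel scale and a planar camera pose; all names are invented for illustration), projecting user-selected pixels into the actuator coordinate system could look like this:

```python
import numpy as np

def pixels_to_world(pixel_pts, scale, cam_x, cam_y, cam_yaw):
    """Map pixel coordinates selected on the fused image into the
    actuator (world) coordinate system: scale pixels to metres, rotate
    by the camera yaw, then translate by the camera position."""
    pts = np.asarray(pixel_pts, dtype=float) * scale   # pixels -> metres
    c, s = np.cos(cam_yaw), np.sin(cam_yaw)
    rot = np.array([[c, -s], [s, c]])                  # planar rotation
    return pts @ rot.T + np.array([cam_x, cam_y])      # rotate + translate
```

A real system would use a full camera calibration (intrinsics plus a ground-plane homography) rather than this flat scale-and-pose model, but the principle — every selected pixel set maps to a fixed set of actuator coordinates — is the one the correction module 800 relies on.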
In another embodiment, the user may directly draw the first virtual boundary 710 on the real-time image 530 or the simulated live-action image 540 through the receiving module 200. The boundary identifying module obtains the position information of the user-drawn first virtual boundary 710 and projects it into the actuator coordinate system, and the positioning module 300 detects the position of the actuator 100, so that the control module 150 controls the actuator 100 to move within the first boundary corresponding to the first virtual boundary 710. This allows the user to set the mowing boundary quickly.
In the second embodiment of the present invention, referring to fig. 13 and 14, the computing component includes an image acquisition module 400a and an obstacle generation module 800a. The image acquisition module 400a includes one or a combination of an image sensor, the laser radar 420a, an ultrasonic sensor, the camera 410a, and the TOF sensor 430a. The ultrasonic sensor detects whether there is an obstacle in the mowing area by transmitting ultrasonic waves and timing their return, and records the position of the obstacle; the laser radar 420a transmits laser light and detects obstacles in the mowing area from the reflection time of the laser; the image sensor analyzes the shape and color of the acquired image and identifies, through an algorithm, image regions matching an obstacle. The obstacle generation module 800a fuses the obstacle detection information from the image acquisition module 400a into the real-time image 530a or the simulated live-action image 540a: through the display module 500a, it generates a first virtual obstacle identifier 810a at the corresponding position of the mowing area in the real-time image 530a or simulated live-action image 540a, producing a first fused image 720a, which is the real-time image 530a or simulated live-action image 540a including the first virtual obstacle identifier 810a. The transmitting module 600a transmits the information of the first fused image 720a to the control module 150a, and the control module 150a controls the actuator 100a to avoid the corresponding obstacle when mowing according to the information of the first fused image 720a.
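The time-of-flight principle used by the ultrasonic sensor (and, with the speed of light, by the lidar 420a and TOF sensor 430a) reduces to one formula: the pulse travels to the obstacle and back, so the one-way distance is half the round trip. A minimal sketch, with a hypothetical function name:

```python
def echo_distance(return_time_s, speed_of_sound=343.0):
    """One-way distance to an obstacle from an ultrasonic echo.
    return_time_s is the round-trip time; 343 m/s is the speed of
    sound in air at roughly 20 degrees C."""
    return speed_of_sound * return_time_s / 2.0

# Example: a 10 ms round trip corresponds to 343 * 0.01 / 2 = 1.715 m.
```

The same computation with the speed of light in place of the speed of sound gives the lidar/TOF range.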
The data operation processor establishes a pixel coordinate system and an actuator coordinate system. By identifying the pixel coordinates of a first virtual obstacle identifier 810a added by the user on the first fused image 720a, it computes, through a preset coordinate conversion method, the position of the actual obstacle 820a that the identifier represents, and the control module 150a controls the actuator 100a to avoid the obstacle 820a during operation. The user can thus add a first virtual obstacle identifier 810a on the real-time image 530a or the simulated live-action image 540a and the self-walking mowing system will recognize and bypass the obstacle, which is convenient for the user and adds the obstacle information to the mowing area accurately.
In another embodiment, referring to fig. 15, the obstacle generation module 800a generates a virtual obstacle identifier corresponding to an obstacle in the real-time image 530a or the simulated live-action image 540a according to an instruction input by the user, forming the first fused image 720a. Through the receiving module 200a, the user places a virtual obstacle identifier in the real-time image 530a or simulated live-action image 540a at the position of an obstacle in the actual mowing area, or at any area that should not be mowed; the identifier marks an area that the actuator 100a must bypass, without working it, during actual mowing.
For obstacles such as stones and trees that may be present in the mowing area, the obstacle generation module 800a provides preset obstacle models, such as a stone model, a tree model, and a flower model, for the user to select. Viewing the simulated live-action image 540a or real-time image 530a, the user judges where an obstacle lies from the displayed environmental features and the actual state of the mowing area, and through the receiving module 200a selects the obstacle type and specifies its position and size in the image. After the user inputs this information, the image processor 320 generates the corresponding simulated obstacle 640 in the generated simulated live-action image 540a, and the control module 150a controls the actuator 100a to avoid the obstacle during operation.
The obstacle generation module 800a generates a virtual obstacle identifier corresponding to the obstacle in the real-time image 530a or the simulated live-action image 540a to form the first fused image 720a, which includes the size, shape, and position of the virtual obstacle identifier. The sending module 600a sends the information of the first fused image 720a to the control module 150a, and the control module 150a controls the actuator 100a to bypass the area marked by the virtual obstacle identifier when mowing in the mowing area, thereby avoiding the obstacle.
The first fused image 720a may further include a first virtual boundary 710a: the boundary generation module 700a generates the first virtual boundary 710a corresponding to the mowing boundary in the real-time image 530a or the simulated live-action image 540a by calculating the characteristic parameters, so that the control module 150a, according to the first fused image 720a, controls the actuator 100a to operate in the first mowing area corresponding to the first virtual mowing area, inside the first virtual boundary 710a and outside the virtual obstacle identifier. The actuator 100a is thus confined to the range of the first boundary while avoiding the marked obstacles. An obstacle can be an object occupying space, such as a stone or other article, or an area that should not be mowed, such as flowers or special plants; it may also be understood as any area within the current first virtual boundary 710a that the user does not want worked, and may be laid out in a special pattern or shape to beautify the lawn.
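The rule "inside the first virtual boundary and outside every virtual obstacle identifier" is a pair of point-in-polygon tests. As an illustrative sketch (the function names are hypothetical; the patent does not specify the geometry routine), the classic ray-casting test can implement it:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: True if (x, y) lies inside the polygon,
    given as a list of (x, y) vertices in order."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edges crossed by a horizontal ray to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def may_mow(pos, boundary, obstacles):
    """Mowing is allowed inside the virtual boundary and outside
    every virtual obstacle region."""
    return point_in_polygon(*pos, boundary) and not any(
        point_in_polygon(*pos, ob) for ob in obstacles)
```

A production system would run such a containment check (or a precomputed occupancy grid) against the actuator's detected position on every control cycle.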
In the third embodiment of the present invention, referring to fig. 16 to 19, the obstacle generation module 800b generates a first virtual obstacle 810b corresponding to a mowing obstacle in the real-time image 530b or the simulated live-action image 540b by calculating the characteristic parameters, forming a first fused image 720b. The first fused image 720b includes a first virtual mowing area 760b and, within it, the first virtual obstacle 810b. The first virtual mowing area 760b corresponds to the actual first mowing area 770b, matching its object distribution and positions; the first mowing area 770b is the mowing area where the actuator 100b needs to operate. The obstacle generation module 800b is provided with an obstacle analysis algorithm: the image acquisition module 400b detects an obstacle 820b in the area to be mowed, and a first virtual obstacle 810b is generated at the position in the real-time image 530b or simulated live-action image 540b corresponding to the obstacle 820b, so that the first virtual obstacle 810b is fused with the real-time image 530b or simulated live-action image 540b to generate the first fused image 720b, which is displayed by the display module 500b. The first virtual obstacle 810b corresponds to at least one actual obstacle 820b, a mowing obstacle in the current environment detected by the obstacle generation module 800b.
The sending module 600b is electrically or communicatively connected to the control module 150b and sends the information of the first fused image 720b, including the position information of the first virtual obstacle 810b, to the control module 150b. According to the position information of the first virtual obstacle 810b, the control module 150b controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, according to the detected position of the actuator 100b, controls it to avoid the actual first obstacle corresponding to the first virtual obstacle 810b.
Further, referring to fig. 20, after the obstacle generation module 800b generates the first fused image 720b, the receiving module 200b asks the user through the display interface whether the first virtual obstacle 810b in the current first fused image 720b needs to be corrected, and receives the user's answer. If the user chooses correction, the user manually inputs instructions through the control panel to modify the first virtual obstacle 810b in the displayed first fused image 720b to match the actually required mowing obstacles, generating a user-designated second virtual obstacle 830b. The computing component accordingly includes a correction module 800b: when the user inputs information indicating that the first virtual obstacle 810b requires correction, the correction module 800b receives the user's instructions, corrects the first virtual obstacle 810b, and generates the second virtual obstacle 830b in the real-time image 530b or the simulated live-action image 540b to form a second fused image 740b.
The second fused image 740b includes the corrected second virtual obstacle 830b, which corresponds to at least one actual obstacle 820b the user needs to avoid. According to the position information of the second virtual obstacle 830b, the control module 150b controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, according to the detected position of the actuator 100b, to avoid the actual obstacle position corresponding to the second virtual obstacle 830b. The user can thus conveniently adjust which areas the self-walking mowing system avoids during operation; an obstacle can be a space-occupying object such as a stone or other article, or an area that should not be mowed, such as flowers or special plants.
In the fourth embodiment of the present invention, referring to fig. 21, the computing component includes a path generation module 900c, which generates a walking path 910c in the real-time image 530c or the simulated live-action image according to an instruction input by the user, forming a first fused image 720c. The path generation module 900c provides preset mowing path modes, such as an arcuate path, in which the actuator 100c reciprocates back and forth within the boundary, or a spiral path, in which the actuator 100c circles progressively inward toward a center.
Referring to fig. 22, the computing component includes a boundary generation module 700c provided with a boundary analysis algorithm. When the user sends a start command, the module analyzes the colors, grass heights, and shapes in the real-time image 530c or the simulated live-action image to find the mowing boundary of the area to be mowed, and generates a first virtual boundary 710c at the corresponding boundary position in the image. Referring to fig. 23 and 24, the path generation module 900c lays out a walking path 910c within the generated first virtual boundary 710c according to a preset algorithm, and converts the position coordinates of the generated walking path 910c in the actuator coordinate system into pixel coordinates in the corresponding pixel coordinate system, so that the walking path 910c is displayed in, and fused into, the real-time image 530c or the simulated live-action image to generate the first fused image 720c. The transmitting module 600c transmits the first fused image 720c to the control module 150c, and the control module 150c controls the traveling assembly 110c to travel along the walking path 910c in the first fused image 720c and perform the mowing operation on the mowing area.
Further, referring to fig. 25, the computing component further includes a modification module 800c. Through the receiving module 200c, the user can modify the walking path 910c in the first fused image 720c generated by the path generation module 900c. On the interactive interface 520c, the user edits the generated walking path 910c directly on the first fused image 720c, selecting and deleting part of the path or adding line segments to create new path sections. The modification module 800c reads the set of pixel coordinates of the path sections the user selected or added, converts them into actuator coordinates according to a preset algorithm, and projects them to the corresponding positions in the mowing area, so that, with position tracking of the actuator 100c, the actuator 100c travels along the walking path 910c as modified by the user.
In another embodiment, the path generation module 900c includes a preset algorithm that computes a first walking path 910c from the characteristic parameters of the mowing area; the first walking path is displayed in the real-time image 530c or the simulated live-action image shown by the display module 500c. The path generation module 900c automatically computes the first walking path 910c, for example an arcuate path, a rectangular-spiral path, or a random path, from the obtained mowing boundary information and area information, and shows the user, in the real-time image 530c or simulated live-action image, the first walking path 910c that mowing will follow within the corresponding mowing area. The receiving module 200c receives the user's input on whether the first walking path 910c in the first fused image 720c needs correction. The user selects correction and inputs correction instructions through the receiving module 200c, deleting some line segments or areas from the first walking path 910c and adding others, thereby generating a second walking path 920c in the real-time image 530c or the simulated live-action image. The correction module 800c interprets the user's correction instructions and fuses the coordinates of the second walking path 920c into the real-time image 530c or the simulated live-action image to generate a second fused image 740c. The sending module 600c sends the information of the second fused image 740c to the control module 150c, and the control module 150c controls the actuator 100c to walk along the path in the mowing area corresponding to the second walking path 920c.
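The arcuate (back-and-forth) mode above is the classic boustrophedon coverage pattern. As a minimal stand-in for the preset algorithm (names hypothetical; a rectangular area is assumed for simplicity, whereas the patent's boundary is arbitrary), waypoint generation could look like this:

```python
def arcuate_path(width, height, spacing):
    """Generate waypoints for a back-and-forth (arcuate) mowing path
    covering a width x height rectangle, with rows `spacing` apart.
    Alternating row direction yields the S-shaped sweep."""
    points, y, left_to_right = [], 0.0, True
    while y <= height:
        row = [(0.0, y), (width, y)]
        points.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
        y += spacing
    return points
```

For a non-rectangular first virtual boundary, a real implementation would first decompose the region into sweepable cells (e.g. boustrophedon cellular decomposition) and run this sweep per cell; the spacing would be set slightly below the cutting width so adjacent passes overlap.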
In another embodiment, the path generation module 900c provides preset path brushes, such as a rectangular-spiral path brush, an arcuate path brush, and a straight path brush, for the user to select. The path generation module 900c presents the available path brushes on the interactive interface 520c; the user selects a brush and paints over the region of the real-time image 530c or simulated live-action image where the actuator 100c is expected to work, generating a rectangular-spiral, arcuate, or straight walking path 910c in the corresponding region of the image. The control module 150c then controls the actuator 100c to walk along the path in the mowing area corresponding to the walking path 910c.
In another variant, the path generation module 900c may receive patterns, characters, and the like sent by the user through the receiving module 200c and compute a corresponding walking path 910c from them. The control module 150c controls the actuator 100c to walk and mow along the generated walking path 910c, cutting the user's pattern into the lawn and thereby enriching the appearance of the mowed area.
In the above embodiments, when the boundary generation module 700, the path generation module 900c, or the obstacle generation module 800b generates the corresponding virtual boundary, virtual obstacle identifier, or walking path 910c, the user can preview, through the actuator model in the real-time image or simulated live-action image displayed by the display module, the subsequent working state of the actuator and the state of the mowing area after the operation. The user thus knows in advance how the actuator will mow under the current settings and what the result will look like, for example previewing how the self-walking mowing system will mow while avoiding the first virtual obstacle identifier, and can adjust the system in time.
Through the simulated live-action image 540c or real-time image 530c on the interactive interface 520c, the user judges where an obstacle lies from the displayed environmental features and the actual state of the mowing area, and through the receiving module 200c selects the obstacle type and specifies its position and size in the image. After the user inputs this information, the image processor generates the corresponding simulated obstacle in the generated simulated live-action image 540c, and the control module 150c controls the actuator 100c to avoid the obstacle during operation.
Referring to fig. 26, the computing component further includes a guide channel setting module, which generates guide channel setting keys or a setting interface on the interactive interface 520c projected by the projection device 510; through these, the user adds a virtual guide channel identifier 560c in the simulated live-action image 540c or the real-time image 530c. The user's property may contain several relatively independent work areas, such as the front and rear yards of a courtyard, so by adding a virtual guide channel identifier 560c between two independent work areas, the user can guide the actuator 100c from one work area to another along a channel of the user's choosing. Specifically, the self-walking mowing system detects the mowing areas and, when the working environment has several relatively independent work areas, identifies and generates the corresponding first virtual sub-mowing area 770c and second virtual sub-mowing area 780c; alternatively, the user selects the target work areas, choosing at least the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c through the simulated live-action image 540c. The guide channel setting module receives a virtual guide channel set by the user between the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c, which guides the walking path 910c of the actuator 100c between the first and second sub-mowing areas corresponding to those virtual areas.
The user draws the virtual guide channel identifier 560c in the simulated live-action image 540c according to the channel along which the actuator 100c should move between the first and second mowing areas, and the control module 150c guides the actuator 100c to travel according to the virtual guide channel identifier 560c fused into the simulated live-action image.
The self-walking mowing system further comprises a detection device for detecting the operating conditions of the actuator 100c, such as machine parameters, working modes, machine fault conditions, and alarm information. The display module can also display these machine parameters, working modes, fault conditions, and alarms through the interactive interface: the data operation processor 310 computes the display information and controls the projection device so that the machine information is reflected dynamically in real time, making it convenient for the user to monitor and control the running state of the actuator.
To better detect the operating state of the actuator, the self-walking mowing system further comprises a voltage sensor and/or a current sensor, a rainfall sensor, and a boundary recognition sensor. Typically these sensors are disposed within the actuator. The voltage sensor and the current sensor detect the voltage and current values during operation of the actuator so that its current operating information can be analyzed. The rainfall sensor detects the rain conditions in the actuator's environment. The boundary recognition sensor detects the boundary of the working area, and may be a sensor that cooperates with a buried electronic boundary wire, an imaging device that acquires environmental information by imaging, or a positioning device.
Optionally, the current rainfall information is detected by the rainfall sensor, and the generated simulated live-action graph is updated to simulate the corresponding raining scene and rainfall amount. The surrounding environment and height information of the actuator are acquired through detection devices such as a laser radar, a camera, and a state sensor, and are correspondingly displayed in the simulated live-action graph. Optionally, a capacitive sensor is provided to detect the load on the mowing blade, from which the grass height after the actuator's operation can be simulated.
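As a hedged illustration of how such sensor readings could drive the simulated scene (the thresholds, units, and field names below are invented for the sketch and are not taken from the patent):

```python
def scene_state(rain_mm_per_h, blade_load_pct):
    """Map raw sensor readings to display attributes of the simulated scene.

    rain_mm_per_h: rainfall sensor reading; blade_load_pct: capacitive
    blade-load sensor reading as a percentage. Thresholds are placeholders.
    """
    if rain_mm_per_h <= 0:
        rain = "none"
    elif rain_mm_per_h < 2.5:
        rain = "light"
    else:
        rain = "heavy"
    # A higher blade load suggests taller, uncut grass at the current spot.
    grass = "tall" if blade_load_pct > 60 else "short"
    return {"rain": rain, "grass": grass}
```

The image processor would then render the returned attributes into the simulated live-action graph on each update cycle.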
The computing assembly in the above embodiments is communicably connected with the actuator. At least part of the computing assembly may be disposed inside or outside the actuator, and the computing assembly transmits signals to the controller of the actuator to control the operation of the output motor and the walking motor, thereby controlling the walking and mowing state of the actuator.
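A minimal sketch of such a transmission, assuming a JSON control frame whose field names are purely illustrative (the patent does not specify a message format):

```python
import json


def drive_command(blade_rpm, left_wheel_rpm, right_wheel_rpm):
    """Serialize one control frame for the actuator's controller.

    The JSON transport and the field names are assumptions for this sketch;
    the patent only states that the computing assembly signals the controller
    to drive the output motor and the walking motors.
    """
    return json.dumps({
        "blade_rpm": blade_rpm,          # output (mowing) motor speed
        "left_wheel_rpm": left_wheel_rpm,   # walking motor, left wheel
        "right_wheel_rpm": right_wheel_rpm,  # walking motor, right wheel
    })
```

Unequal left/right wheel speeds would produce a turn, which is how the walking assembly could follow a curved path.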
In a fifth embodiment of the present invention, referring to fig. 27, an outdoor self-walking device is proposed, which may be a snowplow, comprising: an actuator 100d including a walking assembly 110d for implementing a walking function and a working assembly for implementing a preset function; a housing for supporting the actuator 100d; an image acquisition module 400d capable of acquiring a real-time image 530d comprising at least part of the working area and at least part of the working boundary; a display module 500d electrically or communicatively connected to the image acquisition module 400d, the display module 500d configured to display the real-time image 530d or a simulated live-action image 540d generated from the real-time image 530d; a boundary generation module 700d that generates a first virtual boundary corresponding to the working boundary in the real-time image 530d by calculating feature parameters to form a first fused image; a receiving module 200d configured to receive information input by the user indicating whether the first virtual boundary in the first fused image needs to be corrected; a correction module 800d that, when the user inputs information indicating that the first virtual boundary needs to be corrected, receives a user instruction to correct the first virtual boundary so as to generate a second virtual boundary 730d in the real-time image 530d or the simulated live-action image 540d to form a second fused image; a transmitting module 600d for transmitting the first fused image when no correction is needed, or the corrected second fused image; and a control module 300d electrically or communicatively connected to the transmitting module 600d, the control module 300d controlling the actuator 100d to operate within the first virtual boundary or the second virtual boundary 730d.
Optionally, the boundary generation module 700d generates the first virtual boundary corresponding to the working boundary in the real-time image 530d by calculating feature parameters to form a first fused image; the transmitting module 600d transmits the first fused image; and the control module 300d, electrically or communicatively connected to the transmitting module 600d, controls the actuator 100d to operate within the first virtual boundary.
Optionally, the outdoor self-walking device further includes an obstacle generation module configured to generate a virtual obstacle identifier corresponding to the obstacle in the real-time image 530d according to an instruction input by the user, so as to form a first fused image; the image acquisition module 400d acquires a real-time image 530d including at least part of the working area and at least one obstacle located within the working area; and the control module 300d, electrically or communicatively connected to the transmitting module 600d, controls the actuator 100d to avoid the virtual obstacle in the first fused image.
Optionally, the obstacle generation module generates a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d by calculating feature parameters to form a first fused image, and the control module 300d controls the actuator 100d to avoid the virtual obstacle in the first fused image.
Optionally, the obstacle generation module generates a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d or the simulated live-action image 540d by calculating feature parameters to form a first fused image; the receiving module 200d receives information input by the user indicating whether the first virtual obstacle identifier in the first fused image needs to be corrected; when the user inputs information indicating that the first virtual obstacle identifier needs to be corrected, the correction module 800d receives a user instruction to correct the first virtual obstacle identifier so as to generate a second virtual obstacle identifier in the real-time image 530d or the simulated live-action image 540d to form a second fused image; the transmitting module 600d transmits the first fused image when no correction is needed, or the corrected second fused image; and the control module 300d, electrically or communicatively connected to the transmitting module 600d, controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image.
Optionally, the obstacle generation module generates a first virtual obstacle identifier in the real-time image 530d or the simulated live-action image 540d according to an instruction input by the user to form a first fused image; the transmitting module 600d transmits the first fused image; and the control module 300d, electrically or communicatively connected to the transmitting module 600d, controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image.
Optionally, the outdoor self-walking device further includes a path generation module that generates a walking path in the real-time image 530d or the simulated live-action image 540d according to an instruction input by the user to form a first fused image; the transmitting module 600d transmits the first fused image; and the control module 300d, electrically or communicatively connected to the transmitting module 600d, controls the walking assembly 110d to walk along the walking path in the first fused image.
Optionally, the path generation module generates a first walking path in the real-time image 530d or the simulated live-action image 540d by calculating feature parameters of the working area to form a first fused image; the receiving module 200d is configured to receive information input by the user indicating whether the first walking path in the first fused image needs to be corrected; when the user inputs information indicating that the first walking path needs to be corrected, the correction module 800d receives a user instruction to correct the first walking path so as to generate a second walking path in the real-time image 530d or the simulated live-action image 540d to form a second fused image; the transmitting module 600d transmits the first fused image when no correction is needed, or the corrected second fused image; and the control module 300d, electrically or communicatively connected to the transmitting module 600d, controls the walking assembly 110d to walk along the first walking path in the first fused image or the second walking path in the second fused image.
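The generate-confirm-correct flow shared by these embodiments can be sketched as follows; the function names and the representation of a virtual boundary as a polygon of (x, y) vertices are assumptions made for this illustration, not details from the patent:

```python
def resolve_boundary(auto_boundary, needs_correction, user_boundary=None):
    """Return the boundary the control module should enforce.

    If the user accepts the auto-generated first virtual boundary, use it;
    otherwise use the user-corrected second virtual boundary.
    """
    if not needs_correction:
        return auto_boundary
    if user_boundary is None:
        raise ValueError("correction requested but no corrected boundary given")
    return user_boundary


def inside(point, boundary):
    """Ray-casting point-in-polygon test against a virtual boundary
    given as a list of (x, y) vertices."""
    x, y = point
    hit = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit
```

The control module would then refuse walking targets for which `inside(...)` is false, keeping the actuator within the confirmed boundary.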
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be appreciated by persons skilled in the art that the above embodiments are not intended to limit the invention in any way, and that all technical solutions obtained by means of equivalent substitutions or equivalent transformations fall within the scope of the invention.

Claims (7)

1. A self-propelled mowing system comprising:
an actuator comprising a mowing assembly for realizing a mowing function and a walking assembly for realizing a walking function;
a housing for supporting the actuator;
an image acquisition module capable of acquiring a real-time image comprising at least a portion of a mowing area and at least one obstacle located within the mowing area;
a display module electrically or communicatively connected to the image acquisition module, the display module configured to display the real-time image or a simulated live-action image generated from the real-time image;
an obstacle generation module for generating a first virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated live-action image by calculating feature parameters so as to form a first fused image;
a receiving module for receiving information input by the user indicating whether the first virtual obstacle identifier in the first fused image needs to be corrected;
a correction module for receiving, when the user inputs information indicating that the first virtual obstacle identifier needs to be corrected, a user instruction to correct the first virtual obstacle identifier so as to generate a second virtual obstacle identifier in the real-time image or the simulated live-action image to form a second fused image;
a sending module for sending the first fused image when no correction is needed, or the corrected second fused image; and
a control module electrically or communicatively connected to the sending module, wherein the control module controls the actuator to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image, and controls the simulated live-action image to synchronously update its display content with the real-time running state of the actuator so as to match the actual running state of the actuator.
2. The self-propelled mowing system as set forth in claim 1, wherein the image acquisition module comprises one of, or a combination of, an image sensor, a laser radar, an ultrasonic sensor, a camera, and a TOF sensor.
3. The self-propelled mowing system as set forth in claim 1, further comprising a boundary generation module, wherein the boundary generation module generates a first virtual boundary according to information of a first boundary of the mowing area acquired by the image acquisition module, and the control module controls the actuator to walk within the first boundary corresponding to the first virtual boundary.
4. The self-propelled mowing system as set forth in claim 3, further comprising a path generation module, wherein the path generation module automatically generates a walking path within the first virtual boundary, and the control module controls the actuator to walk within the first boundary according to the walking path.
5. The self-propelled mowing system as set forth in claim 1, wherein the display module comprises a projection device through which the simulated live-action image or the real-time image is projected, and the projection device comprises a mobile phone screen, a hardware display screen, VR glasses, and AR glasses.
6. The self-propelled mowing system as set forth in claim 5, wherein the control module comprises a data operation processor for processing data and an image processor for image generation and scene modeling, and the data operation processor establishes a pixel coordinate system and an actuator coordinate system to convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier into position information of the actual obstacle.
7. An outdoor self-walking device comprising:
an actuator comprising a walking assembly for realizing a walking function and a working assembly for realizing a preset function;
a housing for supporting the actuator;
an image acquisition module capable of acquiring a real-time image comprising at least a portion of a work area and at least one obstacle located within the work area;
a display module electrically or communicatively connected to the image acquisition module, the display module configured to display the real-time image or a simulated live-action image generated from the real-time image;
an obstacle generation module for generating a first virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated live-action image by calculating feature parameters so as to form a first fused image;
a receiving module for receiving information input by the user indicating whether the first virtual obstacle identifier in the first fused image needs to be corrected;
a correction module for receiving, when the user inputs information indicating that the first virtual obstacle identifier needs to be corrected, a user instruction to correct the first virtual obstacle identifier so as to generate a second virtual obstacle identifier in the real-time image or the simulated live-action image to form a second fused image;
a sending module for sending the first fused image when no correction is needed, or the corrected second fused image; and
a control module electrically or communicatively connected to the sending module, wherein the control module controls the actuator to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image, and controls the simulated live-action image to synchronously update its display content with the real-time running state of the actuator so as to match the actual running state of the actuator.
CN201911409440.1A 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment Active CN112673799B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019109925528 2019-10-18
CN201910992552 2019-10-18

Publications (2)

Publication Number Publication Date
CN112673799A CN112673799A (en) 2021-04-20
CN112673799B true CN112673799B (en) 2024-06-21

Family

ID=75445228

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201911409440.1A Active CN112673799B (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911409198.8A Pending CN112684785A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911409201.6A Active CN112764416B (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911417081.4A Active CN112684786B (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN201911409198.8A Pending CN112684785A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911409201.6A Active CN112764416B (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911417081.4A Active CN112684786B (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment

Country Status (1)

Country Link
CN (4) CN112673799B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112673799B (en) * 2019-10-18 2024-06-21 南京泉峰科技有限公司 Self-walking mowing system and outdoor walking equipment
CN113950934A (en) * 2021-11-02 2022-01-21 甘肃畜牧工程职业技术学院 Lawn mower visual system capable of being remotely controlled
CN114115265A (en) * 2021-11-23 2022-03-01 未岚大陆(北京)科技有限公司 Path processing method of self-moving equipment and self-moving equipment
CN114554142A (en) * 2021-12-31 2022-05-27 南京苏美达智能技术有限公司 Image display technology for self-walking equipment and application
CN116088533B (en) * 2022-03-24 2023-12-19 未岚大陆(北京)科技有限公司 Information determination method, remote terminal, device, mower and storage medium
CN115202344A (en) * 2022-06-30 2022-10-18 未岚大陆(北京)科技有限公司 Mowing method and device for working boundary of mower, storage medium and mower
CN115500143B (en) * 2022-11-02 2023-08-29 无锡君创飞卫星科技有限公司 Mower control method and device with laser radar
CN117191030A (en) * 2023-09-08 2023-12-08 深圳市鑫旭源环保有限公司 Path planning method and device for cleaning robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2689650A1 (en) * 2012-07-27 2014-01-29 Honda Research Institute Europe GmbH Trainable autonomous lawn mower
CN206115271U (en) * 2016-09-20 2017-04-19 深圳市银星智能科技股份有限公司 Mobile robot with manipulator arm traction device
CN108444390A (en) * 2018-02-08 2018-08-24 天津大学 A kind of pilotless automobile obstacle recognition method and device
CN109258060A (en) * 2018-08-24 2019-01-25 宁波市德霖机械有限公司 Map structuring intelligent grass-removing based on particular image mark identification
CN112684785A (en) * 2019-10-18 2021-04-20 南京德朔实业有限公司 Self-walking mowing system and outdoor walking equipment

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3318170B2 (en) * 1995-11-02 2002-08-26 株式会社日立製作所 Route generation method for automatic traveling machinery
JP3237705B2 (en) * 1999-02-04 2001-12-10 日本電気株式会社 Obstacle detection device and moving object equipped with obstacle detection device
CN101777263B (en) * 2010-02-08 2012-05-30 长安大学 Traffic vehicle flow detection method based on video
TW201305761A (en) * 2011-07-21 2013-02-01 Ememe Robot Co Ltd An autonomous robot and a positioning method thereof
KR101334961B1 (en) * 2011-08-03 2013-11-29 엘지전자 주식회사 Lawn mower robot system and control method for the same
CN103891464B (en) * 2012-12-28 2016-08-17 苏州宝时得电动工具有限公司 Automatically mow system
US9420741B2 (en) * 2014-12-15 2016-08-23 Irobot Corporation Robot lawnmower mapping
CN105468033B (en) * 2015-12-29 2018-07-10 上海大学 A kind of medical arm automatic obstacle-avoiding control method based on multi-cam machine vision
CN106155053A (en) * 2016-06-24 2016-11-23 桑斌修 A kind of mowing method, device and system
CN106527424B (en) * 2016-09-20 2023-06-09 深圳银星智能集团股份有限公司 Mobile robot and navigation method for mobile robot
CN106647765B (en) * 2017-01-13 2021-08-06 深圳拓邦股份有限公司 Planning platform based on mowing robot
US10583561B2 (en) * 2017-08-31 2020-03-10 Neato Robotics, Inc. Robotic virtual boundaries
CN107976998A (en) * 2017-11-13 2018-05-01 河海大学常州校区 A kind of grass-removing robot map building and path planning system and method
WO2019096262A1 (en) * 2017-11-16 2019-05-23 南京德朔实业有限公司 Intelligent lawn mowing system
CN108829103A (en) * 2018-06-15 2018-11-16 米亚索能光伏科技有限公司 Control method, weeder, terminal, equipment and the storage medium of weeder
CN109063575B (en) * 2018-07-05 2022-12-23 中国计量大学 Intelligent mower autonomous and orderly mowing method based on monocular vision
CN109062225A (en) * 2018-09-10 2018-12-21 扬州方棱机械有限公司 The method of grass-removing robot and its generation virtual boundary based on numerical map
CN109491397B (en) * 2019-01-14 2021-07-30 傲基科技股份有限公司 Mowing robot and mowing area defining method thereof
CN109634286B (en) * 2019-01-21 2021-06-25 傲基科技股份有限公司 Visual obstacle avoidance method for mowing robot, mowing robot and readable storage medium
CN109634287B (en) * 2019-01-22 2022-02-01 重庆火虫创新科技有限公司 Mower path planning method and system
CN109871013B (en) * 2019-01-31 2022-12-09 莱克电气股份有限公司 Cleaning robot path planning method and system, storage medium and electronic equipment
CN109828584A (en) * 2019-03-01 2019-05-31 重庆润通智能装备有限公司 Lawn to be cut removes, the paths planning method after addition barrier and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2689650A1 (en) * 2012-07-27 2014-01-29 Honda Research Institute Europe GmbH Trainable autonomous lawn mower
CN206115271U (en) * 2016-09-20 2017-04-19 深圳市银星智能科技股份有限公司 Mobile robot with manipulator arm traction device
CN108444390A (en) * 2018-02-08 2018-08-24 天津大学 A kind of pilotless automobile obstacle recognition method and device
CN109258060A (en) * 2018-08-24 2019-01-25 宁波市德霖机械有限公司 Map structuring intelligent grass-removing based on particular image mark identification
CN112684785A (en) * 2019-10-18 2021-04-20 南京德朔实业有限公司 Self-walking mowing system and outdoor walking equipment

Also Published As

Publication number Publication date
CN112684785A (en) 2021-04-20
CN112684786B (en) 2024-08-09
CN112764416B (en) 2024-06-18
CN112764416A (en) 2021-05-07
CN112673799A (en) 2021-04-20
CN112684786A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN112673799B (en) Self-walking mowing system and outdoor walking equipment
EP3613270A1 (en) Intelligent mower based on lidar map building
US20220151147A1 (en) Self-moving lawn mower and supplementary operation method for an unmowed region thereof
EP3237983B1 (en) Robotic vehicle grass structure detection
CN113128747B (en) Intelligent mowing system and autonomous image building method thereof
US9603300B2 (en) Autonomous gardening vehicle with camera
EP3686704B1 (en) Method for generating a representation and system for teaching an autonomous device operating based on such representation
EP3158409B1 (en) Garden visualization and mapping via robotic vehicle
US10809740B2 (en) Method for identifying at least one section of a boundary edge of an area to be treated, method for operating an autonomous mobile green area maintenance robot, identifying system and green area maintenance system
CN113115621B (en) Intelligent mowing system and autonomous image building method thereof
US20220217902A1 (en) Self-moving mowing system, self-moving mower and outdoor self-moving device
CN114937258B (en) Control method for mowing robot, and computer storage medium
CN114721385A (en) Virtual boundary establishing method and device, intelligent terminal and computer storage medium
US20230320263A1 (en) Method for determining information, remote terminal, and mower
CN219349399U (en) Mobile system and gardening work system
CN112438112B (en) Self-walking mower
WO2024038852A1 (en) Autonomous operating zone setup for a working vehicle or other working machine
KR102702778B1 (en) System and method for providing an augmented reality navigation screen for agricultural vehicles
CN207867343U (en) A kind of smart home grass-removing robot
CN118605484A (en) Mobile system and gardening work system
CN117850418A (en) Self-walking device running according to virtual boundary and virtual boundary generation method thereof

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 211106 No. 529, 159, Jiangjun Avenue, Jiangning District, Nanjing, Jiangsu Province

Applicant after: Nanjing Quanfeng Technology Co.,Ltd.

Address before: No. 529, Jiangjun Avenue, Jiangning Economic and Technological Development Zone, Nanjing, Jiangsu Province

Applicant before: NANJING CHERVON INDUSTRY Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant