
CN114488879B - Robot control method and robot - Google Patents

Robot control method and robot

Info

Publication number: CN114488879B
Authority: CN (China)
Prior art keywords: behavior information, information, target user, associated behavior, executed
Legal status: Active; application granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202111653094.9A
Other languages: Chinese (zh)
Other versions: CN114488879A
Inventor: 王晨阳
Current and original assignee: Shenzhen Pengxing Intelligent Research Co Ltd (the listed assignee may be inaccurate)
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd
Priority application: CN202111653094.9A
Publication of application: CN114488879A
Publication of grant: CN114488879B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D57/00 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track
    • B62D57/02 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track, with ground-engaging propulsion means, e.g. walking members
    • B62D57/032 Vehicles with ground-engaging propulsion means with alternately or sequentially lifted supporting base and legs; with alternately or sequentially lifted feet or skid
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/25 Pc structure of the system
    • G05B2219/25257 Microcontroller

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the application discloses a robot control method and a robot, which are used to enhance the interactivity between the robot and a user and thereby improve the user experience. The method of the embodiment of the application is applied to a robot control system and comprises the following steps: acquiring the current behavior information of a target user; searching a database for associated behavior information matched with the current behavior information of the target user, according to that behavior information combined with the current time information and/or the current position information of the target user; if at least one piece of associated behavior information is found and it does not comprise a corresponding object to be executed, recommending the associated behavior information to the target user; if at least one piece of associated behavior information is found and it comprises a corresponding object to be executed, searching for the object to be executed in the acquired environment image information, and, if the object is found, recommending the associated behavior information to the target user.

Description

Robot control method and robot
Technical Field
The embodiment of the application relates to the technical field of robot control, in particular to a robot control method and a robot.
Background
Industrial robot arms are mechanical and electronic devices with anthropomorphic arm, wrist and hand functions. With the continuous development of technology, the robot arms applied to industrial production are gradually spreading in the form of cooperative (collaborative) mechanical arms. Compared with the traditional industrial mechanical arm, the cooperative mechanical arm is small, highly flexible and easy to install; its main application is human-machine collaboration, completing a piece of work together with a person.
A collaborative mechanical arm mounted on a robot assists a user in completing tasks by operating household articles or tools. In the prior art, the user is required to input a specific operation instruction into the robot control system in order to control the robot to execute daily tasks semi-autonomously or autonomously; the control system cannot derive and expand other related operation schemes from the instruction the user entered and offer them for the user to choose from.
Moreover, while the robot control system controls the robot to execute an operation instruction entered by the user, it cannot feed back or adjust the operation scheme according to data such as the scene, the time, and the objects used in the specific application. This data therefore never forms a closed loop, and the control system cannot output or recommend related behavior information to the user based on the user's current behavior or position information, which degrades the user experience.
Disclosure of Invention
The embodiment of the application provides a robot control method and a robot, which determine the user's requirement from the user's current behavior information and output/recommend associated behavior information to the user, enhancing the interactivity between the robot and the user and thereby improving the user experience.
The present application provides, from a first aspect, a robot control method applied to a robot control system, including:
Acquiring current behavior information of a target user;
Searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user, combining current time information and/or the current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
If at least one piece of associated behavior information is searched and the at least one piece of associated behavior information does not comprise a corresponding object to be executed, recommending the associated behavior information to the target user;
If at least one piece of associated behavior information is searched and the at least one piece of associated behavior information comprises a corresponding object to be executed, searching the object to be executed from the acquired environment image information, and if the object to be executed is searched, recommending the associated behavior information to the target user.
The present application provides, from a second aspect, a robot comprising:
the first acquisition unit is used for acquiring current behavior information of the target user;
the first search unit is used for searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user, combining the current time information and/or the current position information of the target user, wherein the associated behavior information comprises associated actions and/or a target object to be executed;
The first execution unit is used for recommending the associated behavior information to the target user when the first search unit searches at least one piece of associated behavior information and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed;
The first execution unit is further configured to search for a target object to be executed from the acquired environmental image information when the first search unit searches for at least one piece of associated behavior information and the at least one piece of associated behavior information includes a corresponding target object to be executed, and recommend the associated behavior information to the target user if the target object to be executed is searched for.
From the above technical solutions, the embodiment of the present application has the following advantages:
First, the current behavior information of a target user is obtained, and the behavior database is searched according to that behavior information together with the current time information and/or the current position information. When associated behavior information matching the target user's current behavior is found, it is recommended to the target user. By searching the database for matching associated behavior information based on this related information and recommending the result to the user, the application outputs/recommends related behavior information from the user's current behavior or position, which enhances the interactivity between the robot and the user and improves the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of the hardware structure of the quadruped robot provided by the present application;
FIG. 2 is a schematic diagram of the mechanical structure of the quadruped robot provided by the present application;
FIG. 3 is a schematic flow chart of an embodiment of a robot control method according to the present application;
FIG. 4 is a schematic flow chart of another embodiment of a robot control method according to the present application;
FIG. 5 is a schematic flow chart of another embodiment of a robot control method according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a robot according to the present application;
FIG. 7 is a schematic structural diagram of another embodiment of a robot according to the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a robot control device provided by the present application;
FIG. 9 is a data structure diagram of the object recognition dictionary provided by the present application.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
Referring to fig. 1, fig. 1 is a schematic hardware structure of a multi-legged robot 100 according to one embodiment of the present invention. In the embodiment shown in fig. 1, the multi-legged robot 100 includes a mechanical unit 101, a communication unit 102, a sensing unit 103, an interface unit 104, a storage unit 105, a control module 110, and a power source 111. The various components of the multi-legged robot 100 can be connected in any manner, including wired or wireless connections. It will be appreciated by those skilled in the art that the specific structure shown in fig. 1 does not limit the multi-legged robot 100: the multi-legged robot 100 may include more or fewer components than illustrated, and components that are not essential may be omitted entirely or combined as necessary within a range that does not change the essence of the invention.
The various components of the multi-legged robot 100 are described in detail below in conjunction with fig. 1:
The mechanical unit 101 is the hardware of the multi-legged robot 100. As shown in fig. 1, the mechanical unit 101 may include a drive plate 1011, a motor 1012, and a mechanical structure 1013. As shown in fig. 2, the mechanical structure 1013 may include a body 1014, extendable legs 1015, and feet 1016; in other embodiments, the mechanical structure 1013 may further include an extendable mechanical arm, a rotatable head structure, a swingable tail structure, a carrying structure, a saddle structure, a camera structure, and the like. It should be noted that the number of each component module of the mechanical unit 101 may be one or more and may be set according to the specific situation; for example, the number of legs 1015 may be 4, and with 3 motors 1012 configured per leg 1015, the corresponding number of motors 1012 is 12.
The communication unit 102 may be used for receiving and transmitting signals, or for communicating with networks and other devices, for example receiving command information sent by a remote controller or another multi-legged robot 100 to move in a specific direction at a specific speed value according to a specific gait, and then transmitting that command information to the control module 110 for processing. The communication unit 102 includes, for example, a WiFi module, a 4G module, a 5G module, a Bluetooth module, an infrared module, and the like.
The sensing unit 103 is used to acquire information about the environment surrounding the multi-legged robot 100 and to monitor parameter data of the components inside it, sending the data to the control module 110. The sensing unit 103 includes various sensors. Sensors that acquire surrounding environment information include: lidar (for remote object detection, distance determination and/or velocity determination), millimeter-wave radar (for short-range object detection, distance determination and/or velocity determination), cameras, infrared cameras, global navigation satellite systems (GNSS), and the like. Sensors that monitor the components inside the multi-legged robot 100 include: an inertial measurement unit (IMU, for measuring velocity, acceleration and angular velocity values), plantar sensors (for monitoring the plantar force point position, plantar posture, and touchdown force magnitude and direction), and temperature sensors (for detecting component temperature). Other sensors such as load sensors, touch sensors, motor angle sensors and torque sensors may additionally be configured for the multi-legged robot 100 and are not detailed here.
The interface unit 104 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the multi-legged robot 100, or may be used to output (e.g., data information, power, etc.) to an external device. The interface unit 104 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting devices having identification modules, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 105 is used to store software programs and various data. The storage unit 105 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system program, a motion control program, application programs (such as a text editor), and the like, and the data storage area may store data generated by the multi-legged robot 100 in use (such as the various sensed data acquired by the sensing unit 103 and log file data). In addition, the storage unit 105 may include high-speed random access memory, and may also include non-volatile memory, such as disk memory, flash memory, or other non-volatile solid-state memory.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The input unit 107 may be used to receive input numeric or character information. In particular, the input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect the user's touch operations (e.g., operations on or near the touch panel 1071 using a palm, a finger, or a suitable accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device 1073 and a touch controller 1074. The touch detection device 1073 detects the user's touch orientation, detects the signal caused by the touch operation, and transmits the signal to the touch controller 1074; the touch controller 1074 receives the touch information from the touch detection device 1073, converts it into touch point coordinates, sends the coordinates to the control module 110, and can receive and execute commands sent from the control module 110. Besides the touch panel 1071, the input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a remote control handle and the like; this is not limited herein.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the control module 110 to determine the type of touch event, and then the control module 110 provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions, which is not limited herein.
The control module 110 is a control center of the multi-legged robot 100, connects the respective components of the entire multi-legged robot 100 using various interfaces and lines, and performs overall control of the multi-legged robot 100 by running or executing a software program stored in the storage unit 105, and calling data stored in the storage unit 105.
The power supply 111 is used to supply power to the various components, and the power supply 111 may include a battery and a power control board for controlling functions such as battery charging, discharging, and power consumption management. In the embodiment shown in fig. 1, the power source 111 is electrically connected to the control module 110, and in other embodiments, the power source 111 may be further electrically connected to the sensing unit 103 (such as a camera, a radar, a speaker, etc.), and the motor 1012, respectively. It should be noted that each component may be connected to a different power source 111, or may be powered by the same power source 111.
On the basis of the above embodiments, in some embodiments a terminal device may be in communication connection with the multi-legged robot 100. When the terminal device communicates with the multi-legged robot 100, instruction information may be sent to the multi-legged robot 100 through the terminal device; the multi-legged robot 100 receives the instruction information through the communication unit 102 and transmits it to the control module 110, so that the control module 110 can derive the target speed value from the instruction information. Terminal devices include, but are not limited to: mobile phones, tablet computers, servers, personal computers, wearable intelligent devices, and other electrical equipment with an image capture function.
The instruction information may be determined according to preset conditions. In one embodiment, the multi-legged robot 100 may include a sensing unit 103, and the sensing unit 103 may generate instruction information according to the current environment in which the multi-legged robot 100 is located. The control module 110 may determine whether the current speed value of the multi-legged robot 100 satisfies the corresponding preset condition according to the instruction information. If so, the current speed value and current gait movement of the multi-legged robot 100 are maintained; if not, the target speed value and the corresponding target gait are determined according to the corresponding preset conditions, so that the multi-legged robot 100 can be controlled to move at the target speed value and the corresponding target gait. The environmental sensor may include a temperature sensor, a barometric pressure sensor, a visual sensor, an acoustic sensor. The instruction information may include temperature information, air pressure information, image information, sound information. The communication mode between the environment sensor and the control module 110 may be wired communication or wireless communication. Means of wireless communication include, but are not limited to: wireless networks, mobile communication networks (3G, 4G, 5G, etc.), bluetooth, infrared.
The hardware configuration and the mechanical configuration of the robot provided by the present application are described above, and the robot control method and the control device provided by the present application are described below.
In the prior art, a collaborative mechanical arm is mounted on the robot and assists the user in completing tasks by operating household articles or tools. The user must enter a specific operation instruction into the robot control system so that the robot executes daily tasks semi-autonomously or autonomously; the control system cannot derive and expand other related operation schemes from the entered instruction for the user to choose from. Consequently, when the robot control system controls the robot to execute the operation instruction entered by the user, it cannot feed back or adjust the operation scheme according to data such as the scene, the time, and the articles used in the specific application. The data therefore never forms a closed loop, and the control system cannot output or recommend related behavior information to the user based on the recognized objects selected by the user, which degrades the user experience.
Based on the above, the application provides a robot control method and a control device applied to a robot control system, so that the robot control system can output/recommend associated behavior information to the user based on the user's current behavior or position information. For convenience of description, the robot control system is taken as the execution body in this embodiment.
Referring to fig. 3, fig. 3 provides an embodiment of a robot control method according to the present application, the method includes:
301. Acquiring current behavior information of a target user;
When the robot control system receives an operation initiation request from the user, it can control the camera on the robot body to acquire an image of the current field of view, determine from the image the human action of an object such as the target user, and determine the target user's current behavior information by analyzing that action. The image may be acquired by an RGBD camera mounted on the robot.
Specifically, the image captured by the camera under the control of the robot control system may include the physical objects in the camera's current field of view. For example, the image may include objects such as a knife, an apple, a tea table, and a playing television, as well as a target user sitting on a sofa. The robot control system may then determine from these object features that the target user's current position information is "living room", and further determine that the target user's current behavior information is "leaning back and sitting on the sofa".
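As a minimal illustration of how this step might work (the patent does not prescribe an implementation, so every name below is a hypothetical stand-in for the robot's actual perception stack), the detected objects can vote for a room while a pose label supplies the behavior:

```python
# Hypothetical sketch of step 301: infer location and behavior from one
# camera frame. The detection list and the pose label stand in for
# whatever perception models the robot actually runs.

LOCATION_CUES = {
    "living room": {"sofa", "tea table", "television"},
    "kitchen": {"kitchen knife", "dish", "stove"},
}

def infer_location(detected: set) -> str:
    """Pick the room whose cue objects overlap most with the detections."""
    room, cues = max(LOCATION_CUES.items(), key=lambda kv: len(kv[1] & detected))
    return room if cues & detected else "unknown"

def current_behavior(pose_label: str, detected: set) -> str:
    """Combine a pose label with nearby objects, e.g. 'sitting on the sofa'."""
    if pose_label == "sitting" and "sofa" in detected:
        return "sitting on the sofa"
    return pose_label

detected = {"sofa", "tea table", "television", "apple", "knife"}
print(infer_location(detected))               # living room
print(current_behavior("sitting", detected))  # sitting on the sofa
```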
302. Searching associated behavior information matched with the current behavior information of the target user from a database according to the current behavior information of the target user, combining the current time information and/or the current position information of the target user, wherein the associated behavior information comprises associated actions and/or targets to be executed;
The robot control system is provided with a positioning function, by which the robot's location can be determined in real time. Time information includes time point or time period information, which may be divided by season, month, week, day, morning, afternoon, and so on, or according to user-defined periods. For example: spring is months 3-5, summer months 6-8, autumn months 9-11, and winter months 12-2; the day may be pre-subdivided into 1:00-5:00 before dawn, 5:00-8:00 early morning, 8:00-11:00 morning, 11:00-13:00 noon, 13:00-17:00 afternoon, 17:00-19:00 evening, 19:00-20:00 night, and 20:00-24:00 late night.
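The subdivision above maps directly to a small lookup; a minimal sketch using the example boundaries just quoted (the 0:00-1:00 hour is not covered by the quoted example, so the sketch folds it into "late night" as an assumption; all boundaries are user-definable anyway):

```python
# Map an hour of the day to the example period labels above.
from datetime import datetime

DAY_PERIODS = [
    (1, 5, "before dawn"), (5, 8, "early morning"), (8, 11, "morning"),
    (11, 13, "noon"), (13, 17, "afternoon"), (17, 19, "evening"),
    (19, 20, "night"), (20, 24, "late night"),
]

def period_of(ts: datetime) -> str:
    for start, end, label in DAY_PERIODS:
        if start <= ts.hour < end:
            return label
    return "late night"  # 0:00-1:00 fallback (assumption)

print(period_of(datetime(2021, 12, 30, 12, 5)))  # noon
```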
It should be noted that the current location information of the target user may be determined from the environment information acquired by the robot's camera and/or the positioning function of the robot control system itself.
In the embodiment of the application, the database of the target user includes, but is not limited to, records of the behavior data generated by the target user while experiencing the robot's services, together with the execution time of each behavior and an analysis of its word features; it supports the control system and the corresponding services through a query-and-matching engine. As the amount of behavior information selected by the target user for the robot to perform grows, the control system can use the newly added behavior information to continuously refine the database. The data of the database includes: behavior information; the associated behavior information matched with the behavior information; the associated actions corresponding to the associated behavior information; objects to be executed; the operation information corresponding to the objects to be executed; and the auxiliary objects related to that operation information. The objects to be executed and the auxiliary objects may be represented in the database as images, characters, symbols, etc., which is not limited herein.
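To make the record layout concrete, here is an illustrative sketch of one such record as a Python dataclass, using exactly the fields the paragraph lists; the field names themselves are assumptions, not the patent's actual schema:

```python
# Illustrative shape of one database record (field names are assumed).
from dataclasses import dataclass, field

@dataclass
class AssociatedBehaviorRecord:
    behavior: str                  # triggering behavior, e.g. "sitting on the sofa"
    associated_info: str           # associated behavior information, e.g. "eat apple"
    associated_action: str         # associated action, e.g. "eat"
    target_object: str = ""        # object to be executed, empty if absent
    operations: list = field(default_factory=list)         # e.g. ["wash", "peel", "dice"]
    auxiliary_objects: list = field(default_factory=list)  # e.g. ["knife"]
    period: str = ""               # time period of past executions, e.g. "noon"
    location: str = ""             # scene, e.g. "living room"
    executions: int = 0            # execution count, used for ranking later

record = AssociatedBehaviorRecord(
    behavior="sitting on the sofa", associated_info="eat apple",
    associated_action="eat", target_object="apple",
    operations=["wash", "peel", "dice"], auxiliary_objects=["knife"],
    period="noon", location="living room", executions=5,
)
```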
In an embodiment of the present application, the behavior information of the user may include, but is not limited to, sitting, lying, walking, jogging, running, boxing, waving, clapping, and the like, and the location information may include, but is not limited to, outdoor scenes as well as indoor scenes such as a natatorium, living room, kitchen, or bedroom. The robot control system can search the database for associated behavior information based on the target user's current behavior information combined with the current time information. Further, to improve the accuracy of the search, the current application scene can be determined from the current position information, so that associated behavior information is matched according to the application scene, the current time, and the target user's current behavior.
It should be noted that step 302 can be implemented in two orders. The control system may first search the target user's database for associated information matching the current behavior information and then determine the corresponding target object to be executed; or it may first determine the target object to be executed from the current behavior information and then determine the associated behavior information matching that object. The order is not limited. For example, when the current behavior information of the target user is "sitting on the sofa", the current time information is "twelve noon", and the current position information is "living room", the behavior feature "sitting on the sofa" is extracted, and the control system, combining the current time and position information, can match the associated behavior information "eat apple" from the database, further determining the object to be executed as "apple" and the corresponding operation information as washing, peeling, dicing, and so on.
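Given records of that shape, the matching in step 302 reduces to a filtered lookup. A hedged sketch (the patent leaves the query-and-matching engine unspecified; dicts are used here for brevity):

```python
# Sketch of step 302: filter behavior records on the triggering behavior,
# optionally narrowed by time period and/or location.

DB = [
    {"behavior": "sitting on the sofa", "associated_info": "eat apple",
     "period": "noon", "location": "living room"},
    {"behavior": "sitting on the sofa", "associated_info": "eat grapes",
     "period": "afternoon", "location": "living room"},
]

def match_associated(db, behavior, period=None, location=None):
    return [rec for rec in db
            if rec["behavior"] == behavior
            and (period is None or rec["period"] == period)
            and (location is None or rec["location"] == location)]

hits = match_associated(DB, "sitting on the sofa", period="noon", location="living room")
print([h["associated_info"] for h in hits])  # ['eat apple']
```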
303. If at least one piece of associated behavior information is found and it does not comprise a corresponding object to be executed, the associated behavior information is recommended to the target user; if at least one piece of associated behavior information is found and it comprises a corresponding object to be executed, the object to be executed is searched for in the acquired environment image information, and, if it is found, the associated behavior information is recommended to the target user.
In the embodiment of the application, searching for the object to be executed in the acquired environment image information can proceed in two modes. In the first mode, the environment image information is acquired, the images of all objects in it are recognized, each recognized object is compared with the objects to be executed, and the positions in the environment of all matching objects are determined. In the second mode, the target object to be executed is obtained first and then looked for directly in the acquired environment image information; once it is found, the search for other objects stops.
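The two search orders can be sketched as follows; detect_objects() is a stubbed stand-in for the recognition model, returning (name, position) pairs:

```python
# Sketch of the two search modes described above.

def detect_objects(frame):
    """Stub for the recognition model; a real system returns live detections."""
    return [("dish", (0.4, 1.2)), ("kitchen knife", (0.5, 1.0)), ("apple", (0.6, 0.9))]

def mode1_all_positions(frame, targets):
    """Mode 1: recognize everything first, then match every target against
    the full detection list and return each match's position."""
    return {name: pos for name, pos in detect_objects(frame) if name in targets}

def mode2_first_hit(frame, target):
    """Mode 2: look only for the given target and stop at the first hit."""
    for name, pos in detect_objects(frame):
        if name == target:
            return pos
    return None

print(mode1_all_positions(None, {"apple", "kitchen knife"}))
print(mode2_first_hit(None, "apple"))  # (0.6, 0.9)
```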
The object to be executed is the execution object of the robot, and the auxiliary object is an auxiliary tool for the robot. For example, if the associated behavior information is "eat apple", the associated action is "eat", the object to be executed is "apple", the operation information corresponding to the object is "dice" and "peel", and the auxiliary object is "knife". For another example, if the associated behavior information is "beat the back", the object to be executed is the user's back, and the operation information corresponding to it is "hammer" (i.e., pounding).
The robot may analyze which time periods are unsuitable for recommending information to the user and which are suitable, specifically by analyzing the user's current behavior, which environmental object the user is currently interacting with, the state of that object, and so on. For example, if the robot judges from visual information that the user is watching television with great concentration, it is not appropriate to recommend information immediately. The robot can obtain the television content over the network or through visual information; when the television is playing an advertisement, or the user is switching channels with a remote control, it is a suitable moment to recommend the associated behavior information.
In the embodiment of the application, the acquired environment image information can first be input into a pre-built recognition neural network model, which outputs a set; the data in the set are then processed to obtain the recognized objects. The set contains at least one item of data; each item can be an integer within a preset range, and each integer serves as the label of a different object, so the integer representation is unique.
The recognition neural network models referred to here include, but are not limited to, models built on the CNN-based YOLO series, Transformer-based models, and the like.
Then the robot control system matches the collected data with the corresponding Chinese character strings through the object recognition dictionary; the object whose Chinese character string corresponds to the name of the target object to be executed is the robot's execution object. In the embodiment of the application, the object recognition dictionary is used to match data with target objects. It contains the integers from 0 up to a preset bound, together with Chinese character strings, each integer matching a unique string. The data structure of the object recognition dictionary is shown in FIG. 9: the keys are the non-repeating preset integers 0-N used in model training and represent the recognizable objects; the values are Chinese character strings; mapping each integer output by the model to the Chinese name of the corresponding object gives the final output, D = {key1: value1, key2: value2, key3: value3}. After the control system obtains a set containing at least one item of data through the recognition neural network model, the data in the set are matched through the object recognition dictionary to obtain at least one Chinese character string.
For example, suppose the control system obtains the output set "{1, 5, 6}" from the recognition neural network model, and the preset integer range of the object recognition dictionary is 0 to 1999, so that 1, 5 and 6 are all valid recognizable data. Passing 1, 5 and 6 through the object recognition dictionary, 1 corresponds to "dish", 5 to "kitchen knife" and 6 to "apple", so a set of three Chinese character strings is output: "{1: dish, 5: kitchen knife, 6: apple}". If the associated behavior information is "eat apple", then the associated action is "eat", the object to be executed is "apple", the operation information corresponding to it is "dice", "peel" and "place", and the auxiliary object is "kitchen knife".
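A minimal sketch of this decoding step, using the worked example above (the dictionary excerpt below contains only the three entries from the example, with English names in place of the Chinese strings):

```python
# The model emits a set of integer labels; the object recognition
# dictionary (FIG. 9) maps each valid key in the preset range 0..1999
# to an object name string.

OBJECT_DICT = {1: "dish", 5: "kitchen knife", 6: "apple"}  # tiny excerpt
VALID = range(0, 2000)

def decode(model_output):
    """Keep labels that are in range and known, mapped to their names."""
    return {k: OBJECT_DICT[k] for k in sorted(model_output)
            if k in VALID and k in OBJECT_DICT}

print(decode({1, 5, 6}))  # {1: 'dish', 5: 'kitchen knife', 6: 'apple'}
```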
Optionally, the control system may recommend the associated behavior information by displaying it on the robot's screen interface, by voice, or through a communication connection established with a third-party device, which is not limited herein.
Alternatively, since the number of pieces of associated behavior information corresponding to an object to be executed is not fixed, it is sometimes impossible to recommend all of them to the target user. To recommend more accurately, the control system may therefore use the frequency of the associated behavior information, the degree of correlation between the associated action and the object to be executed, and so on as screening conditions for the recommended associated behavior information.
In the embodiment of the application, the control system searches the database for associated behavior information matched with the current behavior information according to the current time information, the current behavior information and/or the current position information of the target user, and recommends what it finds to the target user, enhancing the interactivity between the robot and the user. It should be noted that the purpose of the robot control method provided by the embodiment of the application is to extend the robot's executable operation schemes according to the target user's current behavior, scene, time and other data. This improves the control system's ability to feed back and adjust the operation scheme from the corresponding information, closes the loop over the relevant data, and lets the control system recommend the extended operation schemes, i.e. the corresponding associated behavior information, to the user, improving the user experience.
Referring to fig. 4, the robot control method provided by the present application further includes:
403. If at least one piece of associated behavior information is found, determine the first N associated actions executed most often in the past preset time period to be the recommended associated behavior information;
404. If no associated behavior information is found, search the current behavior information of the target user through a networked third-party search engine, combined with the current time information and/or the current position information of the target user, and take the first n results with the highest search frequency as the recommended associated behavior information;
Step 403 follows step 302 of the above-described embodiment.
In the embodiment of the application, the robot control system can search the associated behavior information matched with the current behavior information from the database of the target user according to the current behavior information, in combination with the current time information and/or the current position information.
Because the database records the number of times each associated action has been executed in a past preset time period, the robot control system can, in order to determine which associated behavior information to recommend, search the database for associated behavior information matching the current behavior information combined with the current time and/or position information. If at least one piece is found, the first N associated actions executed most often in the period may be selected as the recommended associated behavior information, and the target object in each piece is taken as the object to be executed. For example, when the control system obtains the current behavior information "sitting on the sofa", the current time "12:05 noon", and the current position information "living room", it matches these against the historical behavior data for that time and place: "eat apple" 5 times, "eat plums" 2 times, "eat grapes" 3 times, "eat waxberries" once, and so on. Since several records match, the control system selects the 3 executed most often as the recommended associated behavior information: "eat apple", "eat grapes", "eat plums", where "apple", "grapes" and "plums" are the objects to be executed and the associated action is "eat".
When no associated behavior information is found in the database, a third-party search engine is consulted instead, and the first n results with the highest search frequency are selected as the recommended associated behavior information; if a piece of associated behavior information contains a target object, that object is taken as the object to be executed.
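A sketch of steps 403/404 using the counts from the example above; web_search() is a hypothetical stand-in for the third-party engine, not an API from the patent:

```python
# Rank the matched history by execution count and keep the top N;
# fall back to an external search when nothing matches.
from collections import Counter

def web_search(query):
    return []  # placeholder; a real system would query a search engine

def recommend(history: Counter, n: int = 3, query: str = "") -> list:
    if history:                                  # step 403
        return [info for info, _ in history.most_common(n)]
    return web_search(query)[:n]                 # step 404

history = Counter({"eat apple": 5, "eat grapes": 3,
                   "eat plums": 2, "eat waxberries": 1})
print(recommend(history))  # ['eat apple', 'eat grapes', 'eat plums']
```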
405. If at least one piece of associated behavior information is found and it does not comprise a corresponding object to be executed, recommend the associated behavior information to the target user. If at least one piece is found and it comprises a corresponding object to be executed, search for the object to be executed in the acquired environment image information; if it is found, judge whether the distance between the object and the target user exceeds a preset length, and if it does, recommend the associated behavior information to the target user;
In the embodiment of the application, when at least one piece of associated behavior information comprising a corresponding object to be executed is found, the associated behavior information is not pushed to the target user if the object cannot be found in the acquired environment image information. If the object is found, the associated behavior information corresponding to it is recommended to the target user only after it is determined that the distance between the object and the target user exceeds the preset length.
In the embodiment of the application, the way the associated behavior information is recommended to the target user can be chosen according to the characteristics of the specific scene. Two different types of recommendation are described below:
1. Recommending the associated behavior information to the target user in a voice mode;
Specifically, in any scene the robot control system can recommend the associated behavior information by voice. Before the voice recommendation, the control system can number each piece of associated behavior information to be pushed, for example "A: eat apple", "B: eat plums", and then play them in order. The target user can then reply directly with a number such as "A", and after receiving the reply the control system has the robot execute the corresponding operation (for example "washing"); the user can also reply with the recommended content directly, for example "wash the apple".
2. Displaying the associated behavior information, or the textual name or a picture of the object to be executed, on the robot's display screen or on the screen of a third-party device with which a communication connection has been established;
Specifically, when the target user is sitting on a sofa watching television or using a mobile phone, the robot control system can acquire the target user's current behavior information (sitting, lying, walking, jogging, running, boxing, waving, clapping, and the like). When the control system further recognizes, from the image captured by the camera on the robot body, a mobile phone with a lit screen or a television that is playing content, it turns on Bluetooth, finds the receiving device corresponding to the phone or television in the surrounding environment, establishes wireless communication with it, and displays the associated behavior information to be recommended on the relevant screen for the target user to browse and select.
Optionally, the control system may also decide from the current time information whether the associated behavior information should be shown on a screen. For example, when the control system finds the current time is 23:00, it can determine the current period is "late night", which is not suitable for disturbing the target user or nearby users by voice. In that case the associated behavior information can be displayed on the screen the target user is currently watching; if no action of watching a screen is recognized, and no lit or playing screen is recognized, the control system lights the robot's own display screen and shows the associated behavior information there.
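A sketch of that channel choice; the late-night window and the screen-fallback order follow the example in the text, while the exact hour boundaries (and treating the pre-dawn hours as quiet too) are assumptions:

```python
# Decide how to deliver a recommendation based on the hour of day and
# whether a lit screen near the user was recognized.

def recommend_channel(hour: int, user_screen_lit: bool) -> str:
    if hour >= 20 or hour < 5:   # quiet hours: avoid disturbing by voice
        return "user screen" if user_screen_lit else "robot display"
    return "voice"

print(recommend_channel(23, False))  # robot display
print(recommend_channel(12, True))   # voice
```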
406. If feedback information from the target user is received within a first preset time period, analyze whether the feedback information includes a meaning representation agreeing to accept the recommended associated behavior information; if so, execute step 407, and if not, end the flow;
407. generating a corresponding first operation instruction, and executing corresponding operation according to the first operation instruction;
In the embodiment of the application, if the target user has a need, he selects the desired associated behavior information from those recommended by the robot control system within a preset time; if not, he refuses the recommendation. The control system therefore needs to judge, within the preset time after sending the associated behavior information, whether the received feedback information agrees with or refuses the recommendation. If it agrees, the target user has a need, and a corresponding first operation instruction is generated from the associated behavior information selected by the target user to control the robot to execute the corresponding operation. If it refuses, the target user has no need, and the flow terminates.
Specifically, when the robot control system recommends the associated behavior information "eat apple" and, within the preset time after the recommendation, receives the target user's voice reply "agree" (or "OK", "eat it without peeling", etc.), it determines that the target user accepts the recommendation and generates the corresponding first operation instruction to control the robot to execute the corresponding operation. If within the preset time it receives a voice reply such as "disagree", "no", or "I'll take a pear instead", it determines that the target user refuses the recommendation and ends the flow.
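A naive sketch of this step-406 analysis; a real system would use speech recognition plus natural language understanding, and the keyword sets here are illustrative assumptions only:

```python
# Classify a reply as consent, refusal, or no decision yet.

def parse_feedback(reply: str):
    words = set(reply.lower().replace(",", " ").split())
    if words & {"disagree", "no", "refuse"}:
        return False     # refusal: end the flow
    if words & {"agree", "ok", "yes"}:
        return True      # consent: generate the first operation instruction
    return None          # undecided: keep waiting within the preset period

print(parse_feedback("OK, eat it without peeling"))    # True
print(parse_feedback("No, I'll take a pear instead"))  # False
```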
408. Acquiring target associated behavior information selected by a target user according to a first operation instruction;
409. Generate a first data set from the target associated behavior information through a first model, where the first model is used to analyze the parts of speech of the words in the target associated behavior information, and the first data set is the set of those words and their corresponding parts of speech;
410. And storing the words and the corresponding part-of-speech data of the first data set in a database according to part-of-speech classification.
In the embodiment of the application, the control system treats the target associated behavior information selected by each target user as new data, analyzes it, and stores it in the database so as to update and iterate the associated behavior information held there.
Specifically, suppose the associated behavior information selected by the user is "eat apple". After the control system controls the robot to execute it, the associated behavior information is passed through any natural language processing model with a part-of-speech analysis function (i.e., the first model), which outputs the first data set "{apple: noun, eat: verb}"; the first data set is then stored in the database according to the Chinese parts of speech.
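One concrete (assumed) choice of "first model" is the part-of-speech tagger of the jieba Chinese NLP library; the patent names no specific model, so this is only a sketch of the interface such a model would expose:

```python
# pip install jieba
import jieba.posseg as pseg

FLAG_NAMES = {"n": "noun", "v": "verb"}  # small subset of jieba's tag set

def first_data_set(associated_info: str) -> dict:
    """Return {word: part of speech}, e.g. for '吃苹果' ('eat apple')."""
    return {pair.word: FLAG_NAMES.get(pair.flag, pair.flag)
            for pair in pseg.cut(associated_info)}

print(first_data_set("吃苹果"))  # expected roughly {'吃': 'verb', '苹果': 'noun'}
# Step 410 would then file each word in the database under its class.
```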
In the embodiment of the application, after the control system recommends the associated behavior information corresponding to the object to be executed to the target user, it can generate a corresponding first operation instruction to control the robot according to feedback information from the target user expressing agreement with the recommendation. It can also obtain the target associated behavior information selected by the target user from the first operation instruction, run the part-of-speech analysis of the first model over it, output the first data set, and store the words and their part-of-speech data in the database classified by part of speech, so that the database is updated iteratively to cope with new application scenes and new user behaviors.
Referring to fig. 5, the robot control method provided by the present application further includes:
501. Judge from the feedback information whether the operation needs the assistance of an auxiliary target object; if so, execute step 502, and if not, execute step 504;
502. searching for an auxiliary target object from the acquired environmental image information;
503. If the auxiliary target object is found, generate a corresponding first operation instruction and execute the corresponding operation; if it is not found, notify the target user, and if the auxiliary target object is then found in the acquired environment image information within a second preset time period after the notification, generate the corresponding first operation instruction and execute the corresponding operation;
504. generating a corresponding first operation instruction, and executing corresponding operation according to the first operation instruction;
step 501 follows step 406 in the above embodiment, and step 504 follows step 408 in the above embodiment.
In the embodiment of the application, the associated behavior information executed by the robot may need an auxiliary tool to complete the associated action. After receiving the feedback information sent by the target user, the robot control system therefore needs to analyze it to determine whether an auxiliary target object is required. If so, the auxiliary target object is searched for before the operation corresponding to the associated behavior information selected by the target user is executed; if not, the corresponding first operation instruction can be generated directly from the feedback information and the operation executed.
Specifically, suppose the control system learns from the feedback information that the associated behavior information selected by the target user is "eat apple". The target object to be executed is then "apple", and the apple needs to be cut with a knife as the auxiliary tool, so the auxiliary object is "knife". The control system searches the acquired environment image information for an object matching "knife". If one is found, a corresponding first operation instruction is generated and the robot is controlled to execute it. If not, the user is asked, through a voice prompt or a message pushed to the user's mobile phone, to place a knife in the shooting area of the robot's camera, so that when the control system acquires the second environment image information captured within the preset time, the knife can be found.
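A sketch of steps 501-503 with stubbed robot APIs; all three helper names below are hypothetical stand-ins for the real vision and notification interfaces, which the patent does not specify:

```python
# Look for the auxiliary object; if absent, prompt the user and retry
# within a second preset time period.
import time

def capture_frame(): return None             # camera stub
def find_in_frame(frame, name): return None  # vision stub
def notify_user(msg): print(msg)             # voice / phone-push stub

def locate_auxiliary(name: str, retry_seconds: float = 30.0):
    pos = find_in_frame(capture_frame(), name)
    if pos is not None:
        return pos                               # found immediately
    notify_user(f"Please place the {name} where my camera can see it")
    deadline = time.monotonic() + retry_seconds  # second preset time period
    while time.monotonic() < deadline:
        pos = find_in_frame(capture_frame(), name)
        if pos is not None:
            return pos
        time.sleep(1.0)
    return None                                  # not found: cannot assist

# e.g. locate_auxiliary("knife") before generating the first operation instruction
```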
In the embodiment of the application, by analyzing the feedback information sent by the target user, the control system can judge whether an auxiliary target object is needed in addition to the target object to be executed; if so, it can actively determine the position of the auxiliary target object and execute the corresponding operation, which increases the robot's ability to carry out the task.
The above embodiments explain the robot control method provided in the present application in detail, and the robot provided in the present application will be described in detail with reference to the accompanying drawings.
Referring to fig. 6, fig. 6 provides an embodiment of a robot according to the present application, including:
A first obtaining unit 601, configured to obtain current behavior information of a target user;
A first search unit 602, configured to search, according to current behavior information of the target user, in combination with current time information and/or current location information of the target user, for associated behavior information matching the current behavior information of the target user from a database, where the associated behavior information includes an associated action and/or a target object to be executed;
The first execution unit 603 is configured to recommend the associated behavior information to the target user when the first search unit 602 finds at least one piece of associated behavior information and it does not include a corresponding target object to be executed. The first execution unit 603 is further configured to search for the target object to be executed in the obtained environment image information when the first search unit 602 finds at least one piece of associated behavior information that includes a corresponding target object to be executed, and, if the object is found, to recommend the associated behavior information to the target user.
In the embodiment of the present application, after the first obtaining unit 601 obtains the current behavior information of the target user, the first searching unit 602 searches the database for associated behavior information matched with that behavior information, combined with the current time information and/or the current position information obtained by the first obtaining unit 601, and the first execution unit 603 recommends the associated behavior information to the user; this enhances the interactivity between the robot and the user and improves the user experience.
Referring to fig. 7, fig. 7 shows another embodiment of a robot control device according to the present application, which includes:
a first obtaining unit 701, configured to obtain current behavior information of a target user;
a first search unit 702, configured to search a database, according to the current behavior information of the target user in combination with current time information and/or the current location information of the target user, for associated behavior information matching the current behavior information of the target user, where the associated behavior information includes an associated action and/or a target object to be executed;
A recommendation determining unit 703, configured to, when at least one piece of associated behavior information is found, determine the top N associated actions executed most frequently within a past preset time period as the recommended associated behavior information; and further configured to, when no associated behavior information is found, search the current behavior information of the target user through a networked third-party search engine and determine the top n results with the highest search frequency that match the current time information and/or the current location information of the target user as the recommended associated behavior information;
A first execution unit 704, configured to recommend the associated behavior information to the target user when the first search unit 702 finds at least one piece of associated behavior information and that information does not include a corresponding target object to be executed; the first execution unit 704 is further configured to, when the first search unit 702 finds at least one piece of associated behavior information that includes a corresponding target object to be executed, search for the target object to be executed in the acquired environmental image information and, if it is found, recommend the associated behavior information to the target user;
a first analysis unit 705, configured to, when feedback information from the target user is received within a first preset time period, analyze whether the feedback information includes a meaning representation agreeing to accept the recommended associated behavior information;
the second execution unit 706 is configured to, when the first analysis unit 705 determines that the feedback information includes a meaning representation agreeing to accept the recommended associated behavior information, generate a corresponding first operation instruction and execute the corresponding operation according to the first operation instruction;
a second obtaining unit 707, configured to obtain the target associated behavior information selected by the target user according to the first operation instruction;
The set generating unit 708 is configured to generate a first data set from the target associated behavior information through a first model, where the first model is used to analyze the parts of speech of the words in the target associated behavior information, and the first data set is a data set of those words and their corresponding parts of speech;
A data storage unit 709, configured to store the words of the first data set and their corresponding part-of-speech data in the database, classified by part of speech.
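A minimal sketch of the pipeline formed by units 708 and 709 might look as follows. The toy part-of-speech tagger is a placeholder assumption, since the patent does not specify what the "first model" is.

```python
def build_first_data_set(target_behavior: str, pos_tagger):
    """First-model stand-in: tag each word with its part of speech.

    `pos_tagger` is any callable mapping a word list to (word, tag) pairs;
    the concrete model is an assumption, not specified by the patent.
    """
    words = target_behavior.split()
    return list(pos_tagger(words))  # the "first data set": (word, part of speech)

def store_by_part_of_speech(database: dict, first_data_set):
    """Store words grouped under their part-of-speech key, as unit 709 does."""
    for word, tag in first_data_set:
        database.setdefault(tag, set()).add(word)

# Minimal usage with a toy tagger (illustrative only):
toy_tagger = lambda ws: [(w, "VERB" if w.endswith("ing") else "NOUN") for w in ws]
db = {}
store_by_part_of_speech(db, build_first_data_set("eating apples", toy_tagger))
# db -> {"VERB": {"eating"}, "NOUN": {"apples"}}
```

In practice the toy lambda would be replaced by a real part-of-speech model; the point of the sketch is the data flow from target associated behavior information to a part-of-speech-classified database.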
In an embodiment of the present application, the second execution unit 706 includes:
A second judging module 7061, configured to judge, according to the feedback information, whether the operation needs to be assisted by an auxiliary target object;
A third searching module 7062, configured to search for an auxiliary target object from the acquired environmental image information when the second judging module 7061 determines that the operation needs to be assisted by the auxiliary target object;
the third execution module 7063 is configured to generate a corresponding operation instruction and execute the corresponding operation when the third search module 7062 finds the auxiliary target object; the third execution module 7063 is further configured to notify the target user when the auxiliary target object is not found, and, if the auxiliary target object is found in the acquired environmental image information within a second preset time period after the target user is notified, generate a corresponding first operation instruction and execute the corresponding operation.
In this embodiment of the present application, the first execution unit 704 is further configured to, when the associated behavior information includes a corresponding target object to be executed, determine whether the distance between the found target object to be executed and the target user exceeds a preset length, and, if it does, recommend the associated behavior information to the target user.
In the embodiment of the present application, the manner of recommending the associated behavior information in the first execution unit 704 includes:
recommending the associated behavior information to the target user by voice;
or
displaying the associated behavior information, or the literal name or a picture of the target object to be executed, on a display screen of the robot or on a screen of a third-party device with which a communication connection has been established.
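The distance gate and the two recommendation channels described above could be combined roughly as follows. The speak/display callbacks and the Euclidean distance check are assumed interfaces, not specified by the patent.

```python
import math
from typing import Callable, Optional, Sequence

def maybe_recommend(action: str, target_object: str,
                    user_pos: Sequence[float], object_pos: Sequence[float],
                    speak: Callable[[str], None],
                    display: Optional[Callable[[str], None]] = None,
                    preset_length_m: float = 1.0) -> bool:
    """Recommend only when the object is farther from the user than the preset length."""
    if math.dist(user_pos, object_pos) <= preset_length_m:
        return False  # object already within reach; skip the recommendation
    if display is not None:
        # screen channel: show the action plus the object's literal name
        display(f"{action}: {target_object}")
    else:
        speak(f"Would you like to {action}?")  # voice channel
    return True
```

Either channel satisfies the embodiment; a real robot might use both, or choose based on whether a third-party device is currently connected.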
Referring to fig. 8, fig. 8 provides an embodiment of a robot apparatus according to the present application, the robot apparatus including:
a processor 801, a memory 802, an input/output unit 803, and a bus 804;
The processor 801 is connected to a memory 802, an input/output unit 803, and a bus 804;
the processor 801 specifically performs the following operations:
Acquiring current behavior information of a target user;
Searching a database, according to the current behavior information of the target user in combination with current time information and/or the current position information of the target user, for associated behavior information matching the current behavior information of the target user, where the associated behavior information includes an associated action and/or a target object to be executed;
if at least one piece of associated behavior information is found and it does not include a corresponding target object to be executed, recommending the associated behavior information to the target user;
if at least one piece of associated behavior information is found and it includes a corresponding target object to be executed, searching for the target object to be executed in the acquired environmental image information, and, if it is found, recommending the associated behavior information to the target user.
In this embodiment, the functions of the processor 801 correspond to the steps in the embodiments shown in fig. 3 to 5, and are not described herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A robot control method applied to a robot control system, comprising:
Acquiring current behavior information of a target user;
Searching a database, according to the current behavior information of the target user in combination with current time information and/or current position information of the target user, for associated behavior information matching the current behavior information of the target user, wherein the associated behavior information comprises an associated action and/or a target object to be executed;
If at least one piece of associated behavior information is found and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed, recommending the associated behavior information to the target user;
If at least one piece of associated behavior information is found and the at least one piece of associated behavior information comprises a corresponding target object to be executed, searching for the target object to be executed in the acquired environmental image information, and, if the target object to be executed is found, recommending the associated behavior information to the target user;
if feedback information from the target user is received within a first preset time period, analyzing whether the feedback information comprises a meaning representation agreeing to accept the recommended associated behavior information;
If so, judging, according to the feedback information, whether the operation needs to be assisted by an auxiliary target object, generating a corresponding first operation instruction, and executing the corresponding operation according to the first operation instruction.
2. The robot control method according to claim 1, wherein the step of recommending the associated behavior information to the target user further comprises:
If the associated behavior information comprises a corresponding target object to be executed, judging whether the distance between the found target object to be executed and the target user exceeds a preset length, and, if the distance exceeds the preset length, recommending the associated behavior information to the target user.
3. The robot control method according to claim 2, wherein the means for recommending the associated behavior information includes:
recommending the associated behavior information to the target user by voice;
or displaying the associated behavior information, or the literal name or a picture of the target object to be executed, on a display screen of the robot or on a screen of a third-party device with which a communication connection is established.
4. A robot control method according to any one of claims 1 to 3, further comprising, prior to said generating the corresponding first operation instruction:
judging, according to the feedback information, whether the operation needs to be assisted by an auxiliary target object, and if so, searching for the auxiliary target object in the acquired environmental image information;
if the auxiliary target object is found, generating a corresponding first operation instruction and executing the corresponding operation;
If the auxiliary target object is not found, notifying the target user, and if the auxiliary target object is found in the acquired environmental image information within a second preset time period after the target user is notified, generating a corresponding first operation instruction and executing the corresponding operation.
5. The robot control method of claim 4, wherein the database comprises the number of times each associated action has been executed within a past preset time period;
the step of searching the database for associated behavior information matching the current behavior information of the target user further comprises:
If at least one piece of associated behavior information is found, determining the top N associated actions with the highest number of executions within the past preset time period as the recommended associated behavior information;
If no associated behavior information is found, searching the current behavior information of the target user through a third-party search engine, and determining the top n results with the highest search frequency that match the current time information and/or the current position information of the target user as the recommended associated behavior information.
6. A robot, comprising:
the first acquisition unit is used for acquiring current behavior information of the target user;
The first search unit, configured to search a database, according to the current behavior information of the target user in combination with current time information and/or current position information of the target user, for associated behavior information matching the current behavior information of the target user, wherein the associated behavior information comprises an associated action and a target object to be executed;
The first execution unit, configured to recommend the associated behavior information to the target user when the first search unit finds at least one piece of associated behavior information and the at least one piece of associated behavior information does not comprise a corresponding target object to be executed;
The first execution unit is further configured to, when the first search unit finds at least one piece of associated behavior information and the at least one piece of associated behavior information comprises a corresponding target object to be executed, search for the target object to be executed in the acquired environmental image information, and, if the target object to be executed is found, recommend the associated behavior information to the target user;
The robot further includes:
The first analysis unit, configured to, when feedback information from the target user is received within a first preset time period, analyze whether the feedback information comprises a meaning representation agreeing to accept the recommended associated behavior information;
And the second execution unit, configured to, when the first analysis unit determines that the feedback information comprises a meaning representation agreeing to accept the recommended associated behavior information, judge, according to the feedback information, whether the operation needs to be assisted by an auxiliary target object, generate a corresponding first operation instruction, and execute the corresponding operation according to the first operation instruction.
7. The robot of claim 6, wherein the first execution unit is further configured to, when the associated behavior information comprises a corresponding target object to be executed, determine whether the distance between the found target object to be executed and the target user exceeds a preset length, and, if the distance exceeds the preset length, recommend the associated behavior information to the target user.
8. The robot of claim 7, wherein the means for recommending the associated behavior information in the first execution unit comprises:
recommending the associated behavior information to the target user by voice;
or
displaying the associated behavior information, or the literal name or a picture of the target object to be executed, on a display screen of the robot or on a screen of a third-party device with which a communication connection is established.
9. The robot of any one of claims 6 to 8, wherein the second execution unit comprises:
The second judging module, configured to judge, according to the feedback information, whether the operation needs to be assisted by an auxiliary target object;
The third search module, configured to search for the auxiliary target object in the acquired environmental image information when the second judging module determines that the operation needs to be assisted by the auxiliary target object;
the third execution module, configured to generate a corresponding operation instruction and execute the corresponding operation when the third search module finds the auxiliary target object;
And the third execution module is further configured to notify the target user when the third search module does not find the auxiliary target object, and, when the auxiliary target object is found in the acquired environmental image information within a second preset time period after the target user is notified, generate a corresponding first operation instruction and execute the corresponding operation.
10. The robot of claim 9, wherein the database comprises the number of times each associated action has been executed within a past preset time period;
the robot further includes:
The recommendation determining unit, configured to, when at least one piece of associated behavior information is found, determine the top N associated actions with the highest number of executions within the past preset time period as the recommended associated behavior information; and further configured to, when no associated behavior information is found, search the current behavior information of the target user through a networked third-party search engine and determine the top n results with the highest search frequency that match the current time information and/or the current position information of the target user as the recommended associated behavior information.