
CN108780361A - Human-computer interaction method and device, robot and computer readable storage medium - Google Patents

Human-computer interaction method and device, robot and computer readable storage medium

Info

Publication number
CN108780361A
Authority
CN
China
Prior art keywords
robot
information
characteristic information
human
interacted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880001295.0A
Other languages
Chinese (zh)
Inventor
张含波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd
Publication of CN108780361A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 - Manipulators not otherwise provided for
    • B25J 11/0005 - Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 - Controls for manipulators
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/80 - Recognising image objects characterised by unique random patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the technical field of robots and discloses a human-computer interaction method and device, a robot, and a computer-readable storage medium. In the present application, the human-computer interaction method is applied to a robot and includes: extracting biometric information of at least one identified object, where the biometric information includes physiological characteristic information and/or behavioral characteristic information; determining, from the at least one object and according to the biometric information, a target interaction object that needs to be interacted with; and controlling the robot to make a response that matches the target interaction object. With this human-computer interaction method, the robot responds only to objects that need interaction, which effectively avoids erroneous responses and greatly improves the user experience.

Description

Human-computer interaction method and device, robot and computer readable storage medium
Technical Field
The present application relates to the field of robotics, and in particular to a human-computer interaction method and apparatus, a robot, and a computer-readable storage medium.
Background
Human-computer interaction (HCI, also called human-machine interaction, HMI) is the study of the interaction between a system and its users. The system may be any of a variety of machines, including computerized systems and software. Take, for example, an interactive robot placed in a public place such as a bank business hall, a large shopping mall, or an airport: the robot can respond through its computer system to provide services for users, such as actively greeting them, answering their questions, or guiding them through business transactions.
However, the inventors found that the prior art has at least the following problem: because public places have heavy foot traffic and various sources of acoustic interference such as announcements and music, existing robots cannot screen out these interference factors and therefore respond continuously. This severely occupies the robot's processing resources, prevents the robot from providing effective services to users who genuinely need help, and seriously degrades the user experience.
Disclosure of Invention
An object of some embodiments of the present application is to provide a human-computer interaction method and apparatus, a robot, and a computer-readable storage medium, so as to solve the above technical problem.
An embodiment of the present application provides a human-computer interaction method, which is applied to a robot, and includes: extracting biometric information of the identified at least one object; wherein the biological characteristic information comprises physiological characteristic information and/or behavior characteristic information; determining a target interaction object needing interaction from at least one object according to the biological characteristic information; the robot is controlled to respond in a manner that matches the target interaction object.
An embodiment of the present application provides a human-computer interaction device, which is applied to a robot, and includes: the device comprises an extraction module, a determination module and a control module; the extraction module is used for extracting the biological characteristic information of the identified at least one object; wherein the biological characteristic information comprises physiological characteristic information and/or behavior characteristic information; the determining module is used for determining a target interaction object needing to be interacted from at least one object according to the biological characteristic information; and the control module is used for controlling the robot to make a response matched with the target interactive object.
One embodiment of the present application provides a robot comprising at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the human-computer interaction method in any method embodiment of the application.
An embodiment of the present application provides a computer-readable storage medium storing computer instructions for causing a computer to execute a human-computer interaction method referred to in any method embodiment of the present application.
Compared with the prior art, when the robot identifies an object it extracts the object's biometric information, uses that information to determine which object truly needs interaction, and then makes a response matched to that object. Through this manner of human-computer interaction, the robot responds only to objects that need interaction, which effectively avoids erroneous responses and greatly improves the user experience.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the drawings are not to scale unless otherwise specified.
FIG. 1 is a flowchart of a human-computer interaction method in a first embodiment of the present application;
FIG. 2 is a schematic diagram of a robot determining a target interaction object in a first embodiment of the present application;
FIG. 3 is a flowchart of a human-computer interaction method according to a second embodiment of the present application;
FIG. 4 is a diagram illustrating a robot determining a target interaction object according to a second embodiment of the present application;
FIG. 5 is a flowchart of a human-computer interaction method according to a third embodiment of the present application;
FIG. 6 is a block diagram of a human-computer interaction device according to a fourth embodiment of the present application;
FIG. 7 is a block diagram of a robot according to a fifth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, some embodiments of the present application will be described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
A first embodiment of the present application relates to a human-computer interaction method, which is applied to a robot, and a specific flow is shown in fig. 1.
It should be noted that "robot" in this embodiment is a generic term for automatically controlled machines, and includes all machines that simulate human behavior or thought as well as machines that simulate other creatures (such as robot dogs, robot cats, etc.).
In step 101, biometric information of at least one identified object is extracted.
Specifically, in this embodiment, the extraction of biometric information of at least one identified object may be triggered when at least one object is detected approaching within a preset range (for example, 5 meters) centered on the robot's position. With this detection method, the robot can perceive objects within a full 360-degree range around its position.
It is worth mentioning that, in this embodiment, identification of an object may be determined by the robot itself, for example via a proximity sensor installed on the robot. After the robot is placed in a public place and started, the proximity sensor can sense whether an object is approaching within a range of 5 meters around the robot. If movement or presence of an object is sensed, the sensed information is converted into an electrical signal, and the robot's processor controls its biometric acquisition device to extract the biometric information of the at least one identified object.
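For ease of understanding, the trigger logic described above can be illustrated with a minimal sketch; the sensor and acquisition interfaces below are hypothetical placeholders for the robot's hardware, not part of the disclosed embodiment.

```python
# Illustrative sketch only: SensedObject and the extract_biometrics callback
# are assumed stand-ins for the robot's proximity sensor and acquisition device.
from dataclasses import dataclass
from typing import List, Optional

DETECTION_RANGE_M = 5.0  # preset range centered on the robot (example value from the text)

@dataclass
class SensedObject:
    object_id: int
    distance_m: float    # distance from the robot
    bearing_deg: float   # direction of the object relative to the robot's heading

def objects_in_range(readings: List[SensedObject]) -> List[SensedObject]:
    """Return the sensed objects within the preset detection range (360-degree coverage)."""
    return [r for r in readings if r.distance_m <= DETECTION_RANGE_M]

def on_sensor_update(readings: List[SensedObject], extract_biometrics) -> Optional[list]:
    """If at least one object is approaching, trigger biometric extraction; otherwise do nothing."""
    nearby = objects_in_range(readings)
    if not nearby:
        return None
    # Here the processor would convert the sensed event into a control signal
    # and drive the biometric acquisition device (camera / microphone).
    return [extract_biometrics(obj) for obj in nearby]
```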
To facilitate understanding of how the biometric information may be extracted, several specific extraction modes are listed below:
Mode 1: control the robot to capture images, and extract biological features of the at least one object from the captured images to obtain the biometric information of the at least one object.
Mode 2: control the robot to capture voice, and extract biological features of the at least one object from the captured voice to obtain the biometric information of the at least one object.
Mode 3: control the robot to capture both images and voice; extract biological features of the at least one object from the captured images and from the captured voice to obtain the biometric information of the at least one object.
In addition, when Mode 3 is used to extract biometric information, the biometric information obtained from the images and the biometric information obtained from the voice can be further analyzed to associate them with the same object. The subsequent determination of the target interaction object can then comprehensively analyze both the image-derived and the voice-derived biometric information of each object, improving the accuracy of that determination.
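The fusion step in Mode 3 can be sketched as follows: biometric records extracted from images and from voice are associated by a shared object identifier and merged before the target interaction object is determined. The record fields and the association-by-identifier scheme are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class BiometricRecord:
    object_id: int
    # physiological characteristic information
    face: Optional[dict] = None          # facial / eye information from images
    voiceprint: Optional[dict] = None    # voiceprint information from voice
    # behavioral characteristic information
    displacement: Optional[tuple] = None # displacement information from images
    speech_text: Optional[str] = None    # voice content information from voice

def merge_records(image_records: Dict[int, BiometricRecord],
                  voice_records: Dict[int, BiometricRecord]) -> Dict[int, BiometricRecord]:
    """Combine image-derived and voice-derived biometrics belonging to the same object,
    so the later target-selection step can analyse both jointly."""
    merged: Dict[int, BiometricRecord] = {}
    for oid in set(image_records) | set(voice_records):
        img = image_records.get(oid)
        voc = voice_records.get(oid)
        merged[oid] = BiometricRecord(
            object_id=oid,
            face=img.face if img else None,
            displacement=img.displacement if img else None,
            voiceprint=voc.voiceprint if voc else None,
            speech_text=voc.speech_text if voc else None,
        )
    return merged
```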
It should be noted that, in this embodiment, the extracted biometric information specifically includes physiological characteristic information and/or behavior characteristic information.
The physiological characteristic information may be any one or any combination of the recognized object's facial information, eye information, voiceprint information (i.e., information from which the speaker can be identified), and the like; the behavioral characteristic information may be any one or any combination of the recognized object's displacement information, the voice content information of its utterances (i.e., information from which the spoken content can be recognized), and the like.
For example, when extracting the biological features of an object from a captured image, physiological characteristic information such as facial information and/or eye information, and behavioral characteristic information such as displacement information, can generally be extracted.
Likewise, when extracting the biological features of an object from captured voice, physiological characteristic information such as voiceprint information, and behavioral characteristic information such as voice content information, can generally be extracted.
In addition, when the robot is controlled to capture images, the images may be captured by the robot's own image acquisition device (such as a camera), by an external image acquisition device communicatively connected to the robot (such as a surveillance device installed in a shopping mall), or by a combination of the two.
Similarly, when the robot is controlled to capture voice, it may use its own voice acquisition device and/or an external voice acquisition device communicatively connected to it.
It is also worth mentioning that, after an object is determined to have been identified and before the robot is controlled to capture images and/or voice, the robot can first be rotated to face the identified object according to the perceived direction information, and only then be controlled to perform the capture. This ensures that the identified object is present in the captured images and voice, makes the subsequently extracted biometric information more complete, and makes the finally determined target interaction object more accurate.
In addition, the captured images in this embodiment are not limited to still images such as photographs; they may also be frames of a video. No limitation is imposed here.
It should be noted that the above is merely an example; in practical applications, those skilled in the art may make reasonable arrangements using known technical means, as long as the target interaction object can be determined from the at least one identified object according to the extracted biometric information.
In step 102, a target interaction object needing interaction is determined from at least one object according to the biological characteristic information.
In this embodiment, the operation of determining a target interaction object that needs to be interacted from at least one object according to the biometric information may specifically be implemented in the following manner:
First, the at least one object is screened according to the biometric information to determine which objects are objects to be interacted with. For convenience of explanation, this embodiment takes a person as the object to be interacted with.
In practical applications, the objects close to the robot are not necessarily all objects that need interaction; for example, a small animal or another terminal device may approach the robot rather than a human. Therefore, the extracted biometric information can be compared with pre-stored sample information of humans, and non-human objects can be excluded, ensuring the accuracy of subsequent operations.
In addition, when it is determined that the identified objects include multiple persons, the biological features of each person, such as displacement direction (whether the person is moving toward the robot, etc.) and eye information (whether the person is gazing at the robot, etc.), can be further analyzed to determine whether the person is seeking help, and thereby determine the persons who are genuinely seeking help as objects to be interacted with.
Then, one object that meets the requirements is selected from the determined objects to be interacted with as the target interaction object, i.e., the object with which the robot ultimately carries out human-computer interaction.
Specifically, if the number of objects to be interacted with equals 1, that object is directly determined as the target interaction object. If the number is greater than 1, a priority is set for each object to be interacted with according to a preset priority-setting condition, and the object with the highest priority is determined as the target interaction object.
For ease of understanding, the following detailed description is made in conjunction with fig. 2.
As shown in FIG. 2, three objects, A, B, and C, appear within the range recognizable by the robot, and after judgment based on the biometric information, all three meet the interaction condition, i.e., all are objects to be interacted with. In this case, the target interaction object may be determined by priority, for example with the priority set according to each object's position information.
Specifically, as shown in FIG. 2, the position of object to be interacted with A is (x0, y0), that of B is (x1, y1), and that of C is (x2, y2). Using the distance formula d = √((x − x′)² + (y − y′)²), where (x′, y′) is the robot's position, the distances d0, d1, and d2 from objects A, B, and C to the robot can be calculated. If d2 < d0 < d1, then according to the preset priority-setting condition (the closer to the robot, the higher the priority; the farther from the robot, the lower the priority), the priorities are set as follows: object C has the highest priority, object B the lowest, and object A a priority between those of C and B. In this case, object C can be determined as the target interaction object.
In addition, it is worth mentioning that in practical applications multiple objects to be interacted with may be at the same distance from the robot; in that case, priority can be determined by the principle that the object requiring the smallest rotation angle for the robot to turn toward it has the highest priority.
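A minimal sketch of this distance-based priority rule follows, assuming the robot sits at the origin of the coordinate system and that ties are broken by the smallest required rotation angle; the data layout and example coordinates are illustrative only.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str
    x: float
    y: float

ROBOT_POS = (0.0, 0.0)     # assumed robot position
ROBOT_HEADING_DEG = 0.0    # assumed current heading

def distance_to_robot(c: Candidate) -> float:
    # Euclidean distance d = sqrt((x - x')^2 + (y - y')^2)
    return math.hypot(c.x - ROBOT_POS[0], c.y - ROBOT_POS[1])

def rotation_needed(c: Candidate) -> float:
    # Absolute rotation angle the robot would need to face the candidate (tie-break rule)
    bearing = math.degrees(math.atan2(c.y - ROBOT_POS[1], c.x - ROBOT_POS[0]))
    diff = (bearing - ROBOT_HEADING_DEG + 180.0) % 360.0 - 180.0
    return abs(diff)

def pick_target(candidates: List[Candidate]) -> Candidate:
    """Closer objects get higher priority; equal distances fall back to the smaller rotation angle."""
    return min(candidates, key=lambda c: (distance_to_robot(c), rotation_needed(c)))

# Example matching the text: with d2 < d0 < d1, object C is chosen.
objects = [Candidate("A", 1.0, 2.0), Candidate("B", 3.0, 2.0), Candidate("C", 0.5, 1.0)]
print(pick_target(objects).name)  # -> C
```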
It should be noted that the above is only an example, and does not limit the technical solution and the scope of the present application, and in practical applications, those skilled in the art can reasonably set the technical solution and the scope to be protected according to actual needs, and the present disclosure is not limited herein.
In step 103, position information of the target interaction object is acquired.
In step 104, the robot is controlled to move towards the target interaction object based on the position information.
Specifically, after the target interactive object is determined, the robot can be controlled to move towards the target interactive object according to the acquired position information of the target interactive object, so that the robot can actively carry out interactive operation, and the user experience is improved.
Compared with the prior art, the man-machine interaction method provided by the embodiment can enable the robot to only respond to the object needing interaction, thereby effectively avoiding false response operation and greatly improving user experience.
A second embodiment of the present application relates to a human-computer interaction method. This embodiment is a further improvement on the first embodiment. The specific improvement is as follows: in the process of controlling the robot to make a response matching the target interaction object, the identity information of the target interaction object is also acquired, and after the robot moves to the area where the target interaction object is located, it makes a response matched to that object according to the identity information. For convenience of description, this is explained below with reference to FIG. 3 and FIG. 4.
Specifically, this embodiment includes steps 301 to 305, where steps 301, 302, and 304 are substantially the same as steps 101, 102, and 104 in the first embodiment and are not repeated here; the differences are mainly introduced below. Technical details not described in this embodiment can be found in the human-computer interaction method provided in the first embodiment and are not repeated here.
In step 303, the location information and identity information of the target interaction object are obtained.
Taking the case where the target interaction object is a person as an example, the identity information acquired in this embodiment may include any one or any combination of name, gender, age, whether the person is a VIP client, and the like.
It should be noted that the above identity information may be obtained by matching information about the target interaction object, through face recognition technology, against the face data recorded in a face database of users who have transacted business at the venue where the robot is located (e.g., a bank business hall); after a successful match, the recorded identity information of that user can be retrieved directly. If the match is unsuccessful, the gender and approximate age range can be determined by face recognition, and an Internet search can then be used to further complete the identity information of the target interaction object.
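The identity lookup described above can be sketched as follows; the face-embedding representation, the similarity threshold, and the on-site face database schema are all illustrative assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class IdentityRecord:
    name: str
    gender: str
    age: int
    is_vip: bool
    embedding: Sequence[float]  # face embedding stored when the user transacted business

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def lookup_identity(face_embedding: Sequence[float],
                    database: List[IdentityRecord],
                    estimate_from_face: Callable[[Sequence[float]], dict],
                    threshold: float = 0.8) -> dict:
    """Match the target's face against the venue's face database; on failure,
    fall back to estimating gender / age range from the face itself."""
    best = max(database, key=lambda r: cosine_similarity(face_embedding, r.embedding),
               default=None)
    if best and cosine_similarity(face_embedding, best.embedding) >= threshold:
        return {"name": best.name, "gender": best.gender,
                "age": best.age, "is_vip": best.is_vip}
    # Unsuccessful match: keep only what can be inferred, then refine later (e.g. web search).
    return estimate_from_face(face_embedding)
```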
In addition, it is worth mentioning that in practical applications the target interaction object may also be determined in combination with the identity information of the objects to be interacted with; for example, the priority of each object may be set according to a VIP attribute carried in its identity information, and the target interaction object determined by comprehensively considering factors such as distance. For ease of understanding, this is described below with reference to FIG. 4.
Specifically, three objects to be interacted with, A, B, and C, are within the range recognizable by the robot, and the position information and identity information of each are labeled as in FIG. 4, where the distances from A, B, and C to the robot are d0, d1, and d2, respectively, and d2 < d0 < d1.
In this case, the target interaction object may be determined by giving priority to the distance factor and selecting object C; by giving priority to the VIP factor and selecting object A; or by giving priority to the age factor and preferentially selecting the elderly object to be interacted with as the target interaction object.
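One way to combine the distance, VIP, and age factors mentioned above is a weighted score; the weights, the elderly-age threshold, and the scoring function below are illustrative assumptions, not values taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Prospect:
    name: str
    distance_m: float
    is_vip: bool
    age: int

# Illustrative weights: raise W_VIP to favour VIP clients, W_AGE to favour the elderly,
# or W_DIST to favour whoever is closest.
W_DIST, W_VIP, W_AGE = 1.0, 2.0, 0.5

def priority(p: Prospect) -> float:
    closeness = 1.0 / (1.0 + p.distance_m)        # closer -> larger score
    vip_bonus = 1.0 if p.is_vip else 0.0
    elderly_bonus = 1.0 if p.age >= 65 else 0.0   # assumed elderly threshold
    return W_DIST * closeness + W_VIP * vip_bonus + W_AGE * elderly_bonus

def choose(prospects: List[Prospect]) -> Prospect:
    return max(prospects, key=priority)

candidates = [Prospect("A", 2.5, True, 40),
              Prospect("B", 4.0, False, 70),
              Prospect("C", 1.0, False, 30)]
print(choose(candidates).name)  # with these weights the VIP client A is chosen
```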
It should be noted that the above is only an illustrative example and does not limit the technical solution or the protection scope of the present application; in practical applications, those skilled in the art can make reasonable arrangements according to actual needs, and no limitation is imposed here.
In step 305, after moving to the area where the target interactive object is located, a response matching the target interactive object is made according to the identity information.
For example, the target interaction object is C in FIG. 4. After moving to the area where object C is located (for example, a position one meter away from it), the robot may actively make a service inquiry or offer service guidance, for example: "Hello, Mr. Zhang, may I ask what business you would like to handle?"
Further, to improve the user experience, after inquiring of target interaction object C and while waiting for C's answer, the robot may also respond to objects to be interacted with A and B, for example with a voice prompt such as: "There are other guests ahead of you; please wait patiently!"
It should be noted that the above is only an example, and does not limit the technical solution and the scope of the present application, and in practical applications, those skilled in the art can reasonably set the technical solution and the scope to be protected according to actual needs, and the present disclosure is not limited herein.
Compared with the prior art, the human-computer interaction method provided in this embodiment further acquires the identity information of the target interaction object when acquiring its position information, so that after the robot moves to the area where the target interaction object is located according to the position information, it can make a response matched to that object according to the identity information, further improving the user experience.
A third embodiment of the present application relates to a human-computer interaction method. This embodiment is a further improvement on the first or second embodiment. The specific improvement is as follows: after the robot is controlled to make a response matching the target interaction object, before re-determining the target interaction object that needs interaction, it must first be determined whether a new object is approaching the robot. The specific flow is shown in FIG. 5.
Specifically, this embodiment includes steps 501 to 508, where steps 501 to 504 are substantially the same as steps 101 to 104 in the first embodiment and are not repeated here; the differences are mainly introduced below. Technical details not described in this embodiment can be found in the human-computer interaction method provided in the first or second embodiment and are not repeated here.
In step 505, it is determined whether a new object is approaching the robot. If so, proceed to step 506; otherwise, proceed directly to step 507 and reselect one object from the objects to be interacted with that remained from the previous human-computer interaction as the target interaction object.
Specifically, in this embodiment, whether a new object is approaching the robot may be determined in the manner described in the first embodiment: if a new object is detected approaching within a preset range (e.g., 5 meters) centered on the robot's current position, it is determined that a new object is approaching. The detailed detection operation is not repeated here.
In addition, in this embodiment, the number of new objects approaching the robot may be one or more than one; no limitation is imposed here.
In step 506, biometric information of the new subject is extracted.
In step 507, the target interaction object that needs to be interacted is determined again.
Specifically, the re-determined target interaction object in this embodiment is selected from the new object(s) and the objects other than the target interaction object of the previous interaction operation.
For ease of understanding, the following detailed description is made:
In practical applications, especially in public places with heavy foot traffic, multiple objects that need to interact with the robot may exist at the same time (i.e., more than one identified object is determined, according to its biometric information, to be an object to be interacted with). However, during human-computer interaction the robot can respond to only one object to be interacted with at a time (i.e., a target interaction object must be selected for interaction), and can interact with the other objects only after one interaction is completed. Moreover, after one interaction is completed, besides the previously determined objects still waiting for the robot, new objects to be interacted with may have appeared. In this case, the target interaction object must be re-determined: one object is reselected, from the newly determined objects to be interacted with and the objects remaining from the previous human-computer interaction, to serve as the target interaction object.
In addition, it should be noted that the manner of re-determining the target interaction object in this embodiment is substantially the same as in the first embodiment: the identified objects are first determined to be objects to be interacted with according to their biometric information, and the final target interaction object is then selected from them. Specific implementation details are not repeated here.
In addition, in this embodiment the target interaction object may still be selected according to the priority of each object to be interacted with; of course, the new target interaction object may also be determined in other ways, and no limitation is imposed here.
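A minimal sketch of the re-determination flow of steps 505 to 508 follows, under the assumption that candidate screening and priority selection reuse helpers like those of the first embodiment; the function and parameter names are illustrative.

```python
from typing import Callable, List, Optional, Set

def redetermine_target(remaining: List[int],
                       new_objects: List[int],
                       last_target: int,
                       is_interactable: Callable[[int], bool],
                       priority: Callable[[int], float]) -> Optional[int]:
    """After one interaction finishes: add any newly sensed objects that qualify,
    drop the object just served, and pick the highest-priority candidate again."""
    candidates: Set[int] = set(remaining)
    for obj in new_objects:            # steps 505/506: new objects, if any, are screened
        if is_interactable(obj):       # using their freshly extracted biometric information
            candidates.add(obj)
    candidates.discard(last_target)    # the previous target has already been served
    if not candidates:
        return None
    return max(candidates, key=priority)  # step 507: reselect by priority
```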
In step 508, the robot is controlled to make a response that matches the re-determined target interaction object.
Specifically, controlling the robot to make a response matching the re-determined target interaction object may involve: moving toward that object, and after reaching the area where it is located, actively making a service inquiry or offering service guidance. The specific response may be set according to the related information of the re-determined target interaction object, and no limitation is imposed here.
It should be noted that the above is only an example, and does not limit the technical solution and the scope of the present application, and in practical applications, those skilled in the art can reasonably set the technical solution and the scope to be protected according to actual needs, and the present disclosure is not limited herein.
Compared with the prior art, the human-computer interaction method provided in this embodiment monitors whether a new object approaches the robot after one human-computer interaction is completed. When a new object is determined to be approaching, its biometric information is extracted and it is determined whether it is an object to be interacted with. If it is, one object is reselected as the target interaction object from the newly determined objects to be interacted with and the objects remaining from the previous human-computer interaction, and human-computer interaction is then carried out. If the new object is not an object to be interacted with, one object is reselected directly from the objects remaining from the previous human-computer interaction as the target interaction object, and human-computer interaction is then carried out.
From the above description it can be seen that the human-computer interaction method provided in this embodiment enables the robot to dynamically update its perception of object states while it is working, so that it can accurately make a response appropriate to the current scene, reduce erroneous operations, and further improve the user experience.
A fourth embodiment of the present application relates to a human-computer interaction device, which is applied to a robot and has a specific structure shown in fig. 6.
As shown in fig. 6, the human-computer interaction device includes an extraction module 601, a determination module 602, and a control module 603.
The extracting module 601 is configured to extract biometric information of the identified at least one object.
A determining module 602, configured to determine, according to the biometric information, a target interaction object that needs to be interacted from the at least one object.
And the control module 603 is used for controlling the robot to make a response matched with the target interaction object.
Specifically, in this embodiment, the biometric information of the identified at least one object extracted by the extraction module 601 may specifically be any one of physiological characteristic information and behavior characteristic information, or a combination of both of the physiological characteristic information and the behavior characteristic information.
In addition, it should be noted that, in this embodiment, the physiological characteristic information extracted by the extraction module 601 may specifically be any one or any combination of the object's facial information, eye information, voiceprint information, and the like. The behavioral characteristic information extracted by the extraction module 601 may specifically be any one or any combination of the object's displacement information, voice content information, and the like.
When the determining module 602 determines the target interaction object from the at least one object according to the above biometric information, it may specifically proceed as follows: first, determine which identified objects are objects to be interacted with according to the biometric information, for example by analyzing an object's eye-gaze state based on its eye information and its displacement information to judge whether it is currently seeking help, and thus whether it is an object to be interacted with; then, after the objects to be interacted with are determined, select one that meets the requirements as the target interaction object (the object with which interaction is finally carried out).
In addition, in this embodiment, the control module 603 controls the robot to make a response matching with the target interaction object, specifically, may control the robot to move toward the target interaction object.
Further, after the robot moves to the area where the target interaction object is located, the robot may be controlled to make a response matched to that object according to its identity information, such as actively making a service inquiry or offering service guidance, for example: "Hello, may I ask what business you would like to handle?"
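Purely as an illustration of how the extraction, determination, and control modules described in this embodiment might be composed in software (the class and method names are assumptions, not a published API), a minimal sketch follows.

```python
from typing import List, Protocol

class ExtractionModule(Protocol):
    def extract(self, object_ids: List[int]) -> dict: ...

class DeterminationModule(Protocol):
    def determine_target(self, biometrics: dict) -> int: ...

class ControlModule(Protocol):
    def respond(self, target_id: int) -> None: ...

class HumanComputerInteractionDevice:
    """Wires the three modules together: extract -> determine -> respond."""
    def __init__(self, extractor: ExtractionModule,
                 determiner: DeterminationModule,
                 controller: ControlModule) -> None:
        self.extractor = extractor
        self.determiner = determiner
        self.controller = controller

    def run_once(self, identified_objects: List[int]) -> None:
        biometrics = self.extractor.extract(identified_objects)   # extraction module 601
        target = self.determiner.determine_target(biometrics)     # determining module 602
        self.controller.respond(target)                           # control module 603
```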
It should be noted that the above is only an example, and does not limit the technical solution and the scope of the present application, and in practical applications, those skilled in the art can reasonably set the technical solution and the scope to be protected according to actual needs, and the present disclosure is not limited herein.
In addition, technical details that are not described in detail in this embodiment may be referred to a human-computer interaction method provided in any embodiment of the present application, and are not described herein again.
From the above description it can be seen that, in the human-computer interaction device provided in this embodiment, the extraction module extracts the biometric information of at least one identified object, the determining module determines from the at least one object, according to that information, the target interaction object that needs interaction, and the control module then controls the robot to make a response matched to the target interaction object. Through the cooperation of these modules, a robot equipped with this human-computer interaction device responds only to objects that need interaction, which effectively avoids erroneous responses and greatly improves the user experience.
The above-described embodiments of the apparatus are merely illustrative, and do not limit the scope of the present application, and in practical applications, a person skilled in the art may select some or all of the modules to implement the purpose of the embodiments according to practical needs, and the present invention is not limited herein.
A fifth embodiment of the present application relates to a robot, and the specific structure is shown in fig. 7.
The robot may be an intelligent machine device located in a public place such as a bank business hall, a large shopping mall, or an airport. Internally, it includes one or more processors 701 and a memory 702; one processor 701 is taken as an example in FIG. 7.
In this embodiment, all the functional modules of the human-computer interaction device described in the above embodiments are disposed on the processor 701. The processor 701 and the memory 702 may be connected by a bus or by other means; FIG. 7 takes a bus connection as an example.
The memory 702 is a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the human-computer interaction method in any method embodiment of the present application. The processor 701 performs various functional applications and data processing by running the software programs, instructions, and modules stored in the memory 702, thereby implementing the human-computer interaction method referred to in any method embodiment of the present application.
The memory 702 may include a program storage area and a data storage area. The program storage area may store an operating system and the application program(s) required for at least one function; the data storage area may hold a history database storing the priority-setting conditions and the like. In addition, the memory 702 may include a high-speed random access memory (RAM) and may also include other types of memory. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, connected to the terminal device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In practical applications, the memory 702 may store instructions executed by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the human-computer interaction method according to any embodiment of the present application, and control each functional module in the human-computer interaction device to complete the positioning operation in the human-computer interaction method.
A sixth embodiment of the present application relates to a computer-readable storage medium having stored thereon computer instructions for enabling a computer to execute the human-computer interaction method described in any of the method embodiments of the present application.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (10)

1. A human-computer interaction method is applied to a robot and comprises the following steps:
extracting biometric information of the identified at least one object; wherein the biometric information comprises physiological characteristic information and/or behavioral characteristic information;
determining a target interaction object needing to be interacted from the at least one object according to the biological characteristic information;
controlling the robot to respond in a manner that matches the target interaction object.
2. The human-computer interaction method according to claim 1, wherein the extracting of the biometric information of the identified at least one object specifically comprises:
and detecting that at least one object approaches the robot within a preset range taking the position of the robot as the center of a circle, and extracting the biological characteristic information of the at least one object.
3. The human-computer interaction method according to claim 1 or 2, wherein the extracting of the biometric information of the identified at least one object specifically comprises:
controlling the robot to collect images, and extracting biological characteristics of the at least one object from the collected images to obtain physiological characteristic information and/or behavior characteristic information of the at least one object; wherein the physiological characteristic information comprises facial information and/or eye information, and the behavior characteristic information comprises displacement information;
and/or controlling the robot to perform voice collection, and extracting biological characteristics of the at least one object from the collected voice to obtain physiological characteristic information and/or behavior characteristic information of the at least one object; wherein the physiological characteristic information comprises voiceprint information, and the behavior characteristic information comprises voice content information.
4. The human-computer interaction method according to any one of claims 1 to 3, wherein the determining, according to the biometric information, a target interaction object to be interacted from the at least one object specifically comprises:
determining the at least one object as an object to be interacted according to the biological characteristic information;
if the number of the objects to be interacted is equal to 1, determining the objects to be interacted as the target interaction objects;
if the number of the objects to be interacted is larger than 1, setting a priority for each object to be interacted according to a preset priority setting condition, and determining the object to be interacted with the highest priority as the target interactive object.
5. The human-computer interaction method according to any one of claims 1 to 4, wherein the controlling the robot to make a response matching the target interaction object specifically comprises:
acquiring the position information of the target interactive object;
and controlling the robot to move towards the target interaction object according to the position information.
6. The human-computer interaction method of claim 5, wherein the controlling the robot to respond to the matching of the target interaction object specifically comprises:
acquiring identity information of the target interactive object;
and after the robot moves to the area where the target interaction object is located, making a response matched with the target interaction object according to the identity information.
7. The human-computer interaction method of claim 6, wherein after controlling the robot to make a response matching the target interaction object, the human-computer interaction method further comprises:
determining that a new object is proximate to the robot;
extracting the biological characteristic information of the new object, and re-determining a target interactive object needing interaction from the new object and objects except the target interactive object in the at least one object;
controlling the robot to make a response matching the re-determined target interaction object.
8. A human-computer interaction device is applied to a robot, and comprises: the device comprises an extraction module, a determination module and a control module;
the extraction module is used for extracting the biological characteristic information of the identified at least one object; wherein the biometric information comprises physiological characteristic information and/or behavioral characteristic information;
the determining module is used for determining a target interactive object needing to be interacted from the at least one object according to the biological characteristic information;
and the control module is used for controlling the robot to make a response matched with the target interaction object.
9. A robot, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the human-computer interaction method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the human-computer interaction method of any one of claims 1 to 7.
CN201880001295.0A 2018-02-05 2018-02-05 Human-computer interaction method and device, robot and computer readable storage medium Pending CN108780361A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075263 WO2019148491A1 (en) 2018-02-05 2018-02-05 Human-computer interaction method and device, robot, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN108780361A true CN108780361A (en) 2018-11-09

Family

ID=64029123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880001295.0A Pending CN108780361A (en) 2018-02-05 2018-02-05 Human-computer interaction method and device, robot and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108780361A (en)
WO (1) WO2019148491A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109062482A (en) * 2018-07-26 2018-12-21 百度在线网络技术(北京)有限公司 Man-machine interaction control method, device, service equipment and storage medium
CN110085225A (en) * 2019-04-24 2019-08-02 北京百度网讯科技有限公司 Voice interactive method, device, intelligent robot and computer readable storage medium
CN110228073A (en) * 2019-06-26 2019-09-13 郑州中业科技股份有限公司 Active response formula intelligent robot
CN110465947A (en) * 2019-08-20 2019-11-19 苏州博众机器人有限公司 Multi-modal fusion man-machine interaction method, device, storage medium, terminal and system
CN110689889A (en) * 2019-10-11 2020-01-14 深圳追一科技有限公司 Man-machine interaction method and device, electronic equipment and storage medium
CN112764950A (en) * 2021-01-27 2021-05-07 上海淇玥信息技术有限公司 Event interaction method and device based on combined behaviors and electronic equipment
CN113486765A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN113724454A (en) * 2021-08-25 2021-11-30 上海擎朗智能科技有限公司 Interaction method of mobile equipment, device and storage medium
CN114715175A (en) * 2022-05-06 2022-07-08 Oppo广东移动通信有限公司 Target object determination method and device, electronic equipment and storage medium
CN115476366A (en) * 2021-06-15 2022-12-16 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot type robot
CN117251048A (en) * 2022-12-06 2023-12-19 北京小米移动软件有限公司 Control method and device of terminal equipment, terminal equipment and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716634A (en) * 2019-08-28 2020-01-21 北京市商汤科技开发有限公司 Interaction method, device, equipment and display equipment
CN114633267B (en) * 2022-03-17 2024-06-25 上海擎朗智能科技有限公司 Interactive content determination method, mobile device, device and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011143523A2 (en) * 2010-05-13 2011-11-17 Alexander Poltorak Electronic personal interactive device
CN104936091A (en) * 2015-05-14 2015-09-23 科大讯飞股份有限公司 Intelligent interaction method and system based on circle microphone array
CN105701447A (en) * 2015-12-30 2016-06-22 上海智臻智能网络科技股份有限公司 Guest-greeting robot
CN105843118A (en) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 Robot interacting method and robot system
CN106113038A (en) * 2016-07-08 2016-11-16 纳恩博(北京)科技有限公司 Mode switching method based on robot and device
CN106203050A (en) * 2016-07-22 2016-12-07 北京百度网讯科技有限公司 The exchange method of intelligent robot and device
CN106873773A (en) * 2017-01-09 2017-06-20 北京奇虎科技有限公司 Robot interactive control method, server and robot
CN107450729A (en) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 Robot interactive method and device

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109062482A (en) * 2018-07-26 2018-12-21 百度在线网络技术(北京)有限公司 Man-machine interaction control method, device, service equipment and storage medium
CN110085225A (en) * 2019-04-24 2019-08-02 北京百度网讯科技有限公司 Voice interactive method, device, intelligent robot and computer readable storage medium
CN110085225B (en) * 2019-04-24 2024-01-02 北京百度网讯科技有限公司 Voice interaction method and device, intelligent robot and computer readable storage medium
CN110228073A (en) * 2019-06-26 2019-09-13 郑州中业科技股份有限公司 Active response formula intelligent robot
CN110465947A (en) * 2019-08-20 2019-11-19 苏州博众机器人有限公司 Multi-modal fusion man-machine interaction method, device, storage medium, terminal and system
CN110465947B (en) * 2019-08-20 2021-07-02 苏州博众机器人有限公司 Multi-mode fusion man-machine interaction method, device, storage medium, terminal and system
CN110689889A (en) * 2019-10-11 2020-01-14 深圳追一科技有限公司 Man-machine interaction method and device, electronic equipment and storage medium
CN110689889B (en) * 2019-10-11 2021-08-17 深圳追一科技有限公司 Man-machine interaction method and device, electronic equipment and storage medium
CN112764950B (en) * 2021-01-27 2023-05-26 上海淇玥信息技术有限公司 Event interaction method and device based on combined behaviors and electronic equipment
CN112764950A (en) * 2021-01-27 2021-05-07 上海淇玥信息技术有限公司 Event interaction method and device based on combined behaviors and electronic equipment
CN115476366A (en) * 2021-06-15 2022-12-16 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot type robot
CN115476366B (en) * 2021-06-15 2024-01-09 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot robot
CN113486765A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN113486765B (en) * 2021-06-30 2023-06-16 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN113724454A (en) * 2021-08-25 2021-11-30 上海擎朗智能科技有限公司 Interaction method of mobile equipment, device and storage medium
CN114715175A (en) * 2022-05-06 2022-07-08 Oppo广东移动通信有限公司 Target object determination method and device, electronic equipment and storage medium
CN117251048A (en) * 2022-12-06 2023-12-19 北京小米移动软件有限公司 Control method and device of terminal equipment, terminal equipment and storage medium

Also Published As

Publication number Publication date
WO2019148491A1 (en) 2019-08-08

Similar Documents

Publication Publication Date Title
CN108780361A (en) Human-computer interaction method and device, robot and computer readable storage medium
CN110741433A (en) Intercom communication using multiple computing devices
CN110875060A (en) Voice signal processing method, device, system, equipment and storage medium
US11145299B2 (en) Managing voice interface devices
CN110516083B (en) Album management method, storage medium and electronic device
CN112016367A (en) Emotion recognition system and method and electronic equipment
CN109447232A (en) Robot active inquiry method, apparatus, electronic equipment and storage medium
CN109839614B (en) Positioning system and method of fixed acquisition equipment
WO2020043040A1 (en) Speech recognition method and device
CN104933791A (en) Intelligent security control method and equipment
CN107833328B (en) Access control verification method and device based on face recognition and computing equipment
CN107398900A (en) Active system for tracking after robot identification human body
CN108074571A (en) Sound control method, system and the storage medium of augmented reality equipment
CN108805035A (en) Interactive teaching and learning method based on gesture identification and device
CN102890777A (en) Computer system capable of identifying facial expressions
CN109147379A (en) Garage parking intelligently guiding terminal and its control method
CN110134233B (en) Intelligent sound box awakening method based on face recognition and terminal
CN110084187B (en) Position identification method, device, equipment and storage medium based on computer vision
CN113093907B (en) Man-machine interaction method, system, equipment and storage medium
CN114881680A (en) Robot, robot interaction method, and storage medium
CN103984415B (en) A kind of information processing method and electronic equipment
CN114067403A (en) Old people self-help money depositing and withdrawing obstacle recognition method and device
JP5844375B2 (en) Object search system and object search method
CN109648573B (en) Robot session switching method and device and computing equipment
CN112669030A (en) Mobile payment method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210207

Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

TA01 Transfer of patent application right
CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

CB02 Change of applicant information