Detailed Description
To make the objectives, technical solutions, and advantages of the present application more apparent, some embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
A first embodiment of the present application relates to a human-computer interaction method, which is applied to a robot, and a specific flow is shown in fig. 1.
It should be noted that the robot in this embodiment is a generic term for automatically controlled machines, and includes all machines that simulate human behavior or thought, as well as machines that simulate other creatures (such as robot dogs, robot cats, etc.).
In step 101, biometric information of at least one identified object is extracted.
Specifically, in this embodiment, the operation of extracting the biometric information of the at least one identified object may be triggered when at least one object is detected approaching within a preset range (for example, 5 meters) centered on the robot's position. With this detection method, the robot can sense objects within a full 360-degree range around its position.
It is worth mentioning that, in this embodiment, the identification of an object may be determined by the robot itself, for example by means of a proximity sensor installed on the robot. After the robot is placed in a public place and started, the proximity sensor senses whether an object is approaching within the 5-meter range around the robot. If the movement information or presence information of an object is sensed, the sensed information is converted into an electrical signal, and the robot's processor then controls the robot's biometric acquisition device to extract the biometric information of the at least one identified object.
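As a minimal illustrative sketch of this trigger logic (in Python, with a hypothetical ProximitySensor interface and the 5-meter range above as the preset radius; the names and signatures are assumptions, not part of the application):

```python
import time

DETECTION_RADIUS_M = 5.0  # the preset range from the example above

class ProximitySensor:
    """Hypothetical interface for the robot's 360-degree proximity sensing."""
    def read_detections(self):
        # A real driver would return a list of (distance_m, bearing_deg)
        # tuples for sensed objects; this stub senses nothing.
        return []

def wait_for_approach(sensor, radius=DETECTION_RADIUS_M, poll_s=0.1):
    """Block until at least one object is sensed inside the preset radius,
    then return the detections so biometric capture can be triggered."""
    while True:
        nearby = [d for d in sensor.read_detections() if d[0] <= radius]
        if nearby:
            return nearby  # the processor would now start biometric capture
        time.sleep(poll_s)
```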
To facilitate understanding of specific implementations of extracting the biometric information, several specific extraction modes are listed as follows:
Mode 1: the robot is controlled to collect images, and the biological features of at least one object are extracted from the collected images to obtain the biometric information of the at least one object.
Mode 2: the robot is controlled to collect voice, and the biological features of at least one object are extracted from the collected voice to obtain the biometric information of the at least one object.
Mode 3: the robot is controlled to perform both image collection and voice collection; the biological features of at least one object are extracted from the collected images, and the biological features of the at least one object are likewise extracted from the collected voice, to obtain the biometric information of the at least one object.
In addition, when the third mode is adopted, the biometric information of an object obtained from the images and that obtained from the voice can be further analyzed and processed to determine which pieces of biometric information belong to the same object. The subsequent operation of determining the target interaction object can then be performed as a comprehensive analysis of that object's image-derived and voice-derived biometric information, improving the accuracy with which the target interaction object is determined.
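A minimal sketch of this cross-modal merging, assuming a caller-supplied association test same_object (for example, lip movement synchronized with speech, or sound-source direction matching a face); all names here are illustrative rather than prescribed by the application:

```python
from dataclasses import dataclass, field

@dataclass
class BiometricInfo:
    """Biometric information for one identified object (field names illustrative)."""
    object_id: int
    face: dict = field(default_factory=dict)        # physiological: facial/eye info from images
    voiceprint: dict = field(default_factory=dict)  # physiological: voiceprint from audio
    displacement: tuple = (0.0, 0.0)                # behavioral: movement seen in images
    speech_text: str = ""                           # behavioral: recognized speech content

def merge_modalities(image_features, voice_features, same_object):
    """Attach each object's voice-derived features to its image-derived ones
    (the third mode above), so later steps can analyze both together."""
    merged = []
    for img in image_features:
        voc = next((v for v in voice_features if same_object(img, v)), None)
        info = BiometricInfo(object_id=img["id"],
                             face=img.get("face", {}),
                             displacement=img.get("displacement", (0.0, 0.0)))
        if voc is not None:
            info.voiceprint = voc.get("voiceprint", {})
            info.speech_text = voc.get("text", "")
        merged.append(info)
    return merged
```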
It should be noted that, in this embodiment, the extracted biometric information specifically includes physiological characteristic information and/or behavior characteristic information.
The physiological characteristic information may be any one or any combination of the identified object's facial information, eye information, voiceprint information (specifically, information from which the speaker's voice can be analyzed), and the like. The behavior characteristic information may be any one or any combination of the identified object's displacement information, voice content information (specifically, information from which the spoken content can be recognized), and the like.
For example, when the biological features of at least one object are extracted from an acquired image, physiological characteristic information such as the object's facial information and/or eye information, and behavior characteristic information such as displacement information, may generally be extracted.
Likewise, when the biological features of at least one object are extracted from the acquired voice, physiological characteristic information such as the object's voiceprint information, and behavior characteristic information such as voice content information, may generally be extracted.
In addition, controlling the robot to acquire images may specifically mean that the robot acquires images with its own image acquisition device, such as a camera; that images are acquired by an external image acquisition device in communication connection with the robot, such as a monitoring device installed in a shopping mall; or that the two modes are used in combination.
Similarly, when the robot is controlled to perform voice collection, the robot may use its own voice collection device and/or an external voice collection device in communication connection with the robot.
In addition, it is worth mentioning that after an object is determined to be recognized and before the robot is controlled to perform image and/or voice collection, the robot can first be controlled to rotate toward the recognized object according to the sensed direction information of the object, and only then perform the collection operation. This ensures that the recognized object actually appears in the collected images and voice, so that the subsequently extracted biometric information is more complete and the finally determined target interaction object is more accurate.
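A short sketch of the required rotation, assuming a planar coordinate system and headings measured in degrees (both assumptions for illustration):

```python
import math

def bearing_to_object(robot_xy, robot_heading_deg, object_xy):
    """Signed rotation (degrees, -180..180) the robot must turn so its camera
    and microphone face the sensed object before capture begins."""
    dx = object_xy[0] - robot_xy[0]
    dy = object_xy[1] - robot_xy[1]
    target = math.degrees(math.atan2(dy, dx))
    # wrap the difference into (-180, 180] so the robot takes the short way round
    turn = (target - robot_heading_deg + 180.0) % 360.0 - 180.0
    return turn

# Example: robot at origin facing +x (0 deg), object at (1, 1) -> turn 45 deg
print(bearing_to_object((0.0, 0.0), 0.0, (1.0, 1.0)))  # 45.0
```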
In addition, the acquired image in this embodiment is not limited to still images such as photographs; it may also be image information from a video, which is not limited here.
It should be noted that the above are merely examples; in practical applications, those skilled in the art may make reasonable arrangements according to the above, as long as the target interaction object can be determined from the at least one identified object according to the extracted biometric information.
In step 102, a target interaction object that needs interaction is determined from the at least one object according to the biometric information.
In this embodiment, the operation of determining a target interaction object that needs to be interacted from at least one object according to the biometric information may specifically be implemented in the following manner:
First, at least one object is determined as an object to be interacted with according to the biometric information. For convenience of explanation, this embodiment takes the case where the object to be interacted with is a person.
Specifically, in practical applications the objects close to the robot are not necessarily all objects that need interaction; the approaching object may be, for example, a small animal or another terminal device rather than a person. Therefore, the extracted biometric information can be compared with pre-stored human sample information to exclude non-human objects, ensuring the accuracy of subsequent operations.
In addition, when it is determined that the identified objects include several persons, the persons who are genuinely seeking help can be determined as the objects to be interacted with by analyzing each person's biological features, such as the displacement direction (whether the person is moving toward the robot) and the eye information (whether the person is watching the robot).
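The following sketch combines both checks — a human/non-human test against pre-stored samples and a simple help-seeking heuristic; human_score and gaze_on_robot are assumed outputs of upstream recognition, not interfaces defined by the application:

```python
def moving_toward(position, displacement, robot_xy):
    """True if the object's displacement vector points toward the robot."""
    to_robot = (robot_xy[0] - position[0], robot_xy[1] - position[1])
    dot = displacement[0] * to_robot[0] + displacement[1] * to_robot[1]
    return dot > 0.0

def is_object_to_interact(position, displacement, human_score, gaze_on_robot,
                          robot_xy=(0.0, 0.0), human_threshold=0.8):
    """Decide whether an identified object is an object to be interacted with.
    human_score (similarity to pre-stored human samples, 0..1) and
    gaze_on_robot (result of eye-information analysis) are assumed to be
    produced by upstream recognition steps."""
    if human_score < human_threshold:   # exclude animals, other devices, etc.
        return False
    # moving toward the robot or watching it is taken as seeking help
    return moving_toward(position, displacement, robot_xy) or gaze_on_robot

# Example: a person 2 m away, stepping toward the robot while looking at it
print(is_object_to_interact((2.0, 0.0), (-0.5, 0.0), human_score=0.95,
                            gaze_on_robot=True))  # True
```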
Then, one object to be interacted with that meets the requirements is selected from the determined objects to be interacted with as the target interaction object, i.e., the object the robot finally selects for human-computer interaction.
Specifically, if the number of objects to be interacted with equals 1, that object is directly determined as the target interaction object. If the number is greater than 1, a priority is set for each object to be interacted with according to a preset priority setting condition, and the object with the highest priority is determined as the target interaction object.
For ease of understanding, the following detailed description is made in conjunction with fig. 2.
As shown in fig. 2, three objects, A, B, and C, appear within the range that the robot can recognize. After judgment according to the biometric information, all three objects meet the interaction condition, i.e., all of them are objects to be interacted with. In this case, the target interaction object may be determined by priority level, for example with the priority set according to the position information of each object to be interacted with.
Specifically, as shown in fig. 2, the obtained position information of object A to be interacted with is (x0, y0), that of object B is (x1, y1), and that of object C is (x2, y2). With the robot's position denoted (xr, yr), the distances d0, d1, and d2 from objects A, B, and C to the robot can be calculated with the distance formula di = √((xi − xr)² + (yi − yr)²). If d2 < d0 < d1, priorities are set for objects A, B, and C according to the preset priority setting condition (the closer to the robot, the higher the priority; the farther from the robot, the lower the priority). The resulting priorities are: object C (highest priority), object B (lowest priority), and object A (priority between C and B). At this time, object C can be determined as the target interaction object.
In addition, it is worth mentioning that in practical applications several objects to be interacted with may be at the same distance from the robot. In that case, priority may be determined by the principle that the object requiring the smallest rotation angle for the robot to turn toward it ranks first.
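A sketch of this selection rule — distance first, with the rotation angle as the tie-breaker (the coordinate frame and heading convention are assumptions for illustration):

```python
import math

def pick_target(candidates, robot_xy, robot_heading_deg):
    """Choose the target interaction object among objects to be interacted
    with: smallest distance wins; equal distances are broken by the smallest
    rotation angle the robot would need (per the note above).
    candidates maps a label to an (x, y) position."""
    def keys(item):
        label, (x, y) = item
        d = math.hypot(x - robot_xy[0], y - robot_xy[1])
        bearing = math.degrees(math.atan2(y - robot_xy[1], x - robot_xy[0]))
        turn = abs((bearing - robot_heading_deg + 180.0) % 360.0 - 180.0)
        return (round(d, 6), turn)  # rounding lets near-equal distances tie
    return min(candidates.items(), key=keys)[0]

# Figure 2 scenario (positions illustrative): C is closest, so C is chosen.
objs = {"A": (2.0, 2.0), "B": (4.0, 1.0), "C": (1.0, 0.5)}
print(pick_target(objs, robot_xy=(0.0, 0.0), robot_heading_deg=0.0))  # "C"
```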
It should be noted that the above is only an example and does not limit the technical solution or the protection scope of the present application; in practical applications, those skilled in the art may make reasonable arrangements according to actual needs, which is not limited here.
In step 103, position information of the target interaction object is acquired.
In step 104, the robot is controlled to move towards the target interaction object based on the position information.
Specifically, after the target interaction object is determined, the robot can be controlled to move toward it according to the acquired position information, so that the robot actively initiates the interaction, improving the user experience.
Compared with the prior art, the human-computer interaction method provided by this embodiment enables the robot to respond only to objects that need interaction, effectively avoiding false responses and greatly improving the user experience.
A second embodiment of the present application relates to a human-computer interaction method. This embodiment is a further improvement on the first embodiment, the specific improvement being as follows: in the process of controlling the robot to make a response matching the target interaction object, the identity information of the target interaction object is also acquired, and after the robot moves to the area where the target interaction object is located, a response matching that object is made according to the identity information. For convenience of description, the following description is made with reference to fig. 3 and 4.
Specifically, this embodiment includes steps 301 to 305, where steps 301, 302, and 304 are substantially the same as steps 101, 102, and 104 in the first embodiment and are not repeated here; the differences are mainly introduced below. For technical details not described in this embodiment, reference may be made to the human-computer interaction method provided in the first embodiment.
In step 303, the position information and identity information of the target interaction object are acquired.
Taking the case where the target interaction object is a person as an example, the identity information acquired in this embodiment may include any one or any combination of name, gender, age, whether the person is a VIP client, and the like.
It should be noted that the above identity information may specifically be obtained by matching information of the target interaction object, through face recognition technology, against the face data stored in a face database of customers who have transacted business at the venue where the robot is located (e.g., a bank's business hall); after a successful match, the recorded identity information of that customer can be obtained directly. If the matching is unsuccessful, the gender and approximate age range can be determined by face recognition technology, and the identity information of the target interaction object can then be further completed through an Internet search.
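A sketch of this lookup-with-fallback flow; the embedding distance, threshold, and attribute estimators are illustrative assumptions rather than details fixed by the application:

```python
def estimate_gender(face_embedding):
    """Stub attribute estimator; a real model would be plugged in here."""
    return "unknown"

def estimate_age_range(face_embedding):
    """Stub attribute estimator for an approximate age range."""
    return (20, 40)

def resolve_identity(face_embedding, face_db, match_threshold=0.6):
    """Match the target's face against a venue database of recorded customers;
    on failure, fall back to attributes estimable from the face alone."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best = min(face_db, key=lambda rec: dist(face_embedding, rec["embedding"]),
               default=None)
    if best is not None and dist(face_embedding, best["embedding"]) <= match_threshold:
        return best["identity"]  # recorded name, gender, age, VIP status, ...
    return {"name": None,
            "gender": estimate_gender(face_embedding),
            "age_range": estimate_age_range(face_embedding),
            "vip": False}
```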
In addition, it is worth mentioning that in practical applications the target interaction object may be determined in combination with the identity information of the objects to be interacted with. For example, the priority of each object to be interacted with may be set according to the VIP parameter carried in its identity information, with the target interaction object determined by comprehensively weighing factors such as distance. For ease of understanding, this is described below with reference to fig. 4.
Specifically, there are three objects to be interacted with, A, B, and C, within the range that the robot can recognize, and the position information and identity information of each object are labeled as in fig. 4, where the distances from objects A, B, and C to the robot are d0, d1, and d2 respectively, with d2 < d0 < d1.
In this case, the target interaction object may be determined by prioritizing the distance factor and selecting object C; by prioritizing the VIP factor and selecting object A; or by prioritizing the age factor and selecting the eldest object to be interacted with as the target interaction object.
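One way to weigh these factors together is a weighted score; the weights and the example values below are illustrative assumptions, not values given by the application:

```python
def priority_score(obj, w_distance=1.0, w_vip=2.0, w_age=0.5):
    """Blend the factors named above into one score (higher = served first).
    A deployment would tune the weights to its policy: distance-first,
    VIP-first, or elderly-first."""
    score = 0.0
    score -= w_distance * obj["distance_m"]                    # nearer is better
    score += w_vip * (1.0 if obj["identity"].get("vip") else 0.0)
    score += w_age * (obj["identity"].get("age", 0) / 100.0)   # favor the elderly
    return score

# Figure 4 scenario (values illustrative): A is a VIP, C is nearest, B is eldest.
objs = [
    {"label": "A", "distance_m": 3.0, "identity": {"vip": True,  "age": 30}},
    {"label": "B", "distance_m": 4.0, "identity": {"vip": False, "age": 70}},
    {"label": "C", "distance_m": 1.0, "identity": {"vip": False, "age": 25}},
]
target = max(objs, key=priority_score)
print(target["label"])  # "A": with these weights, VIP status outweighs distance
```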
It should be noted that the above is only an illustrative description and does not limit the technical solution or the protection scope of the present application; in practical applications, those skilled in the art may make reasonable arrangements according to actual needs, which is not limited here.
In step 305, after moving to the area where the target interactive object is located, a response matching the target interactive object is made according to the identity information.
For example, if the target interaction object is C in fig. 4, then after moving to the area where object C is located (for example, a position one meter away from it), the robot may actively perform a service inquiry or service guidance, for example: "Hello, Mr. Zhang, may I ask what service you would like to handle?"
Further, to improve the user experience, after inquiring of target interaction object C and while waiting for C to answer, the robot can also respond to objects A and B, for example with the voice prompt: "There are guests ahead of you; please wait patiently!"
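A small sketch of composing such responses (the wording and the say output stub are illustrative assumptions):

```python
def say(text):
    """Stub for the robot's speech output (assumed interface)."""
    print(f"[robot says] {text}")

def respond(target, waiting):
    """Illustrative response logic for step 305: greet the target by name
    according to its identity information, then ask the remaining objects
    to be interacted with to wait."""
    name = target["identity"].get("name") or "there"
    say(f"Hello, {name}. May I ask what service you would like to handle?")
    if waiting:
        say("There are guests ahead of you; please wait patiently!")

respond({"identity": {"name": "Mr. Zhang"}}, waiting=["A", "B"])
```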
It should be noted that the above is only an example and does not limit the technical solution or the protection scope of the present application; in practical applications, those skilled in the art may make reasonable arrangements according to actual needs, which is not limited here.
Compared with the prior art, the human-computer interaction method provided by this embodiment further acquires the identity information of the target interaction object when acquiring its position information. Thus, after the robot moves to the area where the target interaction object is located according to the position information, it can make a response matching that object according to the identity information, further improving the user experience.
A third embodiment of the present application relates to a human-computer interaction method. This embodiment is a further improvement on the first or second embodiment, the specific improvement being as follows: after the robot is controlled to make a response matching the target interaction object, and before the target interaction object needing interaction is determined again, it is first determined whether a new object has approached the robot. The specific flow is shown in fig. 5.
Specifically, this embodiment includes steps 501 to 508, where steps 501 to 504 are substantially the same as steps 101 to 104 in the first embodiment and are not repeated here; the differences are mainly introduced below. For technical details not described in this embodiment, reference may be made to the human-computer interaction method provided in the first or second embodiment.
In step 505, it is determined whether a new object is approaching the robot. If so, proceed to step 506; otherwise, proceed directly to step 507 and reselect one object to be interacted with, from those remaining after the last human-computer interaction, as the target interaction object.
Specifically, in this embodiment, whether a new object is approaching the robot may be determined as described in the first embodiment: if a new object is detected approaching within a preset range (e.g., 5 meters) centered on the robot's current position, it is determined that a new object is approaching. The detailed determination operation is not repeated here.
In addition, in this embodiment, the number of new objects approaching the robot may be one or more than one, which is not limited here.
In step 506, biometric information of the new object is extracted.
In step 507, the target interaction object that needs to be interacted is determined again.
Specifically, the re-determined target interaction object in this embodiment is selected from the new object(s) and the objects other than the target interaction object of the last interaction operation.
For ease of understanding, the following detailed description is made:
In practical applications, especially in public places with heavy foot traffic, several objects needing to interact with the robot may exist at the same time (i.e., more than one object is determined, according to the biometric information of the identified objects, as an object to be interacted with). However, during human-computer interaction the robot can respond to only one object to be interacted with at a time (i.e., one target interaction object must be selected), and can interact with the others only after the current interaction is completed. Moreover, after one interaction is completed, in addition to the previously determined objects still waiting for the robot, new objects to be interacted with may have appeared. In this case, the operation of determining the target interaction object must be performed again, reselecting one object to be interacted with, from the newly determined objects and those remaining from the last human-computer interaction, as the target interaction object.
In addition, it should be noted that the manner of re-determining the target interaction object in this embodiment is substantially the same as in the first embodiment: the identified objects are first determined as objects to be interacted with according to the biometric information, and the final target interaction object is then selected from among them; the specific implementation details are not repeated here.
Regarding the selection of the target interaction object, in this embodiment it may still be performed according to the priority of each object to be interacted with; of course, the new target interaction object may also be determined in other ways, which is not limited here.
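The following sketch strings steps 505 to 508 together as a monitoring loop; all the callables and the robot.pending field are assumed hooks for illustration, not interfaces defined by the application:

```python
def interaction_loop(robot, detect_new_objects, extract_biometrics,
                     is_object_to_interact, pick_target):
    """Sketch of the flow in fig. 5 (steps 505-508). robot.pending holds the
    objects to be interacted with remaining from the previous round."""
    while True:
        for obj in detect_new_objects():               # step 505: new arrivals?
            obj.biometrics = extract_biometrics(obj)   # step 506
            if is_object_to_interact(obj):
                robot.pending.append(obj)
        if not robot.pending:
            continue                        # nobody to serve; keep monitoring
        target = pick_target(robot.pending) # step 507: re-determine the target
        robot.pending.remove(target)
        robot.respond_to(target)            # step 508: move and respond
```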
In step 508, the robot is controlled to make a response matching the re-determined target interaction object.
Specifically, the response process may be: moving toward the target interaction object, and, after arriving at the area where it is located, actively performing a service consultation or service guidance. The specific response may be set according to the relevant information of the re-determined target interaction object, which is not limited here.
It should be noted that the above is only an example and does not limit the technical solution or the protection scope of the present application; in practical applications, those skilled in the art may make reasonable arrangements according to actual needs, which is not limited here.
Compared with the prior art, the human-computer interaction method provided by this embodiment monitors, after each human-computer interaction operation is completed, whether a new object approaches the robot. When a new object is determined to be approaching, its biometric information is extracted and it is determined whether it is an object to be interacted with. If it is, one object to be interacted with is reselected, from the newly determined object and those remaining from the last human-computer interaction, as the target interaction object, and human-computer interaction is then performed. If the newly appeared object is not an object to be interacted with, one object is reselected directly from those remaining from the last human-computer interaction as the target interaction object, and human-computer interaction is then performed.
From the above description, it is easy to see that the human-computer interaction method provided by this embodiment enables the robot to dynamically update its perception of object states during operation, so that it can accurately make responses fitting the current scene, reducing erroneous operations and further improving the user experience.
A fourth embodiment of the present application relates to a human-computer interaction device, which is applied to a robot and has a specific structure shown in fig. 6.
As shown in fig. 6, the human-computer interaction device includes an extraction module 601, a determination module 602, and a control module 603.
The extracting module 601 is configured to extract biometric information of the identified at least one object.
A determining module 602, configured to determine, according to the biometric information, a target interaction object that needs to be interacted from the at least one object.
And the control module 603 is used for controlling the robot to make a response matched with the target interaction object.
Specifically, in this embodiment, the biometric information of the at least one identified object extracted by the extraction module 601 may be physiological characteristic information, behavior characteristic information, or a combination of the two.
In addition, it should be noted that in this embodiment the physiological characteristic information extracted by the extraction module 601 may be any one or any combination of the object's facial information, eye information, voiceprint information, and the like, and the behavior characteristic information may be any one or any combination of the object's displacement information, voice content information, and the like.
When the determination module 602 determines the target interaction object from the at least one object according to the above biometric information, the process may specifically be as follows. First, the identified objects are determined to be objects to be interacted with (objects needing interaction) according to the biometric information; for example, the object's eye information and displacement information are analyzed to determine its gaze behavior and whether it is currently seeking help, and thereby whether it is an object to be interacted with. Then, after the objects to be interacted with are determined, one object meeting the requirements is selected from them as the target interaction object (the object with which interaction is finally needed).
In addition, in this embodiment, the control module 603 controls the robot to make a response matching with the target interaction object, specifically, may control the robot to move toward the target interaction object.
Further, after the robot moves to the area where the target interaction object is located, the robot may be controlled to make a response matching that object according to its identity information, such as actively making a service inquiry or offering service guidance, for example: "Hello, may I ask what business you would like to handle?"
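A minimal composition of the three modules, with illustrative signatures (the application does not prescribe these interfaces):

```python
class HumanComputerInteractionDevice:
    """Minimal composition of the modules shown in fig. 6; the callables stand
    in for extraction module 601, determination module 602, and control
    module 603."""
    def __init__(self, extract, determine, control):
        self.extract = extract      # object -> biometric information
        self.determine = determine  # [biometric info] -> target object or None
        self.control = control      # target object -> matched robot response

    def run_once(self, identified_objects):
        infos = [self.extract(obj) for obj in identified_objects]
        target = self.determine(infos)
        if target is not None:
            self.control(target)
```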
It should be noted that the above is only an example and does not limit the technical solution or the protection scope of the present application; in practical applications, those skilled in the art may make reasonable arrangements according to actual needs, which is not limited here.
In addition, for technical details not described in this embodiment, reference may be made to the human-computer interaction method provided in any method embodiment of the present application, and they are not repeated here.
From the above description it is not difficult to see that, in the human-computer interaction device provided by this embodiment, the extraction module extracts the biometric information of the at least one identified object, the determination module determines the target interaction object needing interaction from the at least one object according to the biometric information, and the control module then controls the robot to make a response matching the target interaction object. Through the cooperation of these modules, a robot equipped with the human-computer interaction device responds only to objects that need interaction, effectively avoiding false responses and greatly improving the user experience.
The above-described device embodiment is merely illustrative and does not limit the protection scope of the present application; in practical applications, a person skilled in the art may select some or all of the modules according to actual needs to achieve the purpose of the embodiment, which is not limited here.
A fifth embodiment of the present application relates to a robot, and the specific structure is shown in fig. 7.
The robot may be an intelligent machine device placed in a public place such as a bank's business office, a large shopping mall, or an airport. Internally, it specifically includes one or more processors 701 and a memory 702; one processor 701 is taken as an example in fig. 7.
In this embodiment, all the functional modules of the human-computer interaction device described in the above embodiment are disposed on the processor 701. The processor 701 and the memory 702 may be connected by a bus or in another manner; connection by a bus is taken as an example in fig. 7.
The memory 702, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the human-computer interaction method in any method embodiment of the present application. By running the software programs, instructions, and modules stored in the memory 702, the processor 701 executes various functional applications and data processing, that is, implements the human-computer interaction method of any method embodiment of the present application.
The memory 702 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required for at least one function, and the data storage area may hold a history database for storing the priority setting conditions and the like. In addition, the memory 702 may include a high-speed random access memory and may also include a non-volatile memory. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected to the terminal device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In practical applications, the memory 702 may store instructions executable by the at least one processor 701; the instructions, when executed by the at least one processor 701, enable the at least one processor 701 to execute the human-computer interaction method of any embodiment of the present application and to control the functional modules of the human-computer interaction device to complete the positioning operations in that method.
A sixth embodiment of the present application relates to a computer-readable storage medium having stored thereon computer instructions for enabling a computer to execute the human-computer interaction method described in any of the method embodiments of the present application.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of implementing the present application, and that in practice various changes may be made to them in form and detail without departing from the spirit and scope of the present application.