CN106903695B - Projection interactive method and system applied to intelligent robot - Google Patents
Projection interactive method and system applied to intelligent robot
- Publication number
- CN106903695B (application CN201710027806.3A)
- Authority
- CN
- China
- Prior art keywords
- projection
- data
- user
- output
- modal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a projection interaction method and system applied to an intelligent robot. The method comprises: during projection output, judging whether a setting event is triggered and, if so, pausing the projection output; receiving and parsing multi-modal input data from a user; and outputting projection data corresponding to the parsing result of the multi-modal input data. The invention enables the robot to carry out multi-modal interaction with the user during projection, improving the intelligence and human-likeness of the robot so that teaching, gaming and similar effects are better achieved.
Description
Technical field
The present invention relates to the field of intelligent robots, and more particularly to a projection interaction method and system applied to an intelligent robot.
Background art
With the continuous development of science and technology and the introduction of information technology, computer technology and artificial intelligence, robotics research has gradually moved beyond the industrial field and extended to fields such as medical care, health care, the family, entertainment and the service industry. Accordingly, people's expectations of robots have risen from simple repetitive mechanical actions to an intelligent robot capable of anthropomorphic question answering, autonomy and interaction with other robots. Human-computer interaction has thus become an important factor determining the development of intelligent robots. Improving the interactive capability of intelligent robots and enhancing their intelligence has therefore become an important problem in urgent need of a solution.
Summary of the invention
The first technical problem to be solved by the present invention is to provide a solution that enables a robot to carry out multi-modal interaction with a user during projection, improving the intelligence and human-likeness of the robot.
To solve the above technical problem, an embodiment of the present application first provides a projection interaction method applied to an intelligent robot. The method comprises: during projection output, judging whether a setting event is triggered and, if so, pausing the projection output; receiving and parsing multi-modal input data from a user; and outputting projection data corresponding to the parsing result of the multi-modal input data.
Preferably, the setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data the robot issues to the user at the specific interaction time point.
Preferably, the projection interaction method is realized by means of a projection application program: a user intention is obtained according to the parsing result, and it is judged whether predefined projection data corresponding to the user intention exists in the projection application program. If it exists, the projection data is projected; otherwise the user intention is sent to a cloud server and the feedback data analyzed by the cloud server is received.
Preferably, the method further includes: judging, by means of the projection application program, whether to project the feedback data.
Preferably, other multi-modal data corresponding to the projection data is also output while the projection data is projected.
An embodiment of the present application also provides a projection interaction system applied to an intelligent robot. The system comprises: a projection output control module, which, during projection output, judges whether a setting event is triggered and, if so, pauses the projection output; a user data receiving module, which receives and parses multi-modal input data from the user; and a projection data output module, which outputs corresponding projection data according to the parsing result of the multi-modal input data.
Preferably, the setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data the robot issues to the user at the specific interaction time point.
Preferably, the projection data output module further obtains a user intention according to the parsing result and judges whether predefined projection data corresponding to the user intention exists. If it exists, the module projects the projection data; otherwise it sends the user intention to a cloud server and receives the feedback data analyzed by the cloud server.
Preferably, the projection data output module further judges whether to project the feedback data.
Preferably, the projection data output module further outputs other multi-modal data corresponding to the projection data while the projection data is projected.
Compared with the prior art, one or more embodiments of the above scheme can have the following advantages or beneficial effects:

The embodiments of the invention provide a projection interaction method applied to an intelligent robot. During projection output, the projection is paused at the moment a setting event is triggered; multi-modal input data from the user is then received and parsed; and finally projection data corresponding to the parsing result is output. This enables the robot to carry out multi-modal interaction with the user during projection, improving the intelligence and human-likeness of the robot so that teaching, gaming and similar effects are better achieved.
Other features and advantages of the present invention will be set forth in the following description, will in part become apparent from the description, or will be understood by implementing the technical solution of the present invention. The objectives and other advantages of the invention can be achieved and obtained through the structures and/or processes specifically noted in the specification, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for further understanding of the technical solution of the application or of the prior art, and constitute a part of the specification. The drawings expressing the embodiments of the application serve, together with the embodiments, to explain the technical solution of the application, but do not constitute a limitation of it.
Fig. 1 is a flow diagram of example one of the projection interaction method applied to an intelligent robot according to the invention.
Fig. 2 is a flow diagram of example two of the projection interaction method applied to an intelligent robot according to the invention.
Fig. 3 is a flow diagram of example three of the projection interaction method applied to an intelligent robot according to the invention.
Fig. 4 is a structural block diagram of example four, the projection interaction system 400 applied to an intelligent robot according to the invention.
Detailed description of the embodiments
Hereinafter, embodiments of the present invention are described in detail with reference to the accompanying drawings and examples, so that the process by which the invention applies technical means to solve the technical problem and achieve the relevant technical effects can be fully understood and implemented. The features of the embodiments of this application can be combined with each other provided they do not conflict, and the technical solutions so formed all fall within the scope of protection of the present invention.

In addition, the steps illustrated in the flow charts of the drawings can be executed in a computer system containing, for example, a set of computer-executable instructions. Moreover, although logical orders are shown in the flow charts, in some cases the steps shown or described can be executed in an order different from the one herein.
In the prior art, most existing projection devices simply project image data or video data onto a projection screen (for example, the ground or a wall) and in some cases also play accompanying sound or background music while projecting, for the user to watch statically. However, when a projection device is used to carry out educational activities, the viewers are generally children with a low level of knowledge, and all kinds of questions often arise during study; watching a projection for a long time without guidance or interaction therefore easily causes aversion in the user and renders the educational activity largely ineffective. The embodiments of the present invention accordingly provide a solution that can improve the human-likeness of an intelligent robot and thereby enable the robot to carry out multi-modal interaction with a user during projection.
The projection interaction method applied to an intelligent robot of the embodiments of the present invention enables a robot to interact with a user during projection output, improving the robot's intelligence and human-likeness. Specifically, when the robot teaches by means of projection and a specific event triggers, for example when the projection output time reaches a specific interaction time point, the robot can, like a teacher, actively pause the projection output in time and ask the child whether the content just presented was understood. The robot obtains the information of the child shaking or nodding his or her head through its visual capability, or obtains the yes/no voice information uttered by the child through voice, so as to decide whether to replay the previous projection or continue playing. In addition, during projection output the robot also monitors the user. If it detects the setting event of the user actively outputting multi-modal input data (for example, while an ancient poem is being recited to the child, the child suddenly interrupts to ask "what does curved neck mean?"), then, having detected the user's voice data, the robot pauses the projection output, parses the user's multi-modal input data, and outputs corresponding projection data according to the parsing result.
In a specific implementation, in order to reduce development cost, manpower and material resources, the projection interaction method can be realized in the form of an application program (APP), hereinafter referred to as the "projection application program".
It should be noted that when projection data corresponding to the parsing result is output, it is first judged whether predefined projection data corresponding to the user intention in the parsing result exists; if it exists, that projection data is output. If no predefined projection data exists, the user intention is sent to a cloud server and the feedback data analyzed by the cloud server is received. The cloud server can be regarded as the cloud brain of the robot: tasks or information that the robot cannot handle locally are sent to the cloud server for processing, which helps the robot accomplish complex tasks and relieves the processing burden on the robot's local CPU. After the robot obtains the feedback data from the cloud, it judges whether to project that feedback data. The feedback data from the cloud can be projection data to be projected, or a combination of projection data and voice data matched with the projection data; of course, the feedback data can also be result data representing a processing result, and so on.
In addition, while projecting, the robot can also output other multi-modal data corresponding to the projection data, for example a mechanical control instruction that makes the robot execute a certain limb action, or an expression control instruction that makes the robot display a facial expression. In certain practical application scenarios this imitates human expressions and movements, improving the user's engagement with and attachment to the robot.
Embodiment one
Fig. 1 is a flow diagram of example one of the projection interaction method applied to an intelligent robot according to the invention. The method of this embodiment mainly includes the following steps.
In step S110, the intelligent robot carries out projection output according to the set projection data.
The intelligent robot of this embodiment has a projection function. Before projecting, it can obtain the data to be projected from an external device, or select the data to be projected from its own memory according to a control instruction of the user. A large amount of projection data is stored in the robot's memory; the data can be image data or video data. When the set data to be projected (for example, image data or video data) is projected, the data to be projected is first decoded and converted into corresponding projection information, and the decoded projection information is then projected.
In step S120, during projection output, it is judged whether a setting event is triggered; if so, the projection output is paused. The setting event includes detecting multi-modal input data output by the user, or reaching a specific interaction time point.
In this step, the robot carries out projection output on the one hand, and on the other hand can monitor various request events during projection, such as timer request events and semaphore request events. In order to allow the robot to interact with the user during projection, this embodiment defines setting events that trigger the operation of pausing the projection output. Where the embodiment of the invention is realized by an application program, the application program includes a message loop that repeatedly monitors the message queue and checks for setting-event messages; such messages include a timer expiring, multi-modal input data issued by the user being detected, and so on. In fact, these events are first received by the robot operating system; upon receiving them, the robot operating system generates messages describing the corresponding events and dispatches the messages to the application program. After receiving these messages, the application program queries its message mapping table, calls the corresponding message response function, and completes the operation of pausing the projection output.
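The message loop described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the event names, the `ProjectionApp` class and its message map are all invented for illustration, and Python's `queue.Queue` stands in for the robot operating system's message queue.

```python
from queue import Empty, Queue

# Hypothetical setting-event messages that the robot OS layer might post.
TIMER_REACHED = "timer_reached"      # projection time hit a specific interaction point
USER_INPUT_DETECTED = "user_input"   # microphone/camera picked up user data


class ProjectionApp:
    """Minimal sketch of the projection application's message loop."""

    def __init__(self):
        self.message_queue = Queue()
        self.projecting = True
        # Message mapping table: event message -> response function.
        self.message_map = {
            TIMER_REACHED: self.pause_projection,
            USER_INPUT_DETECTED: self.pause_projection,
        }

    def pause_projection(self):
        # The response function: stop projection output.
        self.projecting = False

    def post(self, message):
        # In the real system the robot operating system posts these messages.
        self.message_queue.put(message)

    def run_once(self):
        # One loop iteration: take a message from the queue, if any,
        # and dispatch it through the message mapping table.
        try:
            message = self.message_queue.get_nowait()
        except Empty:
            return
        handler = self.message_map.get(message)
        if handler is not None:
            handler()


app = ProjectionApp()
app.post(USER_INPUT_DETECTED)  # e.g. the child asks a question mid-projection
app.run_once()
print(app.projecting)          # prints False: projection is paused
```

A real message loop would run continuously in a thread and carry event payloads; the single-iteration `run_once` keeps the dispatch mechanism visible.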
It should be noted that "timer expiring" among the setting events refers to reaching a specific interaction time point. The specific interaction time point can be set to coincide with the moment at which a set piece of content in the projection finishes playing; in a teaching scenario, for example, the robot pauses playback when the time point is reached and issues multi-modal output data to ask the user about his or her understanding of the content played so far, so that a better teaching effect can be achieved. On the other hand, during projection playback the robot's image capture device and sound collection device collect the user's information in real time or at set intervals. If multi-modal input data issued by the user is collected, the setting event is considered triggered and the projection playback is paused. For example, when the robot is explaining an ancient poem to a child by means of projection and the child suddenly interrupts during the lesson to ask "what does curved neck mean?", the robot collects this information through its sound collection device, determines that the setting event has been triggered, pauses the projection playback, and records the position at which the projection is currently playing.
In step S130, multi-modal input data from the user is received and parsed. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data the robot issues to the user at the specific interaction time point.
It should be noted that if the robot pauses the projection playback upon reaching a set time point, the robot actively issues multi-modal output data to the user. This output data is mainly voice data, mostly inquiry content asking about the user's understanding of the projected content shown so far, such as "Did you understand what was just said?". The user then responds to the multi-modal output data and generates multi-modal input data, for example uttering voice messages such as "I understand", "I don't understand" or "XXX is still unclear", or making a head-shaking or nodding movement.
After the multi-modal input data from the user is received, it is parsed. The parsing result can include information such as the identified data features of the multi-modal input data and/or the task information the data characterizes. The complexity and flow of parsing differ entirely for different multi-modal input data. If the acquired information is sound, the robot submits the multi-modal data to a local ASR engine, the ASR engine of a cloud server, or a mixed local-and-cloud ASR and VPR (voiceprint recognition) engine. These engines use ASR technology to convert the voice data into text information. Specifically, preprocessing such as denoising is first performed on the multi-modal input data, and comprehensive speech-recognition analysis is then carried out on the preprocessed voice information to generate the corresponding text information. Furthermore, during recognition the features of the input voice signal can be compared with pre-stored sound templates according to the speech-recognition model and, following a certain search and matching strategy, a series of templates optimally matching the input voice is found; the recognition result is then given by lookup according to the definitions of these templates. If the acquired information is image data, the human body posture is obtained by parsing the two-dimensional image with motion analysis technology.
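The template-matching step described above can be illustrated with a toy nearest-neighbour matcher. The feature vectors, the template table and the `recognize` function below are entirely invented for illustration; a real ASR engine uses far richer features and search strategies.

```python
# Toy "sound templates": feature vectors paired with recognized text.
# Features and vocabulary are made up purely for illustration.
TEMPLATES = {
    (0.9, 0.1, 0.3): "I understand",
    (0.1, 0.8, 0.5): "I don't understand",
    (0.4, 0.4, 0.9): "what does curved neck mean",
}


def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def recognize(features):
    """Return the text of the nearest pre-stored template."""
    best = min(TEMPLATES, key=lambda template: distance(template, features))
    return TEMPLATES[best]


print(recognize((0.85, 0.15, 0.25)))  # prints: I understand
```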
In step S140, projection data corresponding to the parsing result of the multi-modal input data is output.
After parsing yields the result of the user's response to the robot's multi-modal output data, the robot queries the mapping list between parsing results and projection data and outputs the corresponding projection data. For example, if the user's parsing result is "I don't understand", the robot outputs the content preceding the specific interaction time point again, or outputs content explaining it in more detail, to the user. If the user's parsing result is "I understand", the robot continues playing the set projection.
It is readily understood that after the projection data corresponding to the parsing result is output, the previously set projection data has not yet been output in full; the robot therefore continues playing the projection from the position recorded when the projection was paused, completing this projection output.
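The pause-and-resume behaviour described above can be sketched as follows, with the projection data modeled as a simple frame sequence. The `ProjectionPlayer` class and its frame representation are assumptions made for illustration, not part of the patent.

```python
class ProjectionPlayer:
    """Sketch of pausing with position recording and resuming from it."""

    def __init__(self, frames):
        self.frames = list(frames)  # projection data as a frame sequence
        self.position = 0           # index of the next frame to project
        self.played = []

    def play(self, until=None):
        # Project frames up to the end, or up to a set interaction point.
        stop = len(self.frames) if until is None else until
        while self.position < stop:
            self.played.append(self.frames[self.position])
            self.position += 1

    def pause_position(self):
        # The recorded playback position, used later as the resume point.
        return self.position


player = ProjectionPlayer(["part 1", "part 2", "part 3", "part 4"])
player.play(until=2)             # a setting event triggers after two parts
resume_at = player.pause_position()
# ... multi-modal interaction with the user happens here ...
player.play()                    # resume from the recorded position
print(resume_at, player.played)  # prints: 2 ['part 1', 'part 2', 'part 3', 'part 4']
```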
To sum up, the embodiment of the present invention enables the robot to carry out multi-modal interaction with the user during projection, improving the intelligence and human-likeness of the robot so that teaching, gaming and similar effects are better achieved.
Embodiment two
Fig. 2 is a flow diagram of example two of the projection interaction method applied to an intelligent robot according to the invention. The method of this embodiment mainly includes the following steps; steps similar to those of embodiment one are marked with the same labels and their contents are not repeated, and only the differing steps are described in detail.
In step S110, the intelligent robot carries out projection output according to the set projection data.
In step S120, during projection output, it is judged whether a setting event is triggered; if so, the projection output is paused. The setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point.
In step S130, multi-modal input data from the user is received and parsed.
In step S210, a user intention is obtained according to the parsing result.
The information obtained by analyzing the multi-modal data in step S130 above is typically just the text information corresponding to the voice information, or the human body posture information corresponding to the user's action; what user intention this surface information expresses still requires further screening and matching before the robot can understand it. Taking voice information as an example, suppose the speech-recognition result is "what does curved neck mean". After obtaining this content, the robot extracts the key information in it, such as "curved neck" and "what meaning", and uses this information as a guide to screen the matching user intention out of the preset user intention database, for example obtaining the user intention "explain the meaning of curved neck" from the parsing result "what does curved neck mean".
In step S220, it is judged whether predefined projection data corresponding to the user intention exists. If predefined projection data corresponding to the user intention exists, step S230 is executed; otherwise step S240 is executed.
A database associating user intentions with preset projection data is stored in advance in the robot's local memory; by querying this database, the projection data corresponding to a user intention can be found. For example, the projection data corresponding to the user intention "explain the meaning of curved neck" is image data containing a goose with a curved neck. If the embodiment of the invention is realized by an application program, it is judged whether predefined projection data corresponding to the user intention exists in the projection application program.
In step S230, the projection data is projected; after the output ends, the process returns to step S110.
After the projection data corresponding to the user intention is found, that projection data is output. To guarantee the integrity of the projection data output earlier, the projection that has not yet been output is then resumed from the position recorded at the moment the projection output was paused.
In step S240, the user intention is sent to the cloud server, and the feedback data analyzed by the cloud server is received.
When the robot's locally stored database is queried for projection data, it is quite possible that, owing to limitations of the robot's internal memory hardware and CPU processing capacity, no predefined projection data corresponding to the user intention is stored. In order to interact with the user better, when no corresponding projection data is found, the user intention is sent to the cloud server to be handled by the cloud server.
The cloud server processes the user intention and queries content corresponding to it. This content can include projection data, a combination of projection data and voice data, or feedback data in other forms.
In step S250, it is judged whether to project the feedback data. If the judgment result is yes, step S230 is executed; otherwise the process returns to step S110.
The robot receives the feedback data from the cloud server and judges whether it includes projection data that can be projected. If it includes projection data, step S230 is executed and projection is carried out according to the feedback data; otherwise the projection is resumed from the position recorded when the projection was paused, completing this projection output. If the embodiment of the invention is realized by an application program, whether to project the feedback data is judged by the projection application program.
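The cloud fallback of steps S240 and S250 can be sketched as follows. `fake_cloud_server` is a local stand-in for the real cloud server (no network request is made), and the feedback-data format is an assumption made for illustration.

```python
def fake_cloud_server(user_intent):
    """Local stand-in for the cloud server: maps an intention to feedback
    data. A real system would send a network request here instead."""
    answers = {
        "explain the meaning of curved neck": {
            "projection": "image: goose with a curved neck",
            "voice": "The goose bends its neck while singing toward the sky.",
        },
        "tell me the time": {
            # No projectable content, only result data.
            "result": "It is three o'clock.",
        },
    }
    return answers.get(user_intent, {"result": "unknown intent"})


def handle_feedback(feedback, resume_position):
    """Project the feedback if it contains projection data; otherwise
    resume the paused projection from the recorded position."""
    if "projection" in feedback:
        return ("project", feedback["projection"])
    return ("resume", resume_position)


feedback = fake_cloud_server("explain the meaning of curved neck")
print(handle_feedback(feedback, resume_position=2))
# prints: ('project', 'image: goose with a curved neck')
```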
Embodiment three
Fig. 3 is a flow diagram of example three of the projection interaction method applied to an intelligent robot according to the invention. The method of this embodiment mainly includes the following steps; steps similar to those of embodiments one and two are marked with the same labels and their contents are not repeated, and only the differing steps are described in detail.
In step S110, the intelligent robot carries out projection output according to the set projection data.
In step S120, during projection output, it is judged whether a setting event is triggered; if so, the projection output is paused. The setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point.
In step S130, multi-modal input data from the user is received and parsed.
In step S210, a user intention is obtained according to the parsing result.
In step S220, it is judged whether predefined projection data corresponding to the user intention exists. If predefined projection data corresponding to the user intention exists, step S310 is executed; otherwise step S240 is executed.
In step S310, it is further judged whether other multi-modal data corresponding to the projection data exists. If other multi-modal data corresponding to the projection data exists, step S320 is executed; otherwise step S230 is executed.
It should be noted that the other multi-modal data can be, for example, a mechanical control instruction that makes the robot execute a certain limb action, or an expression control instruction that makes the robot display a facial expression, imitating human expressions and movements in certain practical application scenarios. In this example, in addition to the associated storage of user intentions and predefined projection data, the robot's memory also stores a database associating projection data with multi-modal data. By querying this database, it is judged whether other multi-modal data corresponding to the predefined projection data exists. For example, the user intention "explain the meaning of curved neck" corresponds to the projection data of a goose with a curved neck, and that projection data in turn corresponds to voice data explaining "curved neck": the words describe the posture of the goose singing heartily toward the sky, and refer to the goose's bent neck.
In step S320, the projection data is projected and the other multi-modal data is output; after the output ends, the process returns to step S110.
The other multi-modal data is output at the same time as, or after, the projection data is projected; the output time of the other multi-modal data is not limited. In this way the projection data can be further explained, helping the user understand the projected content. If the multi-modal data is voice data, the output operation is executed by the voice output device; if the multi-modal data is a mechanical control instruction, the corresponding mechanical structure of the robot executes it according to the instruction.
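The routing of accompanying multi-modal data to output devices can be sketched as follows; the `dispatch` function, the item format and the device behaviour are illustrative assumptions only, with device calls modeled as strings.

```python
def dispatch(multimodal_item):
    """Route one accompanying multi-modal item to the matching output
    device. Device calls are modeled as returned description strings."""
    kind, payload = multimodal_item
    if kind == "voice":
        return "voice device plays: " + payload
    if kind == "mechanical":
        return "mechanical structure executes: " + payload
    if kind == "expression":
        return "face displays: " + payload
    raise ValueError("unknown multi-modal kind: " + kind)


# Items that might accompany the goose projection (illustrative only).
items = [
    ("voice", "explanation of 'curved neck'"),
    ("mechanical", "stretch the neck upward like the goose"),
]
for item in items:
    print(dispatch(item))
```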
In step S230, the projection data is projected; after the output ends, the process returns to step S110.
In step S240, the user intention is sent to the cloud server, and the feedback data analyzed by the cloud server is received.
In step S250, it is judged whether to project the feedback data. If the judgment result is yes, step S230 is executed; otherwise the process returns to step S110.
Embodiment four
Fig. 4 is a structural block diagram of the projection interaction system 400 applied to an intelligent robot of the embodiment of the present application. As shown in Fig. 4, the projection interaction system 400 of the embodiment of the present application mainly includes: a projection output control module 410, a user data receiving module 420 and a projection data output module 430.
The projection output control module 410, during projection output, determines whether a setting event is triggered; if so, the projection output is paused. The setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point.
The user data receiving module 420 receives and parses multi-modal input data from the user. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, where the multi-modal output data is data issued by the robot to the user at the specific interaction time point.
The projection data output module 430 outputs the corresponding projection data according to the result of parsing the multi-modal input data. The projection data output module 430 further obtains the user intention according to the parsing result and determines whether predefined projection data corresponding to the user intention exists; if so, the projection data is output by projection; otherwise, the user intention is sent to the cloud server, and the feedback data analyzed by the cloud server is received. The projection data output module 430 further determines whether to perform projection output of the feedback data. In addition, other multi-modal data corresponding to the projection data is also output while the projection data is output by projection.
With a suitable arrangement, the projection interaction system 400 of this embodiment can execute each step of Embodiments 1, 2, and 3, which will not be described again here.
The method of the present invention is described as being implemented in a computer system. The computer system may, for example, be provided in the control core processor of the robot. For example, the methods described herein may be implemented as software executable with control logic, executed by a CPU in the robot operating system. The functions described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium. When implemented in this manner, the computer program includes a set of instructions which, when run by a computer, cause the computer to perform a method implementing the above functions. Programmable logic may be temporarily or permanently installed in a non-transitory tangible computer-readable medium, such as a read-only memory chip, computer memory, a disk, or another storage medium. Besides being realized in software, the logic described herein may be embodied using discrete components, integrated circuits, programmable logic used in combination with a programmable logic device such as a field programmable gate array (FPGA) or a microprocessor, or any other device including any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
It should be understood that the disclosed embodiments of the present invention are not limited to the specific structures, processing steps, or materials disclosed herein, but extend to their equivalents as would be understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "one embodiment" or "an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment.
Although the embodiments of the present invention are disclosed as above, the described content is merely an implementation adopted to facilitate understanding of the present invention and is not intended to limit the invention. Any person skilled in the art to which this invention pertains may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention, but the scope of patent protection of the present invention shall still be subject to the scope defined by the appended claims.
Claims (8)
1. A projection interaction method applied to an intelligent robot, characterized in that the method comprises:
during projection output, determining whether a setting event is triggered, and if the setting event is triggered, pausing the projection output;
receiving and parsing multi-modal input data from a user;
outputting corresponding projection data according to a result of parsing the multi-modal input data,
wherein the setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point;
the multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data issued by the robot to the user at the specific interaction time point.
2. The method according to claim 1, characterized in that:
the projection interaction method is implemented in the form of an application program;
a user intention is obtained according to the parsing result, and the projection application program determines whether predefined projection data corresponding to the user intention exists; if so, the projection data is output by projection; otherwise, the user intention is sent to a cloud server, and feedback data analyzed by the cloud server is received.
3. The method according to claim 2, characterized by further comprising:
determining, by the projection application program, whether to perform projection output of the feedback data.
4. The method according to any one of claims 1 to 3, characterized in that:
other multi-modal data corresponding to the projection data is also output while the projection data is output by projection.
5. A projection interaction system applied to an intelligent robot, characterized in that the system comprises:
a projection output control module, which, during projection output, determines whether a setting event is triggered, and if the setting event is triggered, pauses the projection output;
a user data receiving module, which receives and parses multi-modal input data from a user;
a projection data output module, which outputs corresponding projection data according to a result of parsing the multi-modal input data,
wherein the setting event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point; the multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data issued by the robot to the user at the specific interaction time point.
6. The system according to claim 5, characterized in that:
the projection data output module further obtains a user intention according to the parsing result and determines whether predefined projection data corresponding to the user intention exists; if so, the projection data is output by projection; otherwise, the user intention is sent to a cloud server, and feedback data analyzed by the cloud server is received.
7. The system according to claim 6, characterized in that:
the projection data output module further determines whether to perform projection output of the feedback data.
8. The system according to any one of claims 5 to 7, characterized in that:
the projection data output module further outputs other multi-modal data corresponding to the projection data while the projection data is output by projection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710027806.3A CN106903695B (en) | 2017-01-16 | 2017-01-16 | Projection interactive method and system applied to intelligent robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106903695A CN106903695A (en) | 2017-06-30 |
CN106903695B true CN106903695B (en) | 2019-04-26 |
Family
ID=59206486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710027806.3A Active CN106903695B (en) | 2017-01-16 | 2017-01-16 | Projection interactive method and system applied to intelligent robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106903695B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107553505A (en) * | 2017-10-13 | 2018-01-09 | 刘杜 | Autonomous introduction system platform robot and explanation method |
CN107741882B (en) * | 2017-11-22 | 2021-08-20 | 创新先进技术有限公司 | Task allocation method and device and electronic equipment |
CN108748141A (en) * | 2018-05-04 | 2018-11-06 | 安徽三弟电子科技有限责任公司 | A kind of children animation dispensing robot control system based on voice control |
KR102673293B1 (en) * | 2018-11-08 | 2024-06-11 | 현대자동차주식회사 | Service robot and method for operating thereof |
CN114274184B (en) * | 2021-12-17 | 2024-05-24 | 重庆特斯联智慧科技股份有限公司 | Logistics robot man-machine interaction method and system based on projection guidance |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102355118B1 (en) * | 2014-01-06 | 2022-01-26 | 삼성전자주식회사 | Electronic device, and method for displaying an event on a virtual reality mode |
US10152117B2 (en) * | 2014-08-07 | 2018-12-11 | Intel Corporation | Context dependent reactions derived from observed human responses |
CN104985599B (en) * | 2015-07-20 | 2018-07-10 | 百度在线网络技术(北京)有限公司 | Study of Intelligent Robot Control method, system and intelligent robot based on artificial intelligence |
CN205521501U (en) * | 2015-11-14 | 2016-08-31 | 华中师范大学 | Robot based on three -dimensional head portrait of holographically projected 3D |
CN105807933B (en) * | 2016-03-18 | 2019-02-12 | 北京光年无限科技有限公司 | A kind of man-machine interaction method and device for intelligent robot |
CN105835064B (en) * | 2016-05-03 | 2018-05-01 | 北京光年无限科技有限公司 | The multi-modal output method and intelligent robot system of a kind of intelligent robot |
CN106228982B (en) * | 2016-07-27 | 2019-11-15 | 华南理工大学 | A kind of interactive learning system and exchange method based on education services robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106903695B (en) | Projection interactive method and system applied to intelligent robot | |
US11765439B2 (en) | Intelligent commentary generation and playing methods, apparatuses, and devices, and computer storage medium | |
CN107030691B (en) | Data processing method and device for nursing robot | |
Zhang et al. | Driver behavior recognition via interwoven deep convolutional neural nets with multi-stream inputs | |
KR20210110620A (en) | Interaction methods, devices, electronic devices and storage media | |
CN109710748B (en) | Intelligent robot-oriented picture book reading interaction method and system | |
CN105843118B (en) | A kind of robot interactive method and robot system | |
CN104777911B (en) | A kind of intelligent interactive method based on holographic technique | |
WO2023226913A1 (en) | Virtual character drive method, apparatus, and device based on expression recognition | |
CN105843381A (en) | Data processing method for realizing multi-modal interaction and multi-modal interaction system | |
CN109176535A (en) | Exchange method and system based on intelligent robot | |
CN108460324A (en) | A method of child's mood for identification | |
CN112632778A (en) | Operation method and device of digital twin model and electronic equipment | |
CN112668492A (en) | Behavior identification method for self-supervised learning and skeletal information | |
Gao et al. | Architecture of visual design creation system based on 5G virtual reality | |
CN109213304A (en) | Gesture interaction method and system for live broadcast teaching | |
CN109343695A (en) | Exchange method and system based on visual human's behavioral standard | |
CN109977238A (en) | Generate the system for drawing this, method and apparatus | |
CN109857929A (en) | A kind of man-machine interaction method and device for intelligent robot | |
Peters et al. | Smart objects for attentive agents | |
CN110837549B (en) | Information processing method, device and storage medium | |
CN109086351A (en) | A kind of method and user tag system obtaining user tag | |
CN108983966A (en) | Reformation of convicts assessment system and method based on virtual reality and eye movement technique | |
CN110309470A (en) | A kind of virtual news main broadcaster system and its implementation based on air imaging | |
CN112860213B (en) | Audio processing method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||