CN102446428A - Robot-based interactive learning system and interactive method thereof - Google Patents
- Publication number
- CN102446428A
- Authority
- CN
- China
- Prior art keywords
- voice
- interactive
- robot
- control
- touch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Electrically Operated Instructional Devices (AREA)
- Toys (AREA)
Abstract
The invention provides a robot-based interactive learning system and an interactive method thereof. Through core processing, the system receives in real time instructions issued to it by a user outside the system via two modes, touch and voice control, and after computation and matching interacts with the user through voice, images, pictures, text, or other software-based interaction means corresponding to the instruction, so that the user can learn easily and happily in a brand-new experience mode.
Description
Technical field
The present invention relates to the field of human-computer interaction technology, and in particular to a robot-based interactive method and system.
Background technology
Human-Computer Interaction (HCI) technology refers to techniques that realize effective dialogue between people and computers through computer input/output devices. It includes the machine providing the person with relevant information, prompts, and requests through output or display devices, and the person answering questions and entering relevant information and requests into the machine through input devices. In a traditional human-machine system, however, the person is regarded merely as an operator who manipulates the machine; no real interactive activity takes place.
Traditional children's teaching follows the model of the teacher lecturing from the podium while the children listen. Faced with this unchanging model, children inevitably come to find learning dry and tedious. Such a traditional model can no longer satisfy modern family education. With the rapid development of science and technology, a technological breakthrough has emerged, namely human-computer interaction, and a robot offers the distinctive feature of immediate feedback, which no other medium possesses. If the unified audiovisual capability of a television is further combined with the interactive capability of a robot, a rich and vivid human-computer interaction mode can be produced that combines pictures and text and gives the user immediate feedback. This interactive mode can effectively stimulate children's interest in learning and arouse a strong desire to learn, thereby forming learning motivation; at the same time, interactive means better suit the age characteristics and cognitive laws of child learners.
However, existing human-computer interaction is mostly aimed at adults and adopts keyboard input, which places high demands on the operator of the device; children in particular find it difficult to operate, so such systems are difficult to apply directly to interactive learning.
Summary of the invention
The object of the present invention is to overcome the above defects and to provide a robot-based interactive learning method and system, particularly for children, that combines touch recognition and voice-control recognition in human-computer interaction and uses voice, images, and text to present cognitive objects intuitively, concretely, and vividly.
The object of the invention is achieved as follows:
The beneficial effect of the present invention is to provide a robot-based interactive learning system and interactive method. Through core processing, the system receives in real time instructions issued by a user outside the system via two modes, touch and voice control, and after computation and matching interacts with the user through sound, images, pictures, text, or other software-based interaction means corresponding to the instruction, so that the user can learn in a relaxed and happy way under a brand-new experience mode.
Description of drawings
The specific structure of the present invention is described in detail below in conjunction with the drawings:
Fig. 1 is a schematic diagram of the system architecture of the present invention;
Fig. 2 is a schematic flowchart of the method of the present invention;
Fig. 3 is a flowchart of the method of the present invention.
Embodiment
To explain in detail the technical content, structural features, objects, and effects of the present invention, a detailed description is given below in conjunction with embodiments and the accompanying drawings.
A robot used in human-computer interaction should have emotional capability. Affective computing attempts to create computing systems that can perceive, recognize, and understand human emotion and respond to it intelligently, sensitively, and in a friendly manner. The present invention relates to a robot-based interactive learning system that enables an external object (the user) to have the robot recognize instructions issued through touch or voice control; the instructions are converted into corresponding processing instructions and passed to a core processing module, and the goal of learning through interaction with children is achieved by software-based interaction modes such as sound, images, pictures, and text provided by the system. Referring to Fig. 1, the system specifically comprises:
One, a voice-control recognition module, used to monitor voice-control triggering by the external object, i.e., whether the external user is interacting with the system by voice. If a voice-control trigger occurs, the module collects the voice, converts the received voice into a processable instruction, and transmits it for core processing. The voice-control recognition module is implemented with intelligent sensors and speech recognition technology, combined with speaker-specific recognition and keyword-capture techniques; when an external user issues a voice-control instruction, the above devices make a judgment and feed back the corresponding information.
Specifically, to accomplish voice collection and conversion, the voice-control recognition module further comprises:
1. A speech detection unit, used to convert the collected speech into a standard format (e.g., 8 kHz, 16-bit) and then, through a speech detection algorithm, detect the start point and end point of the speech.
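The endpoint detection performed by this unit can be sketched as a simple per-frame energy threshold. The patent does not disclose its actual detection algorithm, so the frame length and threshold below are hypothetical illustrations of the principle only:

```python
def detect_endpoints(samples, frame_len=160, threshold=0.01):
    """Return (start, end) frame indices of detected speech.

    A minimal energy-based sketch of the speech-detection step:
    split the signal into fixed-length frames, compute each frame's
    mean energy, and report the first and last frames above threshold.
    """
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energies = [sum(x * x for x in f) / frame_len for f in frames]
    voiced = [i for i, e in enumerate(energies) if e > threshold]
    if not voiced:
        return None  # no speech detected in the input
    return voiced[0], voiced[-1]
```

Real detectors add smoothing and hangover rules so short pauses inside a sentence are not reported as endpoints; this sketch omits them for clarity.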
2. A feature extraction unit, used to apply digital signal processing to the speech and extract a feature-vector stream from the signal, thereby extracting from the speech signal the information that best reflects its essential attributes.
3. A recognition search unit, used to match the feature-vector stream of the speech against the contents of a preset acoustic model library, dictionary/lexicon, and recognition grammar library, and obtain the word sequence that best fits the characteristics of the speech.
This unit is the core of voice-control recognition. The several libraries involved are described below:
4. An acoustic model library, used to store the predetermined acoustic models. This library is the most crucial engine resource file; it contains an accurate description of the spectral and temporal characteristics of speech signals. The acoustic model library is obtained by training on a speech database of a large number of speakers, especially children, recorded in different scenarios.
5. A dictionary/lexicon, used to store the character and word information of preset everyday expressions. This data table is a large vocabulary containing tens of thousands of words, covering most of the individual characters children may use.
6. A recognition grammar library, used to store preset language grammar information. More specifically, the recognition grammar contains a description of the recognition task; simply put, it is sentence (or word-sequence) information containing the various corresponding grammars and task scenarios. Because of the particularity of the users of a children's whole-brain education system, the sentence grammars and task scenarios we include are all based on common expressions in children's daily life and on terms or sentences under the learning modes, such as popular science and traffic.
7. A grammar library of recognition tasks, used to store different recognition tasks, with a corresponding grammar library set up for each kind of recognition task. Specifically, this library influences the subsequent search algorithm: within the candidate space of unknown sentences (or word sequences), the search finds the candidate sentence with the best matching result.
The main job of this voice-control recognition module is to convert the time-domain sound wave of the speech input into digitized vector features that describe and distinguish different pronunciations; these are called speech features. Based on these features, a sound model is built for all pronunciations, which in the field of speech recognition is usually called the acoustic model; every speech recognition system must have one. At the same time, a large-vocabulary continuous speech recognition system also needs a language model. The purpose of speech recognition is, given a sequence of sound features as input, to use the acoustic model and the language model, together with a search algorithm, to output a recognition result (a character, word, or sentence). In other words, a speech recognition system searches the huge space of sentences (or characters, words) for the sentence (or character, word) that matches the given input feature sequence with maximum probability.
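The maximum-probability search described above can be sketched, in its simplest form, as scoring each candidate sentence by the product of an acoustic probability and a language-model probability and returning the best. The function names and the toy scores in the usage are illustrative assumptions, not the patent's engine:

```python
import math

def decode(features, candidates, acoustic_score, language_score):
    """Return the candidate sentence with the maximum combined probability.

    acoustic_score(features, sentence) stands in for P(features | sentence),
    language_score(sentence) for P(sentence). Log-probabilities are summed
    to avoid numerical underflow on long inputs.
    """
    best, best_logp = None, -math.inf
    for sentence in candidates:
        logp = (math.log(acoustic_score(features, sentence)) +
                math.log(language_score(sentence)))
        if logp > best_logp:
            best, best_logp = sentence, logp
    return best
```

A real engine does not enumerate all sentences; it searches the space incrementally (e.g., with a Viterbi beam search), but the objective being maximized is the same.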
8. A semantic analysis unit, used to perform grammatical and semantic analysis on the word sequence obtained by the search, based on the grammar information of the preset recognition task, and obtain the semantic information of the recognition result.
Two, a touch recognition module, used to monitor touch triggering by the external object, i.e., whether the external user has interacted with the system by touch. If a touch trigger occurs, the module converts the trigger into a processable instruction and transmits it for core processing. The touch recognition module is implemented with functions such as intelligent sensors and touch inductors; when an external user touches the robot's screen, the above devices make a judgment and feed back the corresponding information, so that the user can interact with it by touch.
Depending on actual conditions, the touch recognition module may comprise multiple induction regions or inductors, for example the robot's touch screen, buttons, infrared sensors, and so on. Different inductors or induction regions are assigned different instruction codes. When the user touches an inductor or induction region, the inductor detects it and sends the received code instruction to the MCU of the core processing module. After the MCU processes and parses it, it reports to the MID of the core processing module (the MID can be understood as a mobile internet device); the MID analyzes and processes it and sends a feedback command back to the MCU, and the MCU controls the robot's interactive execution module to make a specific action or feed information back to the user.
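The code-assignment scheme in this paragraph can be sketched as two lookup tables: one mapping each inductor's instruction code to an event, and one (the MID-side analysis, reduced to a dictionary) mapping the event to a feedback command. All codes and action names below are invented for illustration; the patent does not specify them:

```python
# Hypothetical instruction codes assigned to each inductor / induction region.
SENSOR_CODES = {
    0x01: "touch_screen",
    0x02: "button",
    0x03: "infrared",
}

# Hypothetical MID-side mapping from event to feedback command.
FEEDBACK = {
    "touch_screen": ("display", "show selected page"),
    "button":       ("voice",   "play button prompt"),
    "infrared":     ("motion",  "turn toward user"),
}

def handle_sensor(code):
    """MCU parses the code, 'reports' to the MID, and returns the feedback
    command the MCU would pass on to the interactive execution module."""
    event = SENSOR_CODES.get(code)
    if event is None:
        return ("voice", "unknown input")  # fall back to a spoken prompt
    return FEEDBACK[event]
```

Keeping the code-to-event table separate from the event-to-feedback table mirrors the MCU/MID split in the text: the MCU only parses codes, while the feedback policy lives on the MID side.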
For example, when the system encounters an external event such as its neck being touched or being knocked over, the above devices make a judgment and the robot's interactive execution module issues a corresponding request for help, for example: "Stop hitting my head, you've hit me silly!", thereby forming an interaction with the external object.
Three, a core processing module, used to receive in real time the touch triggers and voice-control triggers sent by the voice-control recognition module and the touch recognition module, and to control the interactive execution module to carry out behavioral reactions according to the instructions, thereby interacting with the external object in real time.
The core processing module further comprises:
A learning data table, used to store the learning materials and the process data that enables learning to proceed with them, as in Table 1 below.
Table 1 - learning data table example
A pattern data table (see Table 2), used to pre-store the patterns of behavioral reactions produced during learning.
Table 2 - action data table example
After receiving in real time the touch triggers and voice-control triggers sent by the voice-control recognition module and the touch recognition module, the core processing module retrieves the corresponding learning material from the learning data table, and then controls the interactive execution module to carry out a behavioral reaction according to the corresponding instruction in the action data table, thereby interacting with the external object in real time.
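The retrieval step just described, looking up the learning material in the learning data table and the reaction pattern in the action data table, can be sketched as two keyed dictionaries. Since Tables 1 and 2 are only shown as images in the patent, all of the entries below are hypothetical stand-ins:

```python
# Hypothetical stand-in for Table 1 (learning data table).
LEARNING_TABLE = {
    "counting":  {"material": "cow-counting pictures", "pages": 5},
    "tang_poem": {"material": "Tang poem audio",       "pages": 3},
}

# Hypothetical stand-in for Table 2 (action data table).
ACTION_TABLE = {
    "counting":  ["speak_prompt", "show_picture"],
    "tang_poem": ["speak_guide", "combined_dance"],
}

def react(instruction):
    """Core-processing step: retrieve the learning material and the
    reaction pattern for an instruction; fall back to a preset voice
    reaction when the instruction is unknown."""
    material = LEARNING_TABLE.get(instruction)
    actions = ACTION_TABLE.get(instruction, ["play_fallback_voice"])
    return material, actions
```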
Four, an interactive execution module, used to receive the control of the core processing module and then carry out real-time behavioral reactions toward the external object. To realize interactive behavioral reactions, the interactive execution module generally comprises:
A display screen, used to provide the external object with an interactive display of image, picture, and text results under the control of the core processing module;
A sound-producing device, used to provide the external object with sound results under the control of the core processing module;
A robot motion unit, used to provide, under the control of the core processing module, an interactive display of motions such as walking, shaking the head, and waving.
The system thus provides an interactive learning system suitable for children that can be operated in two modes, touch and voice control. The touch-control method is as follows: a touch-sensitive display screen mounted on the robot body displays the software system interface, in which specific touch-sensing regions are set; when the user touches such a region, the touch recognition system converts the touch into a corresponding instruction and passes it to the core processing module; after computation by the core processing module, the corresponding result is displayed on the screen, or the interactive execution module makes a corresponding reflex action, thereby realizing interaction between the user and the system. The voice-control method is as follows: a voice receiving module mounted on the interactive execution module body receives the voice instruction issued by the user and passes it to the core processing module for recognition, storage, and processing; the voice output module then controls the interactive execution module to make a corresponding reflex action.
As shown in Fig. 2, the present invention also provides a robot-based interactive method, comprising the steps of:
Monitoring voice-control triggering by the external object, collecting the voice, converting the received voice into a processable instruction, and transmitting it for core processing;
In this step, the process of converting the received voice into a processable instruction specifically comprises the steps of:
a) Speech detection: the collected raw speech is pre-processed and detected, including converting the raw speech signal data into a standard data format (e.g., 8 kHz, 16-bit) and detecting the start point and end point of the speech through an efficient speech signal detection algorithm.
b) Feature extraction: the detected speech is fed in and the feature-vector stream of its signal is extracted. Speech features are obtained by digital signal processing, extracting from the speech signal the information that best reflects its essential attributes; the extraction yields the feature-vector stream of the speech.
c) Recognition search: the feature-vector stream of the speech, together with the preset acoustic model library, dictionary/lexicon, and recognition grammar library, forms the engine that performs search matching and obtains the word sequence that best fits the characteristics of the speech. The contents of the acoustic model library, dictionary/lexicon, and recognition grammar library are described in the system section above.
d) Semantic analysis: based on the grammar library of the preset recognition task, grammatical and semantic analysis is performed on the word sequence obtained by the search, and the semantic information of the recognition result is obtained. Simply put, the recognition grammar contains a description of the recognition task, namely sentence (or word-sequence) information containing the various corresponding grammars and task scenarios.
Monitoring touch triggering by the external object, converting the touch trigger into a processable instruction, and transmitting it for core processing;
Real-time interactive behavioral reaction after triggering: receiving in real time the processing instructions converted from the external object's touch and voice-control triggers and, according to the instruction (specifically, reading the corresponding process data that enables learning to proceed with the learning material, and then reading the pattern of the behavioral reaction produced during learning), controlling the interactive execution module to carry out a behavioral reaction by that pattern, thereby interacting with the external object in real time. The above behavioral reaction includes, but is not limited to, displaying on the screen the corresponding result after processing the instruction converted from the touch or voice-control trigger, and/or controlling the interactive execution module to carry out interactive actions of sounding and motion.
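The three method steps above can be strung together as one loop: each trigger, whether touch or voice, is converted into a processable instruction, handed to core processing, and answered with a behavioral reaction. The sketch below is schematic, with invented instruction names and with the four-step voice pipeline collapsed into a simple normalization:

```python
def to_instruction(trigger):
    """Convert either kind of trigger into a processable instruction."""
    kind, payload = trigger
    if kind == "voice":
        return payload.strip().lower()   # stands in for steps a) through d)
    if kind == "touch":
        return f"touch:{payload}"
    raise ValueError(f"unknown trigger kind: {kind}")

def core_process(instruction):
    """Map an instruction to a behavioral reaction: a display result
    and/or a sounding/motion action, with a preset fallback."""
    reactions = {
        "recite tang poems": ("display poem", "speak guide"),
        "touch:counting":    ("display cows", "speak prompt"),
    }
    return reactions.get(instruction, ("display help", "speak fallback"))

def interact(trigger):
    return core_process(to_instruction(trigger))
```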
Through the above method, instructions issued by the external user through the two modes of touch and voice control can be received and, after system analysis, corresponding sound, images, pictures, text, or other software-based interaction means can be provided to interact with the user according to the instruction, so that the user can learn in a relaxed and happy way under a brand-new experience mode.
Method embodiment one:
Take the interactive learning method "recite Tang poems" as an example. When the user issues the voice-control command "recite Tang poems", after the voice-control or touch trigger, the converted processing instruction is transferred to the core processing module over a 2.4 GHz wireless link, and the core processing module searches the learning data table and the action data table. If there is no entry corresponding to the voice "recite Tang poems" in the search results, a preset feedback voice file is played (for example: "I didn't understand what you said, could you please say it more clearly?"), and a preset action instruction is called at the same time so that the interactive execution module makes a movement response. If the command "recite Tang poems" is found, the corresponding guide speech of the "recite Tang poems" section is played (for example: "Baby, how many Tang poems can you recite? I can recite a lot!"), and the special combined action instruction for reciting Tang poems is called at the same time so that the interactive execution module makes a movement response.
A concrete interaction example follows:
1. The user touches the "counting" icon to start interactive teaching;
2. The system receives the start-up request, launches, and runs;
3. The system receives the information and opens the interaction: the interactive execution module issues the interactive voice prompt: "Count, child: how many cows do you see in the picture?"
4. The user taps the screen to answer. If the answer is correct, the system receives the touch and the interactive execution module feeds back applause together with voice encouragement ("You are excellent!"), and the next page is entered automatically; the user may also enter a page through the small arrow on the left;
5. If the user taps a wrong answer, the interactive execution module gives voice encouragement ("Try again, you will surely get it right!"), and the picture stays on the current page; after a correct answer, the next page is entered automatically, or the user may enter a page through the small arrow on the left.
Method embodiment two:
1. The user touches the "science secrets" icon to start interactive teaching;
2. The system receives the start-up request, launches, and runs;
3. The system receives a voice-control trigger issued by the user: "What color is it under the sea?"
4. The system receives the instruction and, after computation, drives the sound device to feed back the corresponding information by voice: "Not far under the water it still looks blue, but deeper down the blue soon disappears; at a water depth of 150 meters it is very dark, and at 1000 meters under water it is pitch black."
In summary, the technical scheme provided by the invention offers a new teaching means that lets children fully engage their various senses through diversified forms such as touch and voice control, helping children acquire knowledge in an autonomous, happy atmosphere. The scheme has the following advantages:
1. Human-computer interaction: learning through lively activity
The system and method of the present invention can fully mobilize child learners, treating interaction with the system as an active exchange; it can effectively stimulate children's interest in learning and arouse a strong desire to learn, thereby forming learning motivation.
2. Concise and vivid
Through the interactive execution module, the system presents learning content in interactive forms such as dynamic sound, images, pictures, and text. Tailored to children's cognitive characteristics, complicated teaching content is presented to children in an intuitive, visual way. These rich, dynamic, visualized pictures not only arouse children's interest but also strengthen their thirst for knowledge and stimulate their initiative to learn.
3. Small steps forward: learning how to learn
In the interactive learning environment built by the present method, children can choose the content they need to study according to their own interests and select exercises suited to their own level. This truly embodies a cardinal principle of children's education: "teaching students in accordance with their aptitude". A child can choose the difficulty of the software according to his or her own ability and improve step by step, consciously building on the previous level; this truly embodies the child's role as a cognitive subject.
4. Proper evaluation cultivates initiative
The system can provide a relaxed atmosphere and give children encouraging evaluation, so that children take part in activities actively, remain interested in the activity, gain confidence, and can achieve success.
The above are merely embodiments of the present invention and do not thereby limit the scope of the claims. Any equivalent structure or equivalent flow transformation made using the contents of the description and drawings of the present invention, or direct or indirect use in other related technical fields, is likewise included. For example, speech recognition technology can be divided into two classes: speaker-dependent recognition and speaker-independent recognition. Speaker-dependent recognition is aimed at one specific person; simply put, it recognizes only one person's voice and is not suitable for a broad population. Speaker-independent recognition, on the contrary, can satisfy the speech recognition needs of different people and is suitable for a wide range of users. This highlights that the present scheme is designed for the characteristics of children's development: its users are mostly children whose speech is unclear or fuzzy, and the system platform's speaker-independent speech recognition requires no technique tied to a designated speaker; regardless of the speaker's age or sex, as long as the same language is spoken it can be recognized. In addition, the scheme can also let the interactive execution module connect to the internet through functions such as Wi-Fi and Bluetooth, satisfying multiple user needs such as downloading, information, and entertainment, which gives it a strong advantage in user experience. Therefore the above additional aspects are likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A robot-based interactive method, characterized in that it comprises:
a step of monitoring voice-control triggering by an external object, collecting voice, converting the received voice into a processable instruction, and transmitting it for core processing;
a step of monitoring touch triggering by the external object, converting the touch trigger into a processable instruction, and transmitting it for core processing;
a step of real-time interactive behavioral reaction after triggering: receiving in real time the processing instructions converted from the external object's touch and voice-control triggers, and controlling an interactive execution module to carry out a behavioral reaction according to the instructions, thereby interacting with the external object in real time.
2. The robot-based interactive method of claim 1, characterized in that said process of converting the received voice into a processable instruction comprises the steps of:
a) speech detection: performing pre-processing and detection of the voice;
said pre-processing comprising the step of converting the collected voice into a standard format;
said detection comprising the step of detecting, through a speech detection algorithm, the start point and end point of the voice;
b) feature extraction: extracting the feature-vector stream of the voice;
said voice features being the information that best reflects the essential attributes of the voice, extracted from the voice through digital signal processing;
c) recognition search: matching the feature-vector stream of the voice against a preset acoustic model library, dictionary/lexicon, and recognition grammar library, and obtaining the word sequence that best fits the characteristics of the voice;
d) semantic analysis: based on the grammar library of a preset recognition task, performing grammatical and semantic analysis on the word sequence obtained by the search, and obtaining the semantic information of the recognition result.
3. The robot-based interaction method according to claim 1, characterized in that the touch trigger comprises a trigger from the robot's touch screen and/or buttons and/or an infrared sensor.
4. The robot-based interaction method according to claim 1, characterized in that the behavior response comprises:
displaying on a screen the result corresponding to the instruction converted from the touch or voice-control trigger; and/or controlling the interactive execution module to perform interactive actions such as producing sound and moving.
5. The robot-based interaction method according to claim 1, characterized in that the instruction controlling the robot to perform a behavior response comprises the step of reading the corresponding process data generated while learning with the learning material, and then reading the behavior-response pattern produced during the learning period.
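Claim 5 (and the matching data tables of claim 9) describes two stores: process data recorded during a learning session, and pre-stored behavior-response patterns looked up from it. A minimal sketch, with hypothetical table contents and event names:

```python
# Learning data table (claim 9): process data generated while the child
# works through learning material, stored as (material_id, event) rows.
learning_data = []

# Pattern data table (claim 9): pre-stored behavior-response patterns.
pattern_table = {
    "finished_lesson": "praise_and_dance",
    "wrong_answer": "encourage_and_retry",
}

def record(material_id: str, event: str) -> None:
    """Store process data generated during learning."""
    learning_data.append((material_id, event))

def behavior_for(event: str) -> str:
    """Read the behavior-response pattern produced for a learning event."""
    return pattern_table.get(event, "idle")

record("abc_book", "finished_lesson")
_, last_event = learning_data[-1]
print(behavior_for(last_event))  # praise_and_dance
```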
6. A robot-based interactive learning system, characterized in that it comprises:
a voice-control recognition module for monitoring voice-control triggers from an external object, collecting the voice, converting the received voice into a processable instruction, and passing it on for core processing;
a touch-control recognition module for monitoring touch triggers from the external object, and converting the trigger into a processable instruction and passing it on for core processing;
a core processing module for receiving in real time the touch-control and voice-control triggers sent by the voice-control recognition module and the touch-control recognition module, and controlling the robot according to the instructions to perform behavior responses, thereby achieving real-time interaction with the external object;
an interactive execution module for receiving control from the core processing module and performing real-time behavior responses toward the external object.
7. The robot-based interactive learning system according to claim 6, characterized in that the touch-control recognition module comprises the robot's touch screen and/or buttons and/or an infrared sensor.
8. The robot-based interactive learning system according to claim 6, characterized in that the voice-control recognition module comprises:
an acoustic model library for storing preset acoustic models;
a dictionary/lexicon for storing the word and phrase information of preset everyday expressions;
a recognition grammar library for storing preset language-grammar information;
a grammar library of recognition tasks for storing different recognition tasks, each recognition task being provided with a corresponding grammar library;
a voice detection unit for converting the collected voice into a standard format and then detecting the start point and end point of the voice by a voice detection algorithm;
a feature extraction unit for extracting from the voice the information that best reflects its essential attributes;
a recognition search unit for matching the feature-vector stream of the voice against the contents of the preset acoustic model library, dictionary/lexicon, and recognition grammar library to obtain the word sequence that best fits the features of the voice;
a semantic analysis unit for performing syntactic and semantic analysis on the word sequence obtained by the search, based on the grammar information of the preset recognition task, to obtain the semantic information of the recognition result.
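One detail worth noting in claim 8 is that each recognition task carries its own grammar library, so the search space is constrained to whatever task is currently active. A sketch of that per-task lookup, with invented task names and grammars:

```python
# One grammar library per recognition task (claim 8). Restricting the
# vocabulary to the active task keeps recognition tractable and accurate.
task_grammars = {
    "math_quiz":  {"one": 1, "two": 2, "three": 3},
    "navigation": {"forward": "move_fwd", "back": "move_back"},
}

def recognize(word: str, task: str):
    """Look the word up only in the grammar of the active task."""
    grammar = task_grammars[task]
    if word not in grammar:
        raise ValueError(f"'{word}' is not in the grammar for task '{task}'")
    return grammar[word]

print(recognize("two", "math_quiz"))       # 2
print(recognize("forward", "navigation"))  # move_fwd
```

A word valid in one task (e.g. "two") is rejected in another, which is exactly the constraint the per-task grammar library provides.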
9. The robot-based interactive learning system according to claim 6, characterized in that the core processing module comprises:
a learning data table for storing the process data generated while learning with the learning material;
a pattern data table for pre-storing the behavior-response patterns produced during the learning period.
10. The robot-based interactive learning system according to claim 6, characterized in that the interactive execution module comprises:
a display screen for providing the external object with interactive displays of image, picture, and text results under control of the core processing module;
a sound-producing device for providing the external object with interactive sound results under control of the core processing module;
a robot motion unit for performing walking and/or moving as interactive displays under control of the core processing module.
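The three output units of claim 10 can be modeled as interchangeable callables behind one dispatcher, so the core processing module only ever emits a (unit, payload) pair. A minimal sketch with illustrative unit names:

```python
# Interactive execution module (claim 10): display screen, sound device,
# and motion unit behind a single dispatch table.
def display(payload: str) -> str:
    return f"screen shows {payload}"

def speak(payload: str) -> str:
    return f"speaker plays {payload}"

def move(payload: str) -> str:
    return f"robot performs {payload}"

UNITS = {"display": display, "sound": speak, "motion": move}

def execute(unit: str, payload: str) -> str:
    """Route a core-processing command to the matching output unit."""
    return UNITS[unit](payload)

print(execute("display", "a picture of a cat"))  # screen shows a picture of a cat
print(execute("motion", "walk"))                 # robot performs walk
```

Adding a new output unit then means registering one more entry in `UNITS`, without touching core processing.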
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011102711199A CN102446428A (en) | 2010-09-27 | 2011-09-14 | Robot-based interactive learning system and interactive method thereof |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010293561 | 2010-09-27 | ||
CN201010293561.7 | 2010-09-27 | ||
CN2011102711199A CN102446428A (en) | 2010-09-27 | 2011-09-14 | Robot-based interactive learning system and interactive method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102446428A true CN102446428A (en) | 2012-05-09 |
Family
ID=44467393
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010206437461U Expired - Lifetime CN201940040U (en) | 2010-09-27 | 2010-12-06 | Domestic robot |
CN2011102711199A Pending CN102446428A (en) | 2010-09-27 | 2011-09-14 | Robot-based interactive learning system and interactive method thereof |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010206437461U Expired - Lifetime CN201940040U (en) | 2010-09-27 | 2010-12-06 | Domestic robot |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN201940040U (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103552077A (en) * | 2013-10-24 | 2014-02-05 | 哈尔滨工业大学 | Domestic company robot |
CN104924310A (en) * | 2015-05-12 | 2015-09-23 | 上海人智信息科技有限公司 | Smart home voice interaction robot |
CN104959987B (en) * | 2015-07-14 | 2017-03-08 | 百度在线网络技术(北京)有限公司 | Partner robot based on artificial intelligence |
CN105364930A (en) * | 2015-10-26 | 2016-03-02 | 湖南荣乐科技有限公司 | Service robot |
JP6436548B2 (en) | 2016-07-11 | 2018-12-12 | Groove X株式会社 | Autonomous robot |
CN106003099A (en) * | 2016-08-05 | 2016-10-12 | 苏州库浩斯信息科技有限公司 | Intelligent housekeeper type robot |
CN106378786B (en) * | 2016-11-30 | 2018-12-21 | 北京百度网讯科技有限公司 | Robot based on artificial intelligence |
CN107186730B (en) * | 2017-06-23 | 2023-09-15 | 歌尔科技有限公司 | Robot |
CN109045721B (en) * | 2018-09-11 | 2020-10-30 | 广东宏穗晶科技服务有限公司 | Pet robot |
CN114949876B (en) * | 2022-05-16 | 2023-05-12 | 浙江师范大学 | Expansion and contraction intermittent conversion type children cognitive training toy robot |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1372505A (en) * | 2000-04-03 | 2002-10-02 | 索尼公司 | Control device and control method for robot |
CN101071564A (en) * | 2006-05-11 | 2007-11-14 | 通用汽车公司 | Distinguishing out-of-vocabulary speech from in-vocabulary speech |
CN101414412A (en) * | 2007-10-19 | 2009-04-22 | 陈修志 | Interaction type acoustic control children education studying device |
WO2009157733A1 (en) * | 2008-06-27 | 2009-12-30 | Yujin Robot Co., Ltd. | Interactive learning system using robot and method of operating the same in child education |
- 2010-12-06: CN application CN2010206437461U, published as CN201940040U (not active, Expired - Lifetime)
- 2011-09-14: CN application CN2011102711199A, published as CN102446428A (active, Pending)
Cited By (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103116463A (en) * | 2013-01-31 | 2013-05-22 | 广东欧珀移动通信有限公司 | Interface control method of personal digital assistant applications and mobile terminal |
CN103336788A (en) * | 2013-06-05 | 2013-10-02 | 上海交通大学 | Humanoid robot added Internet information acquisition method and system |
CN103810998B (en) * | 2013-12-05 | 2016-07-06 | 中国农业大学 | Based on the off-line audio recognition method of mobile terminal device and realize method |
CN103810998A (en) * | 2013-12-05 | 2014-05-21 | 中国农业大学 | Method for off-line speech recognition based on mobile terminal device and achieving method |
CN104698996A (en) * | 2013-12-05 | 2015-06-10 | 上海能感物联网有限公司 | Robot system under Chinese text field control |
CN104252287A (en) * | 2014-09-04 | 2014-12-31 | 广东小天才科技有限公司 | Interaction device and method for improving expression capability based on interaction device |
CN104635574A (en) * | 2014-12-15 | 2015-05-20 | 山东大学 | Infant-oriented early-education accompanying and tending robot system |
CN104635574B (en) * | 2014-12-15 | 2017-07-25 | 山东大学 | A kind of early education towards child is accompanied and attended to robot system |
CN104536677A (en) * | 2015-01-20 | 2015-04-22 | 湖南化身科技有限公司 | Three-dimensional digital portrait with intelligent voice interaction function |
WO2016206643A1 (en) * | 2015-06-26 | 2016-12-29 | 北京贝虎机器人技术有限公司 | Method and device for controlling interactive behavior of robot and robot thereof |
CN106325228B (en) * | 2015-06-26 | 2020-03-20 | 北京贝虎机器人技术有限公司 | Method and device for generating control data of robot |
CN106325228A (en) * | 2015-06-26 | 2017-01-11 | 北京贝虎机器人技术有限公司 | Method and device for generating control data of robot |
WO2016206642A1 (en) * | 2015-06-26 | 2016-12-29 | 北京贝虎机器人技术有限公司 | Method and apparatus for generating control data of robot |
CN106313113A (en) * | 2015-06-30 | 2017-01-11 | 芋头科技(杭州)有限公司 | System and method for training robot |
CN106313113B (en) * | 2015-06-30 | 2019-06-07 | 芋头科技(杭州)有限公司 | The system and method that a kind of pair of robot is trained |
CN104992578A (en) * | 2015-07-07 | 2015-10-21 | 常州市拓源电缆成套有限公司 | Voice-controlled children's story machine |
CN105126355A (en) * | 2015-08-06 | 2015-12-09 | 上海元趣信息技术有限公司 | Child companion robot and child companioning system |
CN105206263A (en) * | 2015-08-11 | 2015-12-30 | 东莞市凡豆信息科技有限公司 | Speech and meaning recognition method based on dynamic dictionary |
CN105184718A (en) * | 2015-08-28 | 2015-12-23 | 上海市同济医院 | System and method for multimedia propagation |
CN105690385B (en) * | 2016-03-18 | 2019-04-26 | 北京光年无限科技有限公司 | Call method and device are applied based on intelligent robot |
CN105690385A (en) * | 2016-03-18 | 2016-06-22 | 北京光年无限科技有限公司 | Application calling method and device based on intelligent robot |
CN105872828A (en) * | 2016-03-30 | 2016-08-17 | 乐视控股(北京)有限公司 | Television interactive learning method and device |
CN105868419A (en) * | 2016-06-02 | 2016-08-17 | 泉港区奇妙工业设计服务中心 | Robot device for searching and studying of children |
CN106024016A (en) * | 2016-06-21 | 2016-10-12 | 上海禹昌信息科技有限公司 | Children's guarding robot and method for identifying crying of children |
CN106127526A (en) * | 2016-06-30 | 2016-11-16 | 佛山市天地行科技有限公司 | Intelligent robot system and method for work thereof |
CN106228982B (en) * | 2016-07-27 | 2019-11-15 | 华南理工大学 | A kind of interactive learning system and exchange method based on education services robot |
CN106228982A (en) * | 2016-07-27 | 2016-12-14 | 华南理工大学 | A kind of interactive learning system based on education services robot and exchange method |
CN106444987A (en) * | 2016-09-22 | 2017-02-22 | 上海葡萄纬度科技有限公司 | Virtual intelligent equipment for child and operation method thereof |
WO2018053918A1 (en) * | 2016-09-22 | 2018-03-29 | 上海葡萄纬度科技有限公司 | Child virtual smart device and method for operating same |
CN106528544A (en) * | 2016-10-11 | 2017-03-22 | 深圳前海勇艺达机器人有限公司 | Method and device for starting translation function of robot |
CN106486121A (en) * | 2016-10-28 | 2017-03-08 | 北京光年无限科技有限公司 | It is applied to the voice-optimizing method and device of intelligent robot |
CN106598215A (en) * | 2016-11-02 | 2017-04-26 | 惠州Tcl移动通信有限公司 | Virtual reality system implementation method and virtual reality device |
CN106598215B (en) * | 2016-11-02 | 2019-11-08 | Tcl移动通信科技(宁波)有限公司 | The implementation method and virtual reality device of virtual reality system |
WO2018082626A1 (en) * | 2016-11-02 | 2018-05-11 | 惠州Tcl移动通信有限公司 | Virtual reality system implementation method and virtual reality device |
CN107123420A (en) * | 2016-11-10 | 2017-09-01 | 厦门创材健康科技有限公司 | Voice recognition system and interaction method thereof |
CN106799736A (en) * | 2017-01-19 | 2017-06-06 | 深圳市鑫益嘉科技股份有限公司 | The interactive triggering method and robot of a kind of robot |
CN106965172A (en) * | 2017-01-23 | 2017-07-21 | 浙江斯玛特信息科技有限公司 | A kind of control system of service robot |
CN107300970B (en) * | 2017-06-05 | 2020-12-11 | 百度在线网络技术(北京)有限公司 | Virtual reality interaction method and device |
CN107300970A (en) * | 2017-06-05 | 2017-10-27 | 百度在线网络技术(北京)有限公司 | Virtual reality exchange method and device |
CN107300918B (en) * | 2017-06-21 | 2020-12-25 | 上海思依暄机器人科技股份有限公司 | Control method and control device for changing motion state |
CN107300918A (en) * | 2017-06-21 | 2017-10-27 | 上海思依暄机器人科技股份有限公司 | A kind of control method and control device for changing motion state |
CN107343778A (en) * | 2017-06-30 | 2017-11-14 | 罗颖莉 | A kind of sweeping robot voice activated control based on intelligent terminal |
CN108460124A (en) * | 2018-02-26 | 2018-08-28 | 北京物灵智能科技有限公司 | Exchange method and electronic equipment based on figure identification |
CN108511042A (en) * | 2018-03-27 | 2018-09-07 | 哈工大机器人集团有限公司 | It is robot that a kind of pet, which is cured, |
CN108877347A (en) * | 2018-08-02 | 2018-11-23 | 安徽硕威智能科技有限公司 | Classroom outdoor scene reproducing interactive tutoring system based on robot projection function |
CN109300341A (en) * | 2018-08-30 | 2019-02-01 | 合肥虹慧达科技有限公司 | Interactive early education robot and its exchange method |
CN109260733A (en) * | 2018-09-12 | 2019-01-25 | 苏州颗粒智能玩具有限公司 | A kind of educational toy with interaction function |
CN109166365A (en) * | 2018-09-21 | 2019-01-08 | 深圳市科迈爱康科技有限公司 | The method and system of more mesh robot language teaching |
CN109493650A (en) * | 2018-12-05 | 2019-03-19 | 安徽智训机器人技术有限公司 | A kind of language teaching system and method based on artificial intelligence |
CN109524003A (en) * | 2018-12-29 | 2019-03-26 | 出门问问信息科技有限公司 | The information processing method of smart-interactive terminal and smart-interactive terminal |
CN109841122A (en) * | 2019-03-19 | 2019-06-04 | 深圳市播闪科技有限公司 | A kind of intelligent robot tutoring system and student's learning method |
CN111312220A (en) * | 2019-12-02 | 2020-06-19 | 西安冉科信息技术有限公司 | Learning method based on dialogue exchange of learning machine |
CN112289339A (en) * | 2020-06-04 | 2021-01-29 | 郭亚力 | System for converting voice into picture |
Also Published As
Publication number | Publication date |
---|---|
CN201940040U (en) | 2011-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102446428A (en) | Robot-based interactive learning system and interactive method thereof | |
US11241789B2 (en) | Data processing method for care-giving robot and apparatus | |
CN108000526B (en) | Dialogue interaction method and system for intelligent robot | |
CN113760142B (en) | Interaction method and device based on virtual roles, storage medium and computer equipment | |
CN108921284B (en) | Interpersonal interaction limb language automatic generation method and system based on deep learning | |
CN109940627B (en) | Man-machine interaction method and system for picture book reading robot | |
US11511436B2 (en) | Robot control method and companion robot | |
CN106097793B (en) | Intelligent robot-oriented children teaching method and device | |
CN109710748B (en) | Intelligent robot-oriented picture book reading interaction method and system | |
CN105126355A (en) | Child companion robot and child companioning system | |
CN103377568B (en) | Multifunctional child somatic sensation educating system | |
CN110598576A (en) | Sign language interaction method and device and computer medium | |
CN111858861B (en) | Question-answer interaction method based on picture book and electronic equipment | |
CN109933198B (en) | Semantic recognition method and device | |
CN110808038B (en) | Mandarin evaluating method, device, equipment and storage medium | |
CN110245253B (en) | Semantic interaction method and system based on environmental information | |
Henderson et al. | Development of an American Sign Language game for deaf children | |
CN110176163A (en) | A kind of tutoring system | |
CN117874185A (en) | Conversational artificial intelligent driving personality simulating system based on context awareness and operation method | |
CN112232066A (en) | Teaching outline generation method and device, storage medium and electronic equipment | |
CN112230777A (en) | Cognitive training system based on non-contact interaction | |
CN111515970A (en) | Interaction method, mimicry robot and related device | |
CN111949773A (en) | Reading equipment, server and data processing method | |
KR20210019818A (en) | Interactive sympathetic learning contents providing system and method | |
Green | Designing and Evaluating Human-Robot Communication: Informing Design through Analysis of User Interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C12 | Rejection of a patent application after its publication | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20120509 |