CN111026864B - Dictation content determining method and device - Google Patents
Dictation content determining method and device
- Publication number
- CN111026864B (application CN201910335089.XA / CN201910335089A)
- Authority
- CN
- China
- Prior art keywords
- dictation
- dictation content
- content
- user
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/04—Electrically-operated educational appliances with audible presentation of the material to be studied
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a method and a device for determining dictation content, wherein the method comprises the following steps: determining characteristic information of a dictation content set selected by a user, wherein the characteristic information comprises target classification categories corresponding to the dictation content in the set; determining, according to a mapping relation between pre-constructed classification categories and sub-dictation content libraries, a target sub-dictation content library matched with the target classification category from all pre-stored sub-dictation content libraries, wherein the target sub-dictation content library comprises at least one dictation content; and determining the dictation content included in the target sub-dictation content library as the dictation content to be dictated by the user. Therefore, according to the classification category of the dictation content selected by the user, the method and the device can automatically determine dictation content matched with that selection; that is, dictation content suitable for the user is determined intelligently, which is conducive to improving the user's dictation performance and, in turn, the user's dictation experience.
Description
Technical Field
The invention relates to the technical field of intelligent terminal equipment, in particular to a method and a device for determining dictation content.
Background
At present, a number of dictation applications, dictation terminals, learning applications with dictation functions and the like have appeared on the market, bringing students an intelligent dictation mode with which they can practise dictation anytime and anywhere. Taking a dictation terminal as an example, when a student needs to do a dictation exercise, the terminal can automatically read the dictation content aloud so that the student can complete the exercise. Practice shows that the content a student dictates is usually assigned by parents or teachers, or determined by the after-class exercises in textbooks; in other words, it is determined in advance. When students practise dictation on their own, it is difficult for them to select dictation content suited to their own level, which is not conducive to improving their dictation performance.
Disclosure of Invention
The embodiment of the invention discloses a method and a device for determining dictation content, which can intelligently match dictation content suitable for a user according to the dictation content selected by the user, and are beneficial to improving the dictation effect of the user.
The first aspect of the embodiment of the invention discloses a method for determining dictation content, which comprises the following steps:
Determining characteristic information of a dictation content set selected by a user, wherein the dictation content set comprises at least one dictation content, and the characteristic information of the dictation content set comprises target classification categories corresponding to the dictation content in the dictation content set;
determining a target sub dictation content library matched with the target classification category from all pre-stored sub dictation content libraries according to the mapping relation between the pre-constructed classification category and the sub dictation content library, wherein the target sub dictation content library comprises at least one dictation content;
and determining dictation content included in the target sub-dictation content library as dictation content required to be dictated by a user.
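The three steps above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the mapping table, the function names and the sample words are all assumptions for the sake of the example, not the patent's actual implementation.

```python
# Pre-constructed mapping between classification categories and
# sub-dictation-content libraries (the lookup table of step 2).
# Both the categories and the words are illustrative assumptions.
CATEGORY_TO_SUB_LIBRARY = {
    "error-prone": ["rhythm", "necessary", "conscience"],
    "verb": ["run", "jump"],
}

def set_feature_info(selected_set, category_of):
    """Step 1: the target classification categories of the user's set."""
    return {category_of(word) for word in selected_set}

def determine_dictation_content(selected_set, category_of):
    """Steps 2-3: fetch the matching sub-libraries; their content becomes
    the dictation content the user is required to dictate."""
    content = []
    for category in sorted(set_feature_info(selected_set, category_of)):
        content.extend(CATEGORY_TO_SUB_LIBRARY.get(category, []))
    return content
```

For example, if every word the user selected is tagged "error-prone", `determine_dictation_content(["necessary"], lambda w: "error-prone")` returns the whole error-prone sub-library.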
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before determining, from all pre-stored sub-dictation content libraries, a target sub-dictation content library matching the target classification category according to a mapping relationship between a pre-constructed classification category and the sub-dictation content library, the method further includes:
classifying dictation contents in a dictation content library acquired in advance according to a preset classification class set to obtain classification classes of each dictation content in the dictation content library;
and constructing a mapping relation between the classification categories and the sub-dictation content libraries based on the classification category of each dictation content in the dictation content library, wherein the dictation content library consists of all the sub-dictation content libraries.
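The construction of that mapping can be illustrated as simple grouping. In this sketch, `classify` stands in for whatever classifier applies the preset classification category set; it is an assumed interface, not the patent's.

```python
from collections import defaultdict

def build_mapping(dictation_library, classify):
    """Group every item of the library by its classification category.
    The union of the resulting sub-libraries equals the original library,
    matching the statement that the library consists of all sub-libraries."""
    mapping = defaultdict(list)
    for item in dictation_library:
        mapping[classify(item)].append(item)
    return dict(mapping)
```

For instance, `build_mapping(["apple", "run", "banana", "jump"], ...)` with a part-of-speech classifier yields one sub-library per part of speech.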
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before determining the feature information of the dictation content set selected by the user, the method further includes:
detecting whether the dictation content included in the dictation content set has been completely dictated, and, when it has, determining a dictation result of the user for the dictation content set according to the user's dictation parameters for each dictation content included in the set, wherein the dictation parameters at least comprise the dictation track of the corresponding dictation content;
judging, according to the dictation result, whether the user's degree of mastery of the dictation content included in the set reaches a preset degree threshold, and triggering the operation of determining the characteristic information of the dictation content set selected by the user when the degree of mastery is judged not to reach the threshold.
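The mastery-threshold gate above can be sketched as follows. The 0.8 threshold and the boolean per-item results are assumptions; the patent only specifies that some preset degree threshold exists.

```python
def mastery_degree(results):
    """Fraction of dictation content whose dictation parameters
    met the requirement (results maps content -> bool)."""
    return sum(results.values()) / len(results)

def should_determine_new_content(results, threshold=0.8):
    """Trigger the feature-information step only while the user's
    mastery is below the preset degree threshold."""
    return mastery_degree(results) < threshold
```

So a user who got one of three items right would be routed into the content-determination flow, while a user above the threshold would not.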
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the method further includes:
setting a weight value of a classification category included in the preset classification category set;
and after determining the dictation result of the user for the dictation content set according to the dictation parameters of the user for each dictation content included in the dictation content set, the method further comprises:
according to the dictation result, determining target dictation contents of which the dictation parameters do not meet preset dictation requirements from all dictation contents included in the dictation content set;
the target classification category specifically comprises a classification category corresponding to the target dictation content;
and after determining the dictation content included in the target sub-dictation content library as the dictation content required to be dictated by the user, the method further comprises:
detecting whether a user triggers a re-dictation operation, and screening a plurality of dictation contents matched with the target dictation contents from the target sub-dictation content library according to screening proportion corresponding to the weight value of the target classification class when the user triggers the re-dictation operation, wherein the plurality of dictation contents are used as dictation contents corresponding to the re-dictation operation;
And responding to the re-dictation operation, and outputting dictation content corresponding to the re-dictation operation by voice.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the dictation parameter further includes a dictation duration corresponding to the dictation content and/or a dictation number corresponding to the dictation content;
the number of dictation times of each dictation content included in the dictation content set by a user is equal to the sum of the number of output times of the dictation content and the number of output times of the prompt information corresponding to the dictation content, and the prompt information corresponding to each dictation content included in the dictation content set is used for prompting the corresponding dictation content to the user.
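The dictation-count definition above is a simple sum, shown here with illustrative parameter names:

```python
def dictation_times(content_output_times, prompt_output_times):
    """Number of dictation times for one dictation content: the number of
    times the content itself was output plus the number of times its
    corresponding prompt information was output."""
    return content_output_times + prompt_output_times

# e.g. content read aloud twice and prompted once -> 3 dictation times
times = dictation_times(content_output_times=2, prompt_output_times=1)
```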
The second aspect of the embodiment of the invention discloses a device for determining dictation content, which comprises:
the first determining module is used for determining characteristic information of a dictation content set selected by a user, wherein the dictation content set comprises at least one dictation content, and the characteristic information of the dictation content set comprises target classification categories corresponding to the dictation content in the dictation content set;
the second determining module is used for determining a target sub-dictation content library matched with the target classification category from all pre-stored sub-dictation content libraries according to the mapping relation between the pre-constructed classification category and the sub-dictation content library, wherein the target sub-dictation content library comprises at least one dictation content;
And the third determining module is used for determining the dictation content included in the target sub-dictation content library as the dictation content required to be dictated by the user.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the apparatus further includes:
the classification module is used for classifying the dictation content in the pre-acquired dictation content library according to a preset classification class set before the second determination module determines a target sub-dictation content library matched with the target classification class from all pre-stored sub-dictation content libraries according to the mapping relation between the pre-constructed classification class and the sub-dictation content library, so as to obtain the classification class of each dictation content in the dictation content library;
and the construction module is used for constructing a mapping relation between the classification category and the sub dictation content library based on the classification category of each dictation content in the dictation content library and the dictation content library, and the dictation content library consists of all the sub dictation content libraries.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the apparatus further includes:
the detection module is used for detecting whether the dictation content included in the dictation content set is completely dictated or not;
The third determining module is further configured to determine, when the detecting module detects that the dictation content included in the dictation content set is completely dictated, a dictation result of the user for the dictation content set according to a dictation parameter of the user for each dictation content included in the dictation content set, where the dictation parameter at least includes a dictation track of a corresponding dictation content;
the judging module is used for judging whether the grasping degree of the user on the dictation content included in the dictation content set reaches a preset degree threshold value or not according to the dictation result;
the first determining module is specifically configured to:
and when the judging module judges that the mastering degree does not reach the preset degree threshold, determining the characteristic information of the dictation content set selected by the user.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the apparatus further includes:
the setting module is used for setting the weight value of the classification category included in the preset classification category set;
the third determining module is further configured to, after the dictation result of the user for the dictation content set is determined, determine, according to the dictation result, the target dictation content whose dictation parameters do not meet the preset dictation requirement from all the dictation content included in the dictation content set;
The target classification category specifically comprises a classification category corresponding to the target dictation content;
the detection module is further configured to detect whether a user triggers a re-dictation operation after the third determination module determines dictation content included in the target sub-dictation content library as dictation content that needs to be dictated by the user;
the screening module is used for screening a plurality of dictation contents matched with the target dictation content from the target sub-dictation content library according to a screening proportion corresponding to the weight value of the target classification class when the detection module detects the re-dictation operation triggered by the user, and the dictation contents are used as dictation contents corresponding to the re-dictation operation;
and the voice output module is used for responding to the re-dictation operation and outputting dictation content corresponding to the re-dictation operation by voice.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the dictation parameter further includes a dictation duration corresponding to the dictation content and/or a dictation number corresponding to the dictation content;
the number of dictation times of each dictation content included in the dictation content set by a user is equal to the sum of the number of output times of the dictation content and the number of output times of the prompt information corresponding to the dictation content, and the prompt information corresponding to each dictation content included in the dictation content set is used for prompting the corresponding dictation content to the user.
In a third aspect, an embodiment of the present invention discloses another dictation content determining apparatus, where the apparatus includes:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform all or part of the steps in any one of the dictation content determining methods disclosed in the first aspect of the embodiment of the invention.
A fourth aspect of the embodiment of the present invention discloses a computer-readable storage medium, which is characterized in that it stores a computer program for electronic data exchange, where the computer program causes a computer to execute all or part of the steps in any one of the methods for determining dictation content disclosed in the first aspect of the embodiment of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any one of the dictation content determination methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the characteristic information of the dictation content set selected by the user is determined, the dictation content set comprises at least one dictation content, and the characteristic information of the dictation content set comprises the target classification category corresponding to the dictation content in the dictation content set; determining a target sub dictation content library matched with the target classification category from all pre-stored sub dictation content libraries according to the mapping relation between the pre-constructed classification category and the dictation content library, wherein the target sub dictation content library comprises at least one dictation content; dictation content included in the target sub-dictation content library is determined as dictation content requiring dictation by the user. Therefore, by implementing the embodiment of the invention, the dictation content matched with the dictation content selected by the user can be automatically determined according to the classification category of the dictation content selected by the user, namely the dictation content suitable for the user can be intelligently determined according to the dictation content selected by the user, so that the dictation effect of the user is improved, and further the dictation experience of the user is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a dictation content determining method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another dictation content determining method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a dictation content determining apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another dictation content determining apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of still another dictation content determining apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a dictation content determining method and device, which can automatically determine dictation content matched with dictation content selected by a user according to classification categories of the dictation content selected by the user, namely intelligently determine dictation content suitable for the user according to the dictation content selected by the user, thereby being beneficial to improving dictation effect of the user and further improving dictation experience of the user. The following will describe in detail.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of a method for determining dictation according to an embodiment of the invention. The method described in fig. 1 may be applied to any user terminal with dictation control function, such as a smart phone (Android mobile phone, iOS mobile phone, etc.), a tablet computer, a palm computer, an intelligent wearable device, a mobile internet device (Mobile Internet Devices, MID), etc., which is not limited in the embodiment of the present invention. As shown in fig. 1, the method for determining dictation may include the following operations:
101. The user terminal determines characteristic information of a dictation content set selected by a user, wherein the dictation content set comprises at least one dictation content, and the characteristic information of the dictation content set comprises the target classification categories corresponding to the dictation content in the dictation content set.
In the embodiment of the invention, the dictation content set is the set formed by all the dictation content selected when a user has a dictation requirement or needs to do autonomous dictation exercise; optionally, the dictation content set can be embodied in the form of a dictation content list. The classification category of dictation content can include at least one of a part-of-speech category, an error-prone point category, a compendium requirement category, a content composition characteristic category and the like, and the content composition characteristic category can be subdivided into a structural category obtained by structural division, a theme category obtained by theme division and the like, which the embodiment of the invention does not limit. Optionally, the characteristic information of the dictation content set may further include the school subject to which the dictation content belongs, and the target classification category corresponding to the dictation content may specifically be the target classification category corresponding to the dictation content under that subject. It should be noted that the classification categories of dictation content may be the same or different for different subjects, which the embodiment of the invention does not limit.
102. And the user terminal determines a target sub-dictation content library matched with the target classification category from all pre-stored sub-dictation content libraries according to the mapping relation between the pre-constructed classification category and the sub-dictation content library, wherein the target sub-dictation content library comprises at least one dictation content.
In the embodiment of the invention, the user terminal can pre-construct the mapping relation between the classification category and the sub dictation content library so as to automatically determine other dictation contents matched with the classification category of the dictation content selected by the user when the user of the user terminal performs autonomous dictation exercise.
103. The user terminal determines dictation contents included in the target sub-dictation content library as dictation contents required to be dictated by the user.
In an alternative embodiment, before performing step 101, the method for determining dictation may further include the following operations:
the user terminal determines the identity type of the selector of the dictation content set;
when the identity type of the selector indicates that the selector is a user with dictation requirements, execution of step 101 is triggered.
Further optionally, the determining, by the user terminal, an identity type of the selector of the dictation content set may include:
The user terminal judges whether the set identifier corresponding to the dictation content set is sent by a control terminal establishing a dictation control relation with the user terminal;
when judging that the set identifier corresponding to the dictation content set is not sent by the control terminal, the user terminal determines that the identity type of the selector of the dictation content set indicates that the selector is a user with dictation requirements;
when the set identifier corresponding to the dictation content set is judged to be sent by the control terminal, the user terminal determines that the identity type of the selector of the dictation content set indicates that the selector is not the user with dictation requirements.
Still further optionally, after determining that the set identifier corresponding to the dictation content set is not sent by the control terminal, the method for determining dictation content may further include the following operations:
the user terminal judges whether the operation information for selecting the selection operation of the dictation content set comprises user identity information;
when the user identity information is included and is the user identity information of the user with dictation requirements, the user terminal executes the above determination that the identity type of the selector of the dictation content set indicates that the selector is the user with dictation requirements;
When the user identity information is included and is the user identity information of the supervisor (such as a parent or a teacher) corresponding to the user with the dictation requirement, the user terminal determines that the identity type of the selector of the dictation content set indicates that the selector is not the user with the dictation requirement.
Still further optionally, the method for determining dictation may further include:
when the operation information for selecting the selection operation of the dictation content collection does not include user identity information, the user terminal determines the identity type of the selector of the dictation content collection based on the scene parameter identified when the selection operation is detected. The scene parameter may include at least one of a geographical position parameter, a current time parameter, an environmental sound parameter, and the like in the current scene where the selection operation is detected, which is not limited in the embodiment of the present invention.
Therefore, the optional embodiment can judge whether the dictation content set is selected by the user with the dictation requirement before determining the characteristic information of the dictation content set selected by the user, if so, the subsequent operation is executed, so that unnecessary operations can be reduced only by determining the proper dictation content for the user when the user autonomously dictates, and the accuracy and the reliability of determining the dictation content are improved.
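The selector-identity decision flow of this optional embodiment can be condensed into one function. The parameter names and the scene-parameter fallback rule are illustrative assumptions; the patent leaves the concrete scene rule open.

```python
def selector_is_dictating_user(set_id_from_control_terminal,
                               identity=None, scene_params=None):
    """True when the dictation content set is taken to have been selected
    by the user with the dictation requirement (e.g. the student)."""
    if set_id_from_control_terminal:
        return False                  # set identifier sent by the control terminal
    if identity is not None:
        return identity == "student"  # supervisor (parent/teacher) -> False
    # No identity in the operation info: fall back to scene parameters
    # (geographical position, current time, ambient sound, ...).
    return (scene_params or {}).get("location") == "home"  # illustrative rule only
```

Only when this returns `True` would step 101 (determining the feature information of the selected set) be triggered.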
Therefore, the method for determining the dictation content described in fig. 1 can automatically determine the dictation content matched with the dictation content selected by the user according to the classification category of the dictation content selected by the user, namely, the method can intelligently determine the dictation content suitable for the user according to the dictation content selected by the user, is beneficial to improving the dictation effect of the user, and further can improve the dictation experience of the user.
Example two
Referring to fig. 2, fig. 2 is a flowchart illustrating another method for determining dictation according to an embodiment of the invention. The method described in fig. 2 may be applied to any user terminal with dictation control function, such as a smart phone (Android mobile phone, iOS mobile phone, etc.), a tablet computer, a palm computer, an intelligent wearable device, a mobile internet device (Mobile Internet Devices, MID), etc., which is not limited in the embodiment of the present invention. As shown in fig. 2, the method for determining dictation may include the following operations:
201. the user terminal classifies the dictation content in the dictation content library acquired in advance according to a preset classification class set to obtain the classification class of each dictation content in the dictation content library.
In the embodiment of the present invention, the preset classification category set may include at least one of a part-of-speech category, an error prone point category, a compendium requirement category, a content composition feature category, and the like. Optionally, according to different division rules, the content composition feature categories may be subdivided into structural categories obtained by dividing according to word structures, topic categories obtained by dividing according to topics, and the like, which is not limited by the embodiment of the present invention. The dictation content library obtained in advance is a dictation content library matching with the learning ability of the user or the dictation ability of the user.
202. The user terminal constructs a mapping relation between the classification category and the sub dictation content library based on the classification category and the dictation content library of each dictation content in the dictation content library, and the dictation content library consists of all the sub dictation content libraries.
203. The user terminal sets a weight value of the classification category included in the preset classification category set.
In the embodiment of the invention, different classification categories can have different weight values; the higher the weight value, the greater the deciding role of the corresponding classification category in determining dictation content, and the greater the screening proportion of dictation content of that category. Furthermore, the user terminal may set different weight values for the different types of content constituent feature categories included in the content constituent feature category, in which case the higher the weight value, the greater the deciding role of the corresponding content constituent feature category and the greater its screening proportion, which the embodiment of the invention does not limit.
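One plausible reading of "screening proportion corresponding to the weight value" is a simple normalization of the weights, sketched below; the patent does not fix the exact formula, so this is an assumption.

```python
def screening_proportions(weights):
    """Turn per-category weight values into screening proportions:
    a higher weight yields a larger share of the screened
    re-dictation content."""
    total = sum(weights.values())
    return {category: w / total for category, w in weights.items()}
```

For example, `screening_proportions({"error-prone": 3, "part-of-speech": 1})` would allot the error-prone category 75% of the content screened from the target sub-library.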
It should be noted that there is no fixed execution order between step 203 and steps 201 to 202: step 203 may be performed before or after step 201, or before or after step 202, which is not limited in the embodiment of the present invention.
204. The user terminal outputs, by voice, the dictation content included in the dictation content set selected by the user, for the user to dictate.
In an embodiment of the present invention, the user-selected dictation content set includes at least one dictation content.
In the embodiment of the present invention, it should be noted that, when the user terminal needs to output one of the dictation contents by voice, the user terminal may specifically perform the following operations:
the user terminal reads the dictation content aloud by voice;
after the dictation content has been read aloud, the user terminal judges whether the user has confirmed it, based on whether the user triggers a re-output operation for it, on the user's facial expression information while dictating it, or on the dictation track corresponding to it;
when it judges that the user has not confirmed the dictation content, the user terminal determines the prompt information corresponding to it and outputs that prompt information, so as to prompt the user with the dictation content.
In the embodiment of the invention, the user terminal may output the prompt information of the dictation content by voice and/or as text. When the prompt information is output as text, it may be shown in a pop-up text box. Optionally, when the prompt information is output as text and part or all of it is identical to the dictation content itself, the identical part is processed in a preset manner, for example by applying a mosaic effect, so that the answer is not revealed directly.
Further optionally, after determining that the user does not confirm the one of the dictation contents, the user terminal may further perform the following operations:
judging whether the total number of times the prompt information corresponding to the dictation content has been output exceeds a preset times threshold; when the total number does not exceed the threshold, the operation of determining the prompt information corresponding to the dictation content is executed again; when it does exceed the threshold, the next dictation content is read aloud by voice.
It should be noted that the prompt information output for a given dictation content is different each time.
Therefore, in the embodiment of the invention, after dictation content has been output by voice, if it is judged that the user cannot confirm it, the prompt information corresponding to that content is output, so that the user can quickly and accurately identify the content that has been read aloud, improving the user's dictation efficiency and accuracy. In addition, setting a maximum number of prompts avoids the low dictation efficiency that would result from a user being unable to confirm the dictation content for a long time.
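The read-aloud and prompt loop described above can be sketched as follows; this is only one plausible reading, with the confirmation check, prompt generation, and threshold value left as hypothetical callables and constants rather than anything the patent prescribes:

```python
MAX_PROMPTS = 2  # hypothetical preset times threshold

def dictate_one(content, speak, user_confirmed, make_prompt):
    """Read one dictation item aloud; while the user cannot confirm it,
    output a different prompt each round, up to MAX_PROMPTS, then move on."""
    speak(content)                 # read the item aloud by voice
    prompts_given = 0
    while not user_confirmed(content):
        if prompts_given >= MAX_PROMPTS:
            return False           # give up; the next item will be read
        speak(make_prompt(content, prompts_given))  # prompts differ each time
        prompts_given += 1
    return True
```

Passing the prompt index to `make_prompt` is one simple way to satisfy the requirement that each output prompt for a given dictation content be different.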
205. The user terminal determines characteristic information of the dictation content set selected by the user, wherein the characteristic information of the dictation content set comprises target classification categories corresponding to the dictation content in the dictation content set.
206. The user terminal determines a target sub-dictation content library matching the target classification category from all pre-stored sub-dictation content libraries according to the pre-constructed mapping relation between classification categories and sub-dictation content libraries, where the target sub-dictation content library includes at least one dictation content.
207. The user terminal determines dictation contents included in the target sub-dictation content library as dictation contents required to be dictated by the user.
In the embodiment of the present invention, for the detailed description of steps 205 to 207, please refer to the detailed description of steps 101 to 103 in the first embodiment, which is not repeated here.
In an alternative embodiment, the method for determining dictation content may further include the following operations:
208. The user terminal detects whether the user triggers a re-dictation operation; when the detection result of step 208 is yes, step 209 is triggered; when the detection result of step 208 is no, the current flow may end.
209. The user terminal screens a plurality of dictation contents matched with the target dictation contents from the target sub-dictation content library according to the screening proportion corresponding to the weight value of the target classification category, and the dictation contents are used as dictation contents corresponding to the re-dictation operation.
The higher the weight value of a classification category, the larger the share, among the plurality of dictation contents matched with the target dictation content, of contents of that category screened from the target sub-dictation content library, so that the user can practice dictation in a targeted way.
210. The user terminal responds to the re-dictation operation and outputs the dictation content corresponding to the re-dictation operation through voice.
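The screening proportion of step 209 is described only qualitatively. One plausible interpretation, with all weight values, category names, and the normalization scheme being assumptions of this sketch, is to allocate the re-dictation set across the target classification categories in proportion to their weights:

```python
def screen_by_weight(sub_library, weights, total):
    """Take from each classification category a share of `total` items
    proportional to that category's weight value (its screening proportion)."""
    weight_sum = sum(weights.values())
    selected = []
    for category in sorted(weights, key=weights.get, reverse=True):
        quota = round(total * weights[category] / weight_sum)
        pool = [item for item in sub_library
                if category in item["categories"] and item not in selected]
        selected.extend(pool[:quota])
    return selected

# Hypothetical target sub-dictation content library and weight values.
sub_library = [
    {"text": "receive",  "categories": ["error_prone"]},
    {"text": "separate", "categories": ["error_prone"]},
    {"text": "occasion", "categories": ["error_prone"]},
    {"text": "library",  "categories": ["syllabus"]},
    {"text": "station",  "categories": ["syllabus"]},
]
picked = screen_by_weight(sub_library, {"error_prone": 3, "syllabus": 1}, 4)
```

With a weight ratio of 3:1 and four items requested, three items come from the higher-weighted category and one from the lower-weighted one, matching the rule that a higher weight yields a larger screening proportion.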
Therefore, the embodiment of the invention can intelligently construct the mapping relation between classification categories and sub-dictation content libraries based on the classification categories of different dictation contents. When the user performs autonomous dictation training, matched dictation content can then be intelligently screened for the user based on the content the user selected, enabling targeted dictation practice and helping to improve the user's dictation results.
In another alternative embodiment, after step 204 is completed and before step 205 is performed, the method for determining dictation content may further include the following operations:
the user terminal detects whether the dictation content included in the dictation content set is completely dictated;
when it is detected that dictation of the content included in the dictation content set is finished, the user terminal determines the user's dictation result for the set according to the user's dictation parameters for each dictation content in the set, wherein the dictation parameters at least include the dictation track of the corresponding dictation content;
the user terminal judges whether the mastering degree of the dictation content included in the dictation content set reaches a preset degree threshold value or not according to the dictation result;
when it is determined that the user's mastery degree of the dictation content included in the dictation content set does not reach the preset degree threshold, execution of step 205 is triggered.
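The gate on step 205 described in the operations above can be sketched as a simple threshold check. How the degree of mastery is computed from dictation results is not specified by the patent; this sketch assumes, purely for illustration, that it is the fraction of items whose dictation track matched the preset correct track:

```python
DEGREE_THRESHOLD = 0.8  # hypothetical preset degree threshold

def mastery_degree(results):
    """Assumed measure of mastery: fraction of items whose dictation
    track matched the preset correct track."""
    return sum(1 for r in results if r["track_correct"]) / len(results)

def should_pick_new_content(results):
    # Step 205 is triggered only when mastery falls short of the threshold.
    return mastery_degree(results) < DEGREE_THRESHOLD

# Hypothetical per-item dictation results for one dictation content set.
results = [
    {"text": "receive",  "track_correct": True},
    {"text": "separate", "track_correct": False},
    {"text": "library",  "track_correct": True},
]
```

Here two of three tracks are correct, the mastery degree falls below the threshold, and step 205 would be triggered.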
In this optional embodiment, further optionally, after determining the dictation result of the user for the dictation content set according to the dictation parameter of the user for each dictation content included in the dictation content set, the method for determining the dictation content may further include:
the user terminal determines, from all the dictation contents included in the dictation content set and according to the dictation result, target dictation contents whose dictation parameters do not meet the preset dictation requirement; the target classification category corresponding to the dictation content in the set may include the classification category corresponding to these target dictation contents.
In the embodiment of the present invention, the user terminal detecting whether the dictation content included in the dictation content set is dictation-finished may include:
the user terminal detects whether all the dictation contents included in the dictation content set have been output by voice, and when the detection result is yes, determines that dictation of the content in the set is finished; or,
the user terminal detects whether an end-dictation instruction for the dictation content set is received, and when such an instruction is detected, determines that dictation of the content in the set is finished.
Therefore, this alternative embodiment can intelligently determine the user's dictation result after the user finishes dictating, and judge from that result the user's degree of mastery of the dictation content. When the degree of mastery does not meet the requirement, suitable dictation content is selected for further practice, namely content matching the classification category of the target dictation content whose dictation parameters failed the dictation requirement. This improves the pertinence of the determined dictation content, facilitates targeted practice, and consolidates and strengthens the user's dictation performance on, and mastery of, unfamiliar content.
In yet another alternative embodiment, the dictation parameters further include the dictation duration corresponding to the dictation content and/or the dictation number corresponding to the dictation content. The user's dictation number for each dictation content in the set equals the sum of the number of times that content was output and the number of times its corresponding prompt information was output; the prompt information corresponding to each dictation content in the set is used to prompt the user with that content.
Optionally, target dictation content whose dictation parameters do not meet the preset dictation requirement may be content whose dictation track does not match the preset correct track, whose dictation number exceeds a preset count, and/or whose dictation duration exceeds a preset duration, depending on which items the dictation parameters actually include.
Therefore, this alternative embodiment can determine the user's dictation result comprehensively from the dictation track together with at least one of the dictation duration and the dictation number, improving the accuracy of the determined dictation result and, in turn, of the determined degree of mastery. In addition, when the dictation parameters further include the dictation number, taking as the dictation number the sum of the number of times the content was output and the number of times its prompt information was output further improves the accuracy of the dictation result determined on that basis.
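The per-item requirement check over whichever dictation parameters are present can be sketched as below; the field names, thresholds, and sample data are all hypothetical, and absent parameters are simply skipped, matching the statement that the check depends on what the dictation parameters include:

```python
MAX_TIMES = 3        # hypothetical preset number-of-times threshold
MAX_SECONDS = 30.0   # hypothetical preset duration threshold

def fails_requirement(params):
    """An item fails the preset dictation requirement if its track is wrong,
    its dictation number (output count + prompt output count) is too high,
    or its dictation duration is too long; missing fields are skipped."""
    if not params.get("track_correct", True):
        return True
    # dictation number = content output count + prompt output count
    times = params.get("output_count", 0) + params.get("prompt_count", 0)
    if times > MAX_TIMES:
        return True
    if params.get("duration", 0.0) > MAX_SECONDS:
        return True
    return False

def target_dictation_contents(dictation_set):
    """Pick the target dictation contents whose parameters fail."""
    return [item["text"] for item in dictation_set
            if fails_requirement(item["params"])]

# Hypothetical dictation content set with per-item dictation parameters.
dictation_set = [
    {"text": "receive",
     "params": {"track_correct": True, "output_count": 1,
                "prompt_count": 0, "duration": 8.0}},
    {"text": "separate",
     "params": {"track_correct": False, "output_count": 1,
                "prompt_count": 2, "duration": 40.0}},
    {"text": "library",
     "params": {"track_correct": True, "output_count": 2,
                "prompt_count": 2, "duration": 12.0}},
]
```

In this sample, "separate" fails on its track and duration, and "library" fails because its dictation number (two outputs plus two prompts) exceeds the threshold.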
Therefore, implementing the method for determining dictation content described in fig. 2 makes it possible to intelligently construct the mapping relation between classification categories and sub-dictation content libraries based on the classification categories of different dictation contents, so that when the user performs autonomous dictation training, matched dictation content can be intelligently screened for the user based on the content the user selected, enabling targeted practice and helping to improve the user's dictation results.
Example III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a dictation content determining apparatus according to an embodiment of the invention. The apparatus described in fig. 3 may be applied to any user terminal with a dictation control function, such as a smart phone (an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, an intelligent wearable device, or a mobile internet device (Mobile Internet Devices, MID), which is not limited in the embodiment of the present invention. As shown in fig. 3, the dictation content determining apparatus may include:
the first determining module 301 is configured to determine feature information of a dictation content set selected by a user, where the dictation content set includes at least one dictation content, and the feature information of the dictation content set includes a target classification category corresponding to the dictation content in the dictation content set.
The second determining module 302 is configured to determine, from all pre-stored sub-dictation content libraries, a target sub-dictation content library matching the target classification category according to the mapping relationship between the pre-constructed classification category and the sub-dictation content library, where the target sub-dictation content library includes at least one dictation content.
And a third determining module 303, configured to determine dictation content included in the target sub-dictation content library as dictation content that needs to be dictated by the user.
Therefore, the apparatus described in fig. 3 can automatically determine dictation content matching the content selected by the user according to the classification category of that content; that is, it can intelligently determine dictation content suitable for the user based on the user's own selection, which helps improve the user's dictation results and, in turn, the user's dictation experience.
In an alternative embodiment, as shown in fig. 4, the dictation determining apparatus may further include:
the classification module 304 is configured to classify, according to a preset classification category set, dictation contents in the dictation content library obtained in advance before the second determination module 302 determines a target sub-dictation content library matching with the target classification category from all sub-dictation content libraries stored in advance according to the mapping relationship between the pre-constructed classification category and the sub-dictation content library, so as to obtain a classification category of each dictation content in the dictation content library.
The construction module 305 is configured to construct a mapping relationship between the classification category and the sub-dictation content library based on the classification category of each dictation content in the dictation content library and the dictation content library, where the dictation content library is composed of all sub-dictation content libraries.
Therefore, the apparatus described in fig. 4 can also intelligently construct the mapping relation between classification categories and sub-dictation content libraries based on the classification categories of different dictation contents, so that when the user performs autonomous dictation training, matched dictation content can be intelligently screened for the user based on the content the user selected, enabling targeted practice and helping to improve the user's dictation results.
In yet another alternative embodiment, as shown in fig. 4, the apparatus may further include:
the detecting module 306 is configured to detect whether the dictation content included in the dictation content set is completely dictated.
The third determining module 303 is further configured to determine, when the detecting module 306 detects that the dictation content included in the dictation content set is completely dictated, a dictation result of the user for the dictation content set according to a dictation parameter of the user for each dictation content included in the dictation content set, where the dictation parameter at least includes a dictation track of the corresponding dictation content.
The judging module 307 is configured to judge whether the user's mastering level of the dictation content included in the dictation content set reaches a preset level threshold according to the dictation result.
The first determining module 301 is specifically configured to:
when the judgment module 307 judges that the grasping degree does not reach the preset degree threshold, the feature information of the dictation content set selected by the user is determined.
In yet another optional embodiment, the third determining module 303 is further configured to, after determining the user's dictation result for the dictation content set according to the user's dictation parameters for each dictation content in the set, determine from all the dictation contents in the set, according to the dictation result, target dictation content whose dictation parameters do not meet the preset dictation requirement.
The target classification category specifically comprises a classification category corresponding to the target dictation content.
In yet another alternative embodiment, as shown in fig. 4, the apparatus may further include:
the setting module 308 is configured to set a weight value of a classification category included in the preset classification category set.
The detection module 306 is further configured to detect whether the user triggers a re-dictation operation after the third determination module 303 determines that the dictation content included in the target sub-dictation content library is dictation content that needs to be dictated by the user.
And a screening module 309, configured to, when the detection module 306 detects a re-dictation operation triggered by the user, screen, according to the screening proportion corresponding to the weight value of the target classification category, a plurality of dictation contents matching the target dictation content from the target sub-dictation content library, as the dictation contents corresponding to the re-dictation operation.
The voice output module 310 is configured to respond to the re-dictation operation and output, by voice, the dictation content corresponding to the re-dictation operation.
Therefore, the apparatus described in fig. 4 can also intelligently determine the user's dictation result after the user finishes dictating, and judge from that result the user's degree of mastery of the dictation content. When the degree of mastery does not meet the requirement, suitable dictation content is selected for further practice, namely content matching the classification category of the target dictation content whose dictation parameters failed the dictation requirement, which improves the pertinence of the determined dictation content, facilitates targeted practice, and consolidates and strengthens the user's dictation performance on, and mastery of, unfamiliar content.
In yet another alternative embodiment, the dictation parameters further include the dictation duration corresponding to the dictation content and/or the dictation number corresponding to the dictation content. The user's dictation number for each dictation content in the set equals the sum of the number of times that content was output and the number of times its corresponding prompt information was output; the prompt information corresponding to each dictation content in the set is used to prompt the user with that content.
It can be seen that the apparatus described in fig. 4 determines the user's dictation result comprehensively from the dictation track together with at least one of the dictation duration and the dictation number, improving the accuracy of the determined dictation result and, in turn, of the determined degree of mastery. In addition, when the dictation parameters further include the dictation number, taking as the dictation number the sum of the number of times the content was output and the number of times its prompt information was output further improves the accuracy of the dictation result determined on that basis.
Example IV
Referring to fig. 5, fig. 5 is a schematic structural diagram of a dictation content determining apparatus according to an embodiment of the invention. As shown in fig. 5, the dictation content determining apparatus may include:
a memory 501 in which executable program codes are stored;
a processor 502 coupled to the memory 501;
the processor 502 invokes the executable program code stored in the memory 501 to perform the steps of the method for determining dictation content described in fig. 1 or fig. 2.
Example five
The embodiment of the invention discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute steps in a method for determining dictation described in fig. 1 or fig. 2.
Example six
Embodiments of the present invention disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the steps of the method of determining dictation described in fig. 1 or fig. 2.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not be construed as limiting the implementation of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B may be determined. It should also be understood that determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units. The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc., in particular a processor in the computer device) to execute some or all of the steps of the methods of the various embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, the program may be stored in a computer readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk Memory, magnetic disk Memory, tape Memory, or any other medium that can be used for carrying or storing data that is readable by a computer.
The foregoing describes in detail a method and apparatus for determining dictation content disclosed in the embodiments of the present invention. Specific examples have been applied herein to illustrate the principles and implementations of the present invention, and the above description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, since those skilled in the art will make changes to the specific implementations and application scope in accordance with the ideas of the present invention, the content of this description should not be construed as limiting the present invention.
Claims (8)
1. A method for determining dictation content, the method comprising:
detecting whether dictation of the dictation content included in a dictation content set is finished, and when it is detected that the dictation is finished, determining a dictation result of a user for the dictation content set according to dictation parameters of the user for each dictation content included in the dictation content set, wherein the dictation parameters at least comprise a dictation track of the corresponding dictation content;
judging whether the mastering degree of the dictation content included in the dictation content set reaches a preset degree threshold or not according to the dictation result, and determining characteristic information of the dictation content set selected by the user when the mastering degree is not judged to reach the preset degree threshold, wherein the dictation content set comprises at least one dictation content, and the characteristic information of the dictation content set comprises target classification types corresponding to the dictation content in the dictation content set;
determining a target sub dictation content library matched with the target classification category from all pre-stored sub dictation content libraries according to the mapping relation between the pre-constructed classification category and the sub dictation content library, wherein the target sub dictation content library comprises at least one dictation content;
And determining dictation content included in the target sub-dictation content library as dictation content required to be dictated by a user.
2. The method for determining dictation content according to claim 1, wherein before determining a target sub-dictation content library matching the target classification category from all sub-dictation content libraries stored in advance according to a mapping relation between a pre-constructed classification category and the sub-dictation content library, the method further comprises:
classifying dictation contents in a dictation content library acquired in advance according to a preset classification class set to obtain classification classes of each dictation content in the dictation content library;
and constructing a mapping relation between the classification category and the sub dictation content library based on the classification category of each dictation content in the dictation content library and the dictation content library, wherein the dictation content library consists of all the sub dictation content libraries.
3. The method for determining dictation content according to claim 1, further comprising:
setting a weight value of a classification category included in a preset classification category set;
and after determining the dictation result of the user for the dictation content set according to the dictation parameters of the user for each dictation content included in the dictation content set, the method further comprises:
According to the dictation result, determining target dictation contents of which the dictation parameters do not meet preset dictation requirements from all dictation contents included in the dictation content set;
the target classification category specifically comprises a classification category corresponding to the target dictation content;
and after determining the dictation content included in the target sub-dictation content library as the dictation content required to be dictated by the user, the method further comprises:
detecting whether a user triggers a re-dictation operation, and screening a plurality of dictation contents matched with the target dictation contents from the target sub-dictation content library according to screening proportion corresponding to the weight value of the target classification class when the user triggers the re-dictation operation, wherein the plurality of dictation contents are used as dictation contents corresponding to the re-dictation operation;
and responding to the re-dictation operation, and outputting dictation content corresponding to the re-dictation operation by voice.
4. A method for determining dictation content according to claim 1 or 3, wherein the dictation parameters further comprise a dictation duration corresponding to the dictation content and/or a dictation number corresponding to the dictation content;
the number of dictation times of each dictation content included in the dictation content set by a user is equal to the sum of the number of output times of the dictation content and the number of output times of the prompt information corresponding to the dictation content, and the prompt information corresponding to each dictation content included in the dictation content set is used for prompting the corresponding dictation content to the user.
5. A dictation content determining apparatus, the apparatus comprising:
the detection module is used for detecting whether the dictation content included in the dictation content set is completely dictated or not;
the third determining module is used for determining the dictation result of the user for the dictation content set according to the dictation parameters of the user for each dictation content included in the dictation content set when the detecting module detects that the dictation content included in the dictation content set is completely dictated, wherein the dictation parameters at least comprise the dictation track of the corresponding dictation content;
the judging module is used for judging whether the grasping degree of the user on the dictation content included in the dictation content set reaches a preset degree threshold value or not according to the dictation result;
the first determining module is used for determining characteristic information of the dictation content set selected by a user when the judging module judges that the mastering degree does not reach the preset degree threshold, wherein the dictation content set comprises at least one dictation content, and the characteristic information of the dictation content set comprises target classification types corresponding to the dictation content in the dictation content set;
the second determining module is used for determining a target sub-dictation content library matched with the target classification category from all pre-stored sub-dictation content libraries according to the mapping relation between the pre-constructed classification category and the sub-dictation content library, wherein the target sub-dictation content library comprises at least one dictation content;
and the third determining module is further configured to determine the dictation content included in the target sub dictation content library as dictation content that needs to be dictated by the user.
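As an illustration only (not part of the claims), the selection logic of claim 5 can be sketched in Python. All names and data shapes below are hypothetical; the patent does not prescribe any concrete implementation:

```python
def determine_redictation_content(dictation_set, mastery, threshold, category_to_sublibrary):
    """If the user's degree of mastery of the dictation content set is below
    the preset degree threshold, return the contents of the sub dictation
    content library matching the set's target classification category."""
    if mastery >= threshold:
        return []  # preset degree threshold reached; no re-dictation needed
    target_category = dictation_set["category"]
    # look up the pre-constructed mapping between classification categories
    # and sub dictation content libraries
    return list(category_to_sublibrary.get(target_category, []))
```

Here the returned list plays the role of "dictation content that needs to be dictated by the user".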
6. The dictation content determining apparatus according to claim 5, further comprising:
the classification module is used for classifying, before the second determining module determines the target sub dictation content library matched with the target classification category, the dictation content in a pre-acquired dictation content library according to a preset classification category set, so as to obtain the classification category of each dictation content in the dictation content library;
and the construction module is used for constructing the mapping relation between classification categories and sub dictation content libraries based on the classification category of each dictation content in the dictation content library and the dictation content library itself, wherein the dictation content library consists of all the sub dictation content libraries.
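A minimal sketch of claim 6's construction step, assuming each item is assigned exactly one classification category (the function name and the classifier are hypothetical). Partitioning the library this way guarantees that the union of the sub-libraries is the whole library, as the claim requires:

```python
from collections import defaultdict

def build_category_mapping(content_library, classify):
    """Classify each item of the pre-acquired dictation content library with
    a preset classifier, yielding sub dictation content libraries keyed by
    classification category; their union is the whole library."""
    mapping = defaultdict(list)
    for item in content_library:
        mapping[classify(item)].append(item)
    return dict(mapping)
```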
7. The dictation content determining apparatus according to claim 5, further comprising:
the setting module is used for setting a weight value for each classification category included in the preset classification category set;
the third determining module is further configured to determine, after determining the dictation result of the user for the dictation content set, target dictation content whose dictation parameters do not meet a preset dictation requirement from all dictation content included in the dictation content set, according to the dictation result;
wherein the target classification category is the classification category corresponding to the target dictation content;
the detection module is further configured to detect whether the user triggers a re-dictation operation after the third determining module determines the dictation content included in the target sub dictation content library as the dictation content that needs to be dictated by the user;
the screening module is used for screening, when the detection module detects the re-dictation operation triggered by the user, a plurality of dictation contents matched with the target dictation content from the target sub dictation content library according to a screening proportion corresponding to the weight value of the target classification category, as the dictation content corresponding to the re-dictation operation;
and the voice output module is used for outputting, by voice and in response to the re-dictation operation, the dictation content corresponding to the re-dictation operation.
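The weight-based screening of claim 7 could look like the following sketch. The rule "screening proportion equals the weight value, clamped to [0, 1]" is an assumption for illustration; the patent only states that the proportion corresponds to the weight value:

```python
import math

def screen_by_weight(target_sublibrary, weight):
    """Screen from the target sub dictation content library a number of
    dictation contents proportional to the weight value of the target
    classification category (claim 7 sketch; proportion rule is assumed)."""
    proportion = max(0.0, min(weight, 1.0))
    count = math.ceil(len(target_sublibrary) * proportion)
    return target_sublibrary[:count]
```

The screened items would then be handed to the voice output module as the content of the re-dictation operation.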
8. The apparatus according to claim 5 or 7, wherein the dictation parameters further include a dictation duration corresponding to the dictation content and/or a number of dictation times corresponding to the dictation content;
the number of times the user dictates each dictation content included in the dictation content set is equal to the sum of the number of times that dictation content is output and the number of times the prompt information corresponding to that dictation content is output, and the prompt information corresponding to each dictation content included in the dictation content set is used to prompt the user with the corresponding dictation content.
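Claim 8's counting rule is a simple sum, shown here as a one-line sketch (function and parameter names are hypothetical):

```python
def dictation_times(output_times, prompt_output_times):
    """Per claim 8: a content item's number of dictation times equals the
    number of times the item itself was voice-output plus the number of
    times its corresponding prompt information was output."""
    return output_times + prompt_output_times
```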
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910335089.XA CN111026864B (en) | 2019-04-24 | 2019-04-24 | Dictation content determining method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111026864A CN111026864A (en) | 2020-04-17 |
CN111026864B true CN111026864B (en) | 2024-02-20 |
Family
ID=70203694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910335089.XA Active CN111026864B (en) | 2019-04-24 | 2019-04-24 | Dictation content determining method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111026864B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5956298A (en) * | 1997-11-10 | 1999-09-21 | Gough; Jesse Lynn | Voice prompting and indexing dictation recorder |
CN103473959A (en) * | 2013-09-05 | 2013-12-25 | 无敌科技(西安)有限公司 | System and method for special dictation training study related to numbers in foreign language |
CN107481568A (en) * | 2017-09-19 | 2017-12-15 | 广东小天才科技有限公司 | Knowledge point consolidation method and user terminal |
CN107766482A (en) * | 2017-10-13 | 2018-03-06 | 北京猎户星空科技有限公司 | Information pushes and sending method, device, electronic equipment, storage medium |
CN109344397A (en) * | 2018-09-03 | 2019-02-15 | 东软集团股份有限公司 | The extracting method and device of text feature word, storage medium and program product |
CN109558511A (en) * | 2018-12-12 | 2019-04-02 | 广东小天才科技有限公司 | Dictation and reading method and device |
CN109635096A (en) * | 2018-12-20 | 2019-04-16 | 广东小天才科技有限公司 | Dictation prompting method and electronic equipment |
CN109635772A (en) * | 2018-12-20 | 2019-04-16 | 广东小天才科技有限公司 | Dictation content correcting method and electronic equipment |
CN109634552A (en) * | 2018-12-17 | 2019-04-16 | 广东小天才科技有限公司 | Report control method and terminal device applied to dictation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2227806A4 (en) * | 2007-12-21 | 2013-08-07 | Nvoq Inc | Distributed dictation/transcription system |
- 2019-04-24: Application filed in China as CN201910335089.XA; granted as CN111026864B; status Active
Also Published As
Publication number | Publication date |
---|---|
CN111026864A (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3611895B1 (en) | Method and device for user registration, and electronic device | |
JP6668501B2 (en) | Audio data processing method, apparatus and storage medium | |
CN107644638B (en) | Audio recognition method, device, terminal and computer readable storage medium | |
US11030886B2 (en) | Method and device for updating online self-learning event detection model | |
CN109446315B (en) | Question solving auxiliary method and question solving auxiliary client | |
CN111459448A (en) | Reading assisting method and device, storage medium and electronic equipment | |
CN110910900A (en) | Sound quality abnormal data detection method, sound quality abnormal data detection device, electronic equipment and storage medium | |
CN111026864B (en) | Dictation content determining method and device | |
CN109271480B (en) | Voice question searching method and electronic equipment | |
CN113035179B (en) | Voice recognition method, device, equipment and computer readable storage medium | |
CN112951219A (en) | Noise rejection method and device | |
CN107948149A (en) | Tactful self study and optimization method and device based on random forest | |
CN111816191A (en) | Voice processing method, device, system and storage medium | |
CN109087694B (en) | Method for helping students to exercise bodies and family education equipment | |
CN111539390A (en) | Small target image identification method, equipment and system based on Yolov3 | |
CN111081227B (en) | Recognition method of dictation content and electronic equipment | |
CN111582446B (en) | System for neural network pruning and neural network pruning processing method | |
CN110895691A (en) | Image processing method and device and electronic equipment | |
CN110660385A (en) | Command word detection method and electronic equipment | |
CN111275921A (en) | Behavior monitoring method and device and electronic equipment | |
CN111079504A (en) | Character recognition method and electronic equipment | |
CN110415688B (en) | Information interaction method and robot | |
CN111078098B (en) | Dictation control method and device | |
CN109273004A (en) | Predictive audio recognition method and device based on big data | |
CN108446403A (en) | Language exercise method, apparatus, intelligent vehicle mounted terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||