CN109343696B - Electronic book commenting method and device and computer readable storage medium - Google Patents
- Publication number
- CN109343696B (application CN201810955566.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- currently read
- content
- electronic book
- audio data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0483—Interaction with page-structured environments, e.g. book metaphor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/169—Annotation, e.g. comment data or footnotes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Electrically Operated Instructional Devices (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an electronic book commenting method, which includes: acquiring the duration for which a user gazes at currently read electronic book content; determining whether the gaze duration is greater than or equal to a target duration threshold; when the gaze duration is greater than or equal to the target duration threshold, performing voice monitoring on the user; and commenting on the electronic book content currently read by the user based on the monitored audio data. The invention also discloses an electronic book commenting apparatus and a computer-readable storage medium.
Description
Technical Field
The present invention relates to the field of computer technology, and in particular to an electronic book commenting method and apparatus, and a computer-readable storage medium.
Background
Currently, when reading an electronic book, a user can annotate or comment on it. Such annotations or comments are generally entered manually through a physical or virtual keyboard. Manual input is time-consuming, which lowers the efficiency of adding comments and degrades the user experience.
Disclosure of Invention
In view of the above, embodiments of the present invention are intended to provide an electronic book commenting method and apparatus, and a computer-readable storage medium, which can improve the efficiency of adding comments to an electronic book.
The technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides an electronic book commenting method, which includes:
acquiring the duration for which a user gazes at currently read electronic book content;
determining whether the gaze duration is greater than or equal to a target duration threshold;
when it is determined that the gaze duration is greater than or equal to the target duration threshold, performing voice monitoring on the user;
and commenting on the electronic book content currently read by the user based on the monitored audio data.
In the above solution, before determining whether the gaze duration is greater than or equal to a target duration threshold, the method further includes:
determining the reading difficulty of the electronic book content currently read by the user;
and determining the target duration threshold based on the determined reading difficulty.
In the above solution, determining the reading difficulty of the electronic book content currently read by the user includes:
determining the reading difficulty of the electronic book content currently read by the user according to the length of sentences in that content;
and/or determining the reading difficulty of the electronic book content currently read by the user according to the number of professional terms in that content;
and/or determining the language category of the electronic book content currently read by the user according to the tag of that content, and determining the reading difficulty of that content based on the determined language category.
In the above solution, commenting on the electronic book content currently read by the user based on the monitored audio data includes:
determining whether the monitored audio data is a comment on the electronic book content currently read by the user;
and, when the monitored audio data is determined to be a comment on the electronic book content currently read by the user, commenting on that content according to the audio data.
In the above solution, determining whether the monitored audio data is a comment on the electronic book content currently read by the user includes:
determining whether the monitored audio data is a comment on the electronic book content currently read by the user according to whether the audio data contains preset specific content;
and/or determining a content similarity between published comments corresponding to the electronic book and recognized content obtained based on the audio data, and determining, based on the determined content similarity, whether the monitored audio data is a comment on the electronic book content currently read by the user.
An embodiment of the present invention provides an electronic book commenting apparatus, which includes:
an acquisition module, configured to acquire the duration for which a user gazes at currently read electronic book content;
a determination module, configured to determine whether the gaze duration is greater than or equal to a target duration threshold;
a monitoring module, configured to perform voice monitoring on the user when it is determined that the gaze duration is greater than or equal to the target duration threshold;
and a comment module, configured to comment on the electronic book content currently read by the user based on the monitored audio data.
In the above solution, the comment module is specifically configured to determine whether the monitored audio data is a comment on the electronic book content currently read by the user;
and, when the monitored audio data is determined to be a comment on that content, to comment on the electronic book content currently read by the user according to the audio data.
In the above solution, the comment module is specifically configured to determine whether the monitored audio data is a comment on the electronic book content currently read by the user according to whether the audio data contains preset specific content; and/or to determine a content similarity between published comments corresponding to the electronic book and recognized content obtained based on the audio data, and determine, based on the determined content similarity, whether the monitored audio data is a comment on the electronic book content currently read by the user.
An embodiment of the present invention provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of any of the commenting methods described above.
An embodiment of the present invention provides an electronic book commenting apparatus, which includes: a memory, a processor, and a computer program stored on the memory and executable on the processor;
wherein the processor is configured to execute the steps of any of the commenting methods described above when running the computer program.
According to the electronic book commenting method and apparatus and the computer-readable storage medium provided by the embodiments of the present invention, the duration for which the user gazes at the currently read electronic book content is acquired; whether the gaze duration is greater than or equal to a target duration threshold is determined; when the gaze duration is greater than or equal to the target duration threshold, voice monitoring is performed on the user; and the electronic book content currently read by the user is commented on based on the monitored audio data. In the embodiments of the present invention, when the duration for which the user gazes at the currently read content reaches the target duration threshold, the user's audio data is monitored, so that the content can be commented on based on that audio data. The user can thus comment on the currently read electronic book content without manual input, comments can be added quickly, the efficiency of adding comments in the electronic book is improved, and the user experience is improved.
Drawings
FIG. 1 is a schematic flowchart of an implementation of an electronic book commenting method according to an embodiment of the present invention;
FIG. 2 is a first schematic diagram of the structure of an electronic book commenting apparatus according to an embodiment of the present invention;
FIG. 3 is a second schematic diagram of the structure of an electronic book commenting apparatus according to an embodiment of the present invention.
Detailed Description
In the embodiments of the present invention, the duration for which a user gazes at currently read electronic book content is acquired; whether the gaze duration is greater than or equal to a target duration threshold is determined; when the gaze duration is greater than or equal to the target duration threshold, voice monitoring is performed on the user; and the electronic book content currently read by the user is commented on based on the monitored audio data.
So that the features and aspects of the embodiments of the present invention can be understood in detail, the embodiments briefly summarized above are described in more detail below with reference to the accompanying drawings.
As shown in FIG. 1, an electronic book commenting method according to an embodiment of the present invention is applied to the terminal side and includes the following steps:
Step 101: acquire the duration for which the user gazes at the currently read electronic book content.
The electronic book content currently read by the user may be the content at which the user's eyes are gazing.
Here, the terminal may obtain motion information of the user's eyes using an existing technique such as eye tracking, determine, based on the motion information, the position on the currently read electronic book content at which the user is gazing, and acquire the duration for which the user gazes at that position.
In practical applications, the terminal may acquire the motion information by capturing or scanning images of the user; the motion information may include position-change information of the eyes, movement-direction information of the eyes, and the like.
The gaze position determined by the terminal may differ depending on the recognition accuracy of the eye-tracking technique used, where the recognition accuracy refers to how precisely the gaze position of the user's eyes on the electronic book content can be identified.
In one embodiment, the higher the recognition accuracy of the technique used by the terminal, the more precise the determined gaze position.
For example, when the terminal identifies the gaze position of the user's eyes with high accuracy, the determined gaze position may be a particular line of the currently read electronic book content; when the terminal identifies the gaze position with low accuracy, the determined gaze position may be a particular paragraph of the currently read electronic book content.
In one embodiment, the determined gaze position may be a line of the electronic book, a paragraph of the electronic book, or the like.
For example, the gaze position of the user's eyes on the currently read electronic book content may be the Nth line of the page currently read by the user, or the Nth paragraph of that page, where N is a positive integer.
In one embodiment, when the gaze position is determined more precisely, for example down to a line of the content, the gaze duration measured for that position is typically shorter; correspondingly, when the gaze position is determined less precisely, for example down to a paragraph of the content, the measured gaze duration is typically longer.
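As an illustration of how the gaze duration might be accumulated from eye-tracking output, the following Python sketch assumes the tracker delivers timestamped samples already mapped to a line- or paragraph-level region identifier; the sample format and function names are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float   # seconds since the start of the reading session
    region_id: str     # e.g. "page12/para3" or "page12/line7", depending on recognition accuracy

def gaze_duration(samples, target_region):
    """Accumulate the time the user's gaze stays inside target_region.

    Assumes samples are ordered by timestamp; time is summed over consecutive
    sample pairs whose starting sample lies in the target region.
    """
    total = 0.0
    for prev, curr in zip(samples, samples[1:]):
        if prev.region_id == target_region:
            total += curr.timestamp - prev.timestamp
    return total
```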
Step 102: determining whether the gaze duration is greater than or equal to a target duration threshold.
In an embodiment, prior to the determining whether the gaze duration is greater than or equal to a target duration threshold, the method further comprises: determining the reading difficulty of the e-book content currently read by the user; based on the determined reading difficulty, a target duration threshold is determined.
In the present solution, the reading difficulty of the currently read electronic book content may be determined in a number of ways; several of these are described below:
In an embodiment, when determining the reading difficulty of the currently read electronic book content, the reading difficulty may be determined according to the length of sentences in the electronic book content currently read by the user.
A longer sentence usually contains more parts of speech, such as nouns, verbs and adjectives, or more complex sentence patterns, such as parallel or inverted constructions, so the longer the sentence, the higher its complexity and the lower its readability; the corresponding reading difficulty is therefore usually higher. Accordingly, in this embodiment the reading difficulty of the currently read electronic book content can be determined from sentence length: when the sentences in the content the user is gazing at are long, the reading difficulty of the currently read content is high; when the sentences are short, the reading difficulty is low.
In an embodiment, determining the reading difficulty of the currently read electronic book content includes: determining the number of professional terms in the electronic book content currently read by the user according to a preset professional lexicon, and determining the reading difficulty of that content from this number.
The professional lexicon may include terms from various fields, such as Fourier transform and Maxwell's equations.
When the content the user is gazing at contains many professional terms, the content is harder for the user to understand; the more professional terms, the lower the intelligibility and the higher the reading difficulty. Accordingly, in this embodiment the reading difficulty can be determined from the number of professional terms: a larger number indicates higher reading difficulty of the currently read content, and a smaller number indicates lower reading difficulty.
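A minimal Python sketch of the sentence-length and professional-term heuristics described above; the lexicon entries, scoring weights and function names are illustrative assumptions rather than values given in the patent.

```python
import re

# Illustrative lexicon; a real deployment would load a much larger term list.
PROFESSIONAL_LEXICON = {"fourier transform", "maxwell's equations"}

def sentence_length_score(text):
    """Average sentence length in words; longer sentences suggest higher difficulty."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def professional_term_count(text):
    """Number of lexicon terms that occur in the text (case-insensitive)."""
    lowered = text.lower()
    return sum(term in lowered for term in PROFESSIONAL_LEXICON)

def reading_difficulty(text):
    # The weights below are arbitrary illustration values, not taken from the patent.
    return 0.1 * sentence_length_score(text) + 1.0 * professional_term_count(text)
```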
In an embodiment, determining the reading difficulty of the currently read electronic book content includes: determining, from the user's behavior records, the professional field the user is familiar with; determining whether the professional terms in the electronic book content currently read by the user belong to that field; if they do, the reading difficulty of the currently read content is lower, and if they do not, the reading difficulty is higher.
In an embodiment, determining the reading difficulty of the currently read electronic book content includes: determining the language category of the electronic book content currently read by the user according to the tag of that content, and determining the reading difficulty of the currently read content based on the determined language category.
For example, whether the content is written in an ancient literary language may be determined from the tag of the currently read content; when it is, the reading difficulty of the currently read content is determined to be higher, and when it is not, the reading difficulty is determined to be lower.
When determining from the tag whether the content is written in an ancient literary language, it may be determined whether the tag of the currently read content is a designated tag, such as "ancient", "ancient literature" or "ancient language"; if the tag is a designated tag, the reading difficulty of the currently read content is determined to be high, and otherwise it is determined to be low.
In the above example, whether the tag of the currently read content is a designated tag may be determined against a preset designated tag library. Specifically, when the tag of the currently read content matches a tag in the preset designated tag library, it is determined to be a designated tag; when it does not match, it is determined not to be a designated tag. The tags in the library identify electronic book content written in an ancient literary language.
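The designated-tag check reduces to a membership test against the preset tag library, as in the sketch below; the tag set shown is only the example given above, and the function name is an assumption.

```python
# Example library of designated tags; a real system would configure its own set.
DESIGNATED_TAGS = {"ancient", "ancient literature", "ancient language"}

def is_high_difficulty_by_tag(content_tags):
    """Treat the content as harder to read if any tag matches the designated library."""
    return any(tag.lower() in DESIGNATED_TAGS for tag in content_tags)
```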
In an embodiment, determining the reading difficulty of the currently read electronic book content includes: determining the reading difficulty of the currently read content based on the era to which the author of the currently read electronic book belongs.
For example, if the era to which the author of the currently read electronic book belongs is early, the reading difficulty of the currently read content is determined to be greater; if the era is recent, such as the beginning of the 21st century, the reading difficulty is determined to be smaller.
The earlier the era of the author, the larger the gap between the background in which the author wrote and the background in which the user reads, and the harder the content is for the user to understand. When the gap between the background in the currently read electronic book and the background in which the user reads it is larger, the reading difficulty of the currently read content is higher; when the gap is smaller, the reading difficulty is lower.
In an embodiment, determining the reading difficulty of the currently read electronic book content includes: determining the reading difficulty of the currently read content based on the acquired native-language information of the user and the language category of the electronic book the user is currently reading.
For example, the user's native-language information, such as English or Korean, may be acquired first; if the user's native language is English and the language category of the currently read electronic book is also English, the reading difficulty of the currently read content is determined to be low; if the user's native language is English and the language category of the currently read electronic book is Korean, the reading difficulty is determined to be higher.
In an embodiment, determining the reading difficulty of the currently read electronic book content includes: determining the reading difficulty of the currently read content based on the number of characters included in the electronic book content currently read by the user.
The more characters the currently read content contains, the harder it is for the user to understand, so in this embodiment the reading difficulty can be determined from the character count: a larger number of characters indicates higher reading difficulty of the currently read content, and a smaller number indicates lower reading difficulty.
In one embodiment, after the reading difficulty of the electronic book content currently read by the user is determined, the target duration threshold corresponding to that content may be determined according to the reading difficulty.
The higher the determined reading difficulty, the larger the corresponding target duration threshold; the lower the determined reading difficulty, the smaller the corresponding target duration threshold.
Specifically, a target duration threshold may be set for the reading difficulty of each page of the content; or for the reading difficulty of each paragraph on each page; or according to the number of lines contained in each paragraph on each page; or for the reading difficulty of each line on each page. It should be noted that the more lines a paragraph contains, the larger the target duration threshold that is set for it.
The determined reading difficulty and the target duration threshold may be positively correlated, that is, the greater the reading difficulty, the larger the target duration threshold.
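The mapping from reading difficulty to the target duration threshold only needs to be monotonically increasing; the linear form and its parameters in the sketch below are illustrative assumptions.

```python
def target_duration_threshold(difficulty, base_seconds=5.0, seconds_per_unit=2.0):
    """Map reading difficulty to a gaze-duration threshold.

    Monotonically increasing, matching the positive correlation described above.
    base_seconds and seconds_per_unit are illustrative parameters, not values
    prescribed by the patent.
    """
    return base_seconds + seconds_per_unit * difficulty
```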
Step 103: when it is determined that the gaze duration is greater than or equal to the target duration threshold, perform voice monitoring on the user.
In an embodiment, the terminal may decide whether to start voice monitoring according to the duration for which the user gazes at the currently read electronic book content and the target duration threshold; specifically, voice monitoring is started when the gaze duration is determined to be greater than or equal to the target duration threshold.
In the above embodiment, when the gaze duration reaches the target duration threshold, it may be inferred that the user is interested in this piece of content, and a user who is interested in the content being gazed at is relatively likely to speak; corresponding voice monitoring can therefore be started at this point to capture the user's speech. Because the terminal decides from the user's behavior whether the user is likely to speak and starts voice monitoring only in that case, it can control when monitoring is started and thereby save its hardware resources.
In an embodiment, when it is determined that the gaze duration is greater than or equal to the target duration threshold, a prompt message is generated; the prompt message asks the user to confirm whether to start voice monitoring; an operation of the user in response to the prompt message is received; and when the operation indicates that the user confirms starting voice monitoring, voice monitoring is started.
In the above embodiment, when the gaze duration reaches the target duration threshold, a corresponding prompt may be generated asking the user whether to start voice monitoring. When the user confirms according to the prompt, the terminal starts the corresponding voice monitoring and captures the user's speech; when the user declines, the terminal does not start voice monitoring and does not capture the user's speech.
In the above embodiment, the terminal determines whether the user has confirmed starting voice monitoring from the received operation responding to the prompt message; specifically, the operation may be a click on a prompt button associated with the prompt message, or the like.
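The trigger logic of step 103, including the optional confirmation prompt, can be sketched as follows; the callable used to confirm with the user is a placeholder for whatever prompt mechanism the terminal provides, and the function name is an assumption.

```python
def maybe_start_listening(gaze_seconds, threshold_seconds, confirm_with_user=None):
    """Decide whether to start voice monitoring.

    confirm_with_user, if given, is a callable that shows the prompt message and
    returns True when the user agrees to start monitoring (the optional prompt
    flow described above). If it is omitted, monitoring starts automatically
    once the threshold is reached.
    """
    if gaze_seconds < threshold_seconds:
        return False
    if confirm_with_user is not None:
        return bool(confirm_with_user())
    return True
```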
Step 104: comment on the electronic book content currently read by the user based on the monitored audio data.
Here, after acquiring the user's audio data, the terminal may perform speech recognition on it to obtain the corresponding text content.
Specifically, the speech recognition process may include: extracting features from the audio data to obtain feature data, and obtaining the recognized text content using the feature data and a machine-learning model, where the machine-learning model includes, but is not limited to, a Hidden Markov Model (HMM).
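The two-stage recognition described above can be sketched as a thin pipeline; the feature extractor and acoustic model (for example, an HMM-based recognizer) are passed in as placeholders, since the patent does not prescribe a particular implementation.

```python
def transcribe(audio_data, feature_extractor, acoustic_model):
    """Two-stage recognition: feature extraction, then decoding.

    feature_extractor and acoustic_model are placeholders for whatever front end
    (e.g. MFCC features) and model (e.g. an HMM-based recognizer) the
    implementation actually uses; the decode method name is an assumption.
    """
    features = feature_extractor(audio_data)
    return acoustic_model.decode(features)
```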
In one embodiment, commenting on the electronic book content currently read by the user based on the monitored audio data includes: determining whether the monitored audio data is a comment on the electronic book content currently read by the user; and, when it is, commenting on that content according to the audio data.
Here, it is first necessary to determine whether the monitored audio data is a comment on the electronic book content currently read by the user. This can be done as follows.
In one embodiment, determining whether the monitored audio data is a comment on the electronic book content currently read by the user includes: determining whether the content recognized from the audio data contains first preset specific content, where the first preset specific content may include content related to the author of the electronic book the user is currently reading.
For example, suppose the electronic book currently read by the user is a book by the author Yang Lan; when the text content recognized from the audio data contains the author's name, the title of another book by the author, the name of a talk show the author has attended, or the names of the author's family members or friends, it may be determined that the text content corresponding to the audio data is a comment on the electronic book content the user is gazing at.
In an embodiment, determining whether the monitored audio data is a comment on the electronic book content currently read by the user includes: determining whether the content recognized from the audio data contains second preset specific content, where the second preset specific content includes content of the electronic book the user is currently reading.
For example, when the text content recognized from the audio data contains a character appearing in the book, an object mentioned in it (such as a camera), a place or organization appearing in it (such as a television station), or a proper noun from it (such as a host), it may be determined that the text content corresponding to the audio data is a comment on the electronic book content the user is gazing at.
In an embodiment, determining whether the monitored audio data is a comment on the electronic book content currently read by the user includes: determining whether the content recognized from the audio data contains third preset specific content, where the third preset specific content includes content related to specific words, the specific words being words used for commenting on an electronic book.
For example, when the text content recognized from the audio data is "Yang Lan really writes better and better", and it is determined that the recognized text contains specific words such as "write", "paragraph", "read" or the author's name, it may be determined that the text content corresponding to the audio data is a comment on the electronic book content the user is gazing at.
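The three keyword checks above reduce to testing whether the recognized text contains any term from three application-supplied sets; the sketch below assumes simple case-insensitive substring matching, which is an illustrative choice rather than something the patent specifies.

```python
def is_comment_by_keywords(recognized_text, author_terms, book_terms, comment_words):
    """Return True if the recognized text contains any of the three kinds of preset
    specific content described above: author-related terms, terms taken from the
    book itself, or commenting words such as "write" or "paragraph".

    All three term sets are supplied by the application; the matching strategy
    (case-insensitive substring search) is only an example.
    """
    lowered = recognized_text.lower()
    for term in (*author_terms, *book_terms, *comment_words):
        if term.lower() in lowered:
            return True
    return False
```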
In an embodiment, a content similarity between the published comments corresponding to the electronic book and the recognized content (such as text content) obtained from the audio data is determined, and based on the determined content similarity it is determined whether the monitored audio data is a comment on the electronic book content currently read by the user. When determining this content similarity, the number of designated associated words contained in the recognized content may be counted, a designated associated word being a word that has an association relationship, such as a synonym, near-synonym or antonym relationship, with a word in the published comments corresponding to the electronic book; the content similarity between the published comments and the recognized content can then be determined from the number of designated associated words found.
In actual application, it may be determined whether the text content corresponding to the audio data is related to comments already published by other users; when it is related, the text content corresponding to the audio data is determined to be a comment on the electronic book content the user is gazing at.
In the above example, whether the text content corresponding to the audio data is related to comments already published by other users may be determined according to the similarity between the text content and those comments, where the similarity is determined from the number of associated words they contain.
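A sketch of the associated-word similarity check; the association map (built, for example, from a thesaurus of synonyms, near-synonyms and antonyms) and the cutoff value are assumptions, since the patent only states that similarity is derived from the number of associated words.

```python
def associated_word_count(recognized_words, published_comment_words, association):
    """Count recognized words associated (synonym, near-synonym or antonym) with
    some word in an already published comment.

    `association` maps a word to the set of words it is associated with; building
    such a map, e.g. from a thesaurus, is outside the scope of this sketch.
    """
    comment_vocab = set(published_comment_words)
    count = 0
    for word in recognized_words:
        related = association.get(word, set()) | {word}
        if related & comment_vocab:
            count += 1
    return count

def is_comment_by_similarity(recognized_words, published_comment_words, association, min_hits=2):
    # min_hits is an illustrative cutoff, not a value given in the patent.
    return associated_word_count(recognized_words, published_comment_words, association) >= min_hits
```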
In one embodiment, in order to add user comments to electronic book content accurately, after the user's audio data is acquired it may first be determined whether the text content corresponding to the audio data is a comment on the electronic book content the user is gazing at; only when it is, a comment on that content is generated based on the audio data. In this way, the accuracy of adding comments can be improved.
Here, after it is determined that the monitored audio data is a comment on the electronic book content currently read by the user, a comment needs to be generated from the audio data. This can be done as follows.
In an embodiment, after the monitored audio data is judged to be a comment on the currently read electronic book content, a voice comment may be generated from the audio data and displayed in the currently read content.
For example, the audio data may be compressed using an audio-compression technique to obtain a voice comment, and the voice comment is displayed in the form of an annotation or comment; the format of the voice comment may be MP3, WAV, or the like.
In an embodiment, after the monitored audio data is judged to be a comment on the currently read electronic book content, a text comment may be generated from the recognized text content and displayed in the currently read content.
For example, at least one text comment may be generated from the text content and displayed in the form of an annotation or comment, where each text comment uses a different style of wording; the styles at least include a spoken-language style, and the like.
Here, after a comment is displayed, it may be further processed according to the user's needs. This can be done as follows.
In an embodiment, when the user clicks the voice comment in the currently read electronic book content, a prompt message may be generated asking the user whether to convert the voice comment into a text comment; the user's operation on the prompt message is received; and when the operation indicates that the user confirms the conversion, the voice comment is converted into a text comment and displayed.
In an embodiment, when the user clicks the text comment in the currently read electronic book content, a prompt message may be generated asking the user whether to convert the language type or the character type of the text comment, where the language type at least includes Chinese, English and Korean, and the character type at least includes simplified characters and traditional characters; the user's operation on the prompt message is received; and when the operation indicates that the user confirms converting the language type, the text of the comment is converted into the corresponding language.
In one embodiment, after the comment is generated, a prompt message is generated asking the user whether to publish the comment. If it is determined that the user does not want to publish it, the comment is not published; if the user does want to publish it, the objects allowed to view the comment are determined, where these objects may include objects specified by the user or all users viewing the electronic book content. In one example, the objects specified by the user may include the user himself or herself.
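A minimal data structure for the resulting comment and its publication step might look as follows; all field and function names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BookComment:
    """Minimal comment record attached to a position in the electronic book."""
    anchor_region: str                        # e.g. "page12/para3", the gazed-at content
    audio_path: Optional[str] = None          # compressed voice comment (e.g. MP3 or WAV), if any
    text: Optional[str] = None                # text comment (possibly converted from the voice comment)
    visible_to: Tuple[str, ...] = ("owner",)  # viewing scope: the user, specified users, or everyone

def publish(comment, user_agrees, viewers):
    """Publish the comment only if the user confirms, restricting visibility to `viewers`."""
    if not user_agrees:
        return None
    comment.visible_to = tuple(viewers)
    return comment
```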
By adopting the technical solution of the embodiments of the present invention, when the duration for which the user gazes at the currently read electronic book content is greater than or equal to the target duration threshold, the user's audio data is monitored, so that the electronic book content can be commented on based on that audio data. The user can thus comment on the currently read content without manual input, comments can be added quickly, the efficiency of adding comments in the electronic book is improved, and the user experience is improved.
Based on the electronic book commenting method provided by the embodiments of the present application, the present application further provides an electronic book commenting apparatus. As shown in FIG. 2, the apparatus includes:
an obtaining module 21, configured to obtain a gazing duration of a user on a content of an electronic book currently being read;
a determination module 22 for determining whether the gaze duration is greater than or equal to a target duration threshold;
the monitoring module 23 is configured to perform voice monitoring on the user when it is determined that the gazing duration is greater than or equal to the target duration threshold;
and the comment module 24 is configured to comment on the content of the electronic book currently read by the user based on the monitored audio data.
The obtaining module 21 is specifically configured to obtain motion information of the user's eyes, by image capture or scanning, using an existing technique such as eye tracking; determine, based on the motion information, the position on the currently read electronic book content at which the user is gazing; and acquire the duration for which the user gazes at that position. The motion information may include position-change information of the eyes, movement-direction information of the eyes, and the like.
The comment module 24 is specifically configured to determine whether the monitored audio data is a comment on the electronic book content currently read by the user, and, when the monitored audio data is determined to be such a comment, to comment on the electronic book content currently read by the user according to the audio data.
The comment module 24 is specifically configured to determine whether the monitored audio data is a comment on the electronic book content currently read by the user according to whether the content recognized from the audio data contains preset specific content. The preset specific content at least includes content of the electronic book currently read by the user, content related to the author of that electronic book, and content related to specific words, the specific words being words used for commenting on an electronic book.
The comment module 24 is specifically configured to determine a content similarity between the published comments corresponding to the electronic book and the recognized content obtained from the audio data, and to judge, based on the determined content similarity, whether the monitored audio data is a comment on the electronic book content currently read by the user.
The content similarity is determined based on the number of associated words between the content of the monitored audio data and the comments already published on the electronic book currently read by the user, where the associated words at least include synonyms and near-synonyms.
The monitoring module 23 is further configured to generate a prompt message when it is determined that the gazing duration is greater than or equal to the target duration threshold; the prompt message is used for prompting a user to determine whether to start voice monitoring; receiving the operation of the user for the prompt message; the operation is a response operation for the prompt message; and when the operation representation indicates that the user determines to start voice monitoring, starting voice monitoring.
The determining module 22 is specifically configured to determine a reading difficulty of the currently read e-book content; determining the target duration threshold based on the determined reading difficulty.
The determining module 22 is specifically configured to determine the reading difficulty of the currently read e-book content according to the length of the sentence in the currently read e-book content.
The determining module 22 is specifically configured to determine the number of professional words in the currently read e-book content according to a preset professional lexicon, and determine the reading difficulty of the currently read e-book content.
The determining module 22 is specifically configured to determine, from the user's behavior records, the professional field the user is familiar with; determine whether the professional terms in the electronic book content currently read by the user belong to that field; and, if they do, treat the reading difficulty of the currently read content as lower, or, if they do not, treat it as higher.
The determining module 22 is specifically configured to determine the language category of the electronic book content according to the tag of the currently read content, and determine the reading difficulty of the currently read content based on that language category.
The determining module 22 is specifically configured to determine the reading difficulty of the currently read electronic book content based on the era to which the author of the currently read electronic book belongs.
The determining module 22 is specifically configured to determine the reading difficulty of the currently read electronic book content based on the acquired native-language information of the user and the language category of the electronic book the user is currently reading.
The determining module 22 is specifically configured to determine the reading difficulty of the currently read content of the electronic book based on the number of characters included in the content of the electronic book currently read by the user.
It should be noted that: in the comment device for the electronic book provided in the above embodiment, only the division of the program modules is exemplified when performing comment, and in practical applications, the above processing may be distributed to different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the above-described processing. In addition, the comment device of the electronic book provided by the above embodiment and the comment method embodiment of the electronic book belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiment and is not described herein again.
In practical applications, the obtaining module 21 is implemented by a network interface located on a comment device of an electronic book; the determination module 22, the monitoring module 23, and the comment module 24 may be implemented by a Processor located on a comment device of the electronic book, such as a Central Processing Unit (CPU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
FIG. 3 is a schematic structural diagram of an electronic book commenting apparatus according to the present invention. The electronic book commenting apparatus 300 shown in FIG. 3 is provided on a terminal and includes: at least one processor 301, a memory 302, a user interface 303, and at least one network interface 304. The various components in the electronic book commenting apparatus 300 are coupled together by a bus system 305. It will be appreciated that the bus system 305 is used to enable communication among these components. In addition to a data bus, the bus system 305 includes a power bus, a control bus and a status-signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 305 in FIG. 3.
The user interface 303 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
The memory 302 in the embodiment of the present invention is used to store various types of data to support the operation of the comment apparatus 300 for an electronic book. Examples of such data include: any computer program for operating on the comment device 300 for an electronic book, such as an operating system 3021 and application programs 3022; operating system 3021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and for processing hardware-based tasks. The application programs 3022 may include various application programs for implementing various application services. A program implementing the method of an embodiment of the present invention may be included in the application program 3022.
The method disclosed in the above embodiments of the present invention may be applied to the processor 301, or implemented by the processor 301. The processor 301 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 301. The processor 301 described above may be a general purpose processor, a digital signal processor, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 301 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 302, and the processor 301 reads the information in the memory 302 and performs the steps of the aforementioned methods in conjunction with its hardware.
It will be appreciated that the memory 302 can be either volatile or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory can be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 302 described in the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
Based on the method for commenting an electronic book provided by the embodiments of the present application, the present application further provides a computer-readable storage medium, which, referring to fig. 3, may include: a memory 302 for storing a computer program executable by the processor 301 of the apparatus for commenting on electronic books 300 for performing the steps of the aforementioned method. The computer readable storage medium may be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk, or CD-ROM.
It should be noted that: the technical schemes described in the embodiments of the present invention can be combined arbitrarily without conflict.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (6)
1. A method of commenting an electronic book, the method comprising:
acquiring the duration for which a user gazes at currently read electronic book content;
determining whether the gaze duration is greater than or equal to a target duration threshold;
when it is determined that the gaze duration is greater than or equal to the target duration threshold, performing voice monitoring on the user;
determining whether the monitored audio data is a comment on the electronic book content currently read by the user according to whether the audio data contains preset specific content; and/or determining a content similarity between published comments corresponding to the electronic book and recognized content obtained based on the audio data, and determining, based on the determined content similarity, whether the audio data is a comment on the electronic book content currently read by the user;
and, when the audio data is determined to be a comment on the electronic book content currently read by the user, commenting on the electronic book content currently read by the user according to the audio data.
2. The method of claim 1, wherein prior to the determining whether the gaze duration is greater than or equal to a target duration threshold, the method further comprises:
determining the reading difficulty of the e-book content currently read by the user;
based on the determined reading difficulty, a target duration threshold is determined.
3. The method of claim 2, wherein the determining the reading difficulty of the e-book content currently being read by the user comprises:
determining the reading difficulty of the electronic book content currently read by the user according to the length of sentences in that content;
and/or determining the reading difficulty of the electronic book content currently read by the user according to the number of professional terms in that content;
and/or determining the language category of the electronic book content currently read by the user according to the tag of that content, and determining the reading difficulty of that content based on the determined language category.
4. An apparatus for commenting an electronic book, the apparatus comprising:
an acquisition module, configured to acquire the duration for which a user gazes at currently read electronic book content;
a determination module, configured to determine whether the gaze duration is greater than or equal to a target duration threshold;
a monitoring module, configured to perform voice monitoring on the user when it is determined that the gaze duration is greater than or equal to the target duration threshold;
a comment module, configured to determine whether the monitored audio data is a comment on the electronic book content currently read by the user according to whether the audio data contains preset specific content; and/or determine a content similarity between published comments corresponding to the electronic book and recognized content obtained based on the audio data, and determine, based on the determined content similarity, whether the audio data is a comment on the electronic book content currently read by the user;
and, when the audio data is determined to be a comment on the electronic book content currently read by the user, to comment on the electronic book content currently read by the user according to the audio data.
5. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 3.
6. An apparatus for commenting an electronic book, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor;
wherein the processor is adapted to perform the steps of the method of any one of claims 1 to 3 when running the computer program.
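The claims above describe a concrete processing flow: compare how long the user has gazed at the currently read content against a difficulty-dependent threshold, monitor the user's speech, decide whether the speech is a comment (via preset trigger content and/or similarity to published comments), and then attach it as a comment. The Python sketch below is only an illustration of that flow under stated assumptions; all names, constants, and helper functions (BASE_THRESHOLD_S, TRIGGER_PHRASES, maybe_post_comment, the difflib-based similarity measure) are hypothetical and do not appear in the patent, and the eye-tracking and speech-recognition steps are assumed to have been performed elsewhere.

```python
import difflib

# Hypothetical constants -- illustrative assumptions, not values from the patent.
BASE_THRESHOLD_S = 5.0                                  # assumed base gaze-duration threshold (seconds)
TRIGGER_PHRASES = ("i think", "this passage", "note:")  # assumed "preset specific content"
SIMILARITY_CUTOFF = 0.3                                 # assumed content-similarity cutoff


def reading_difficulty(text, technical_terms, category_weight=1.0):
    """Estimate reading difficulty from average sentence length, the number of
    technical terms, and a weight derived from the content's text category (claim 3)."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    term_count = sum(1 for w in text.lower().split() if w in technical_terms)
    return (avg_len / 20.0 + term_count / 10.0) * category_weight


def target_threshold(difficulty):
    """Scale the base gaze-duration threshold by the reading difficulty (claim 2)."""
    return BASE_THRESHOLD_S * (1.0 + difficulty)


def is_comment(transcript, published_comments):
    """Claim 1: treat the recognized speech as a comment if it contains a preset
    trigger phrase, or if it is sufficiently similar to an already-published comment."""
    text = transcript.lower()
    if any(p in text for p in TRIGGER_PHRASES):
        return True
    best = max(
        (difflib.SequenceMatcher(None, text, c.lower()).ratio() for c in published_comments),
        default=0.0,
    )
    return best >= SIMILARITY_CUTOFF


def maybe_post_comment(gaze_seconds, page_text, transcript, published_comments, technical_terms):
    """End-to-end flow: gaze-duration check -> (voice monitoring assumed already done)
    -> classify the transcript -> return it as a comment on the current content, or None."""
    threshold = target_threshold(reading_difficulty(page_text, technical_terms))
    if gaze_seconds < threshold:
        return None  # user is not dwelling on this content; do not treat speech as a comment
    if not is_comment(transcript, published_comments):
        return None  # speech is unrelated to the currently read content
    return transcript  # attach the transcript as a comment on the currently read content
```

For example, a call such as maybe_post_comment(8.0, page_text, "I think this chapter's argument is weak", existing_comments, {"entropy"}) would return the transcript as a comment only if eight seconds exceeds the difficulty-adjusted threshold for that page and the speech is classified as a comment; otherwise it returns None and nothing is posted.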
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810955566.8A CN109343696B (en) | 2018-08-21 | 2018-08-21 | Electronic book commenting method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109343696A CN109343696A (en) | 2019-02-15 |
CN109343696B (en) | 2022-03-25
Family
ID=65291845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810955566.8A Active CN109343696B (en) | 2018-08-21 | 2018-08-21 | Electronic book commenting method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109343696B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110267113B (en) * | 2019-06-14 | 2021-10-15 | 北京字节跳动网络技术有限公司 | Video file processing method, system, medium, and electronic device |
CN110377191A (en) * | 2019-06-14 | 2019-10-25 | 北京字节跳动网络技术有限公司 | Voice comment interaction method, system, medium and electronic device |
CN110244848B (en) * | 2019-06-17 | 2021-10-19 | Oppo广东移动通信有限公司 | Reading control method and related equipment |
CN110430127B (en) * | 2019-09-03 | 2021-11-09 | 深圳市沃特沃德软件技术有限公司 | Voice processing method and device based on picture book reading and storage medium |
CN111046639A (en) * | 2019-11-06 | 2020-04-21 | 上海擎感智能科技有限公司 | File review method, system and mobile terminal |
CN111694434B (en) * | 2020-06-15 | 2023-06-30 | 掌阅科技股份有限公司 | Interactive display method of comment information of electronic book, electronic equipment and storage medium |
CN113515210A (en) * | 2021-06-30 | 2021-10-19 | 北京百度网讯科技有限公司 | Display method, display device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102469363A (en) * | 2010-11-11 | 2012-05-23 | Tcl集团股份有限公司 | Television system with speech comment function and speech comment method |
US20160357253A1 (en) * | 2015-06-05 | 2016-12-08 | International Business Machines Corporation | Initiating actions responsive to user expressions of a user while reading media content |
CN106951093A (en) * | 2017-03-31 | 2017-07-14 | 联想(北京)有限公司 | Data processing method and device |
CN107621882A (en) * | 2017-09-30 | 2018-01-23 | 咪咕互动娱乐有限公司 | Control mode switching method, device and storage medium |
CN107967104A (en) * | 2017-12-20 | 2018-04-27 | 北京时代脉搏信息技术有限公司 | Method and electronic device for making voice comments on an information entity |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103631782B (en) * | 2012-08-21 | 2018-10-16 | 腾讯科技(深圳)有限公司 | Method, apparatus and system for processing electronic book comments |
CN107918653B (en) * | 2017-11-16 | 2022-02-22 | 百度在线网络技术(北京)有限公司 | Intelligent playing method and device based on preference feedback |
2018-08-21 — CN application CN201810955566.8A, published as patent CN109343696B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109343696A (en) | 2019-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109343696B (en) | Electronic book commenting method and device and computer readable storage medium | |
US10282162B2 (en) | Audio book smart pause | |
US10114809B2 (en) | Method and apparatus for phonetically annotating text | |
JP6909832B2 (en) | Methods, devices, equipment and media for recognizing important words in audio | |
US10192544B2 (en) | Method and system for constructing a language model | |
JP6361351B2 (en) | Method, program and computing system for ranking spoken words | |
US20200134398A1 (en) | Determining intent from multimodal content embedded in a common geometric space | |
CN107608618B (en) | Interaction method and device for wearable equipment and wearable equipment | |
CN114449310A (en) | Video editing method and device, computer equipment and storage medium | |
CN116188250A (en) | Image processing method, device, electronic equipment and storage medium | |
US20200394258A1 (en) | Generation of edited transcription for speech audio | |
CN115134660A (en) | Video editing method and device, computer equipment and storage medium | |
WO2020238498A1 (en) | Question and answer information processing method and system, computer device and storage medium | |
WO2017211202A1 (en) | Method, device, and terminal device for extracting data | |
CN111126084A (en) | Data processing method and device, electronic equipment and storage medium | |
WO2020052060A1 (en) | Method and apparatus for generating correction statement | |
CN109657127A (en) | A kind of answer acquisition methods, device, server and storage medium | |
WO2021097629A1 (en) | Data processing method and apparatus, and electronic device and storage medium | |
CN112822506A (en) | Method and apparatus for analyzing video stream | |
CN107704153A (en) | A kind of methods of exhibiting, device and computer-readable recording medium for reading special efficacy | |
WO2019231635A1 (en) | Method and apparatus for generating digest for broadcasting | |
CN112802454B (en) | Method and device for recommending awakening words, terminal equipment and storage medium | |
CN110428668B (en) | Data extraction method and device, computer system and readable storage medium | |
CN111160001B (en) | Data processing method and device | |
US10678845B2 (en) | Juxtaposing contextually similar cross-generation images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||