
CN110099360A - Voice message processing method and device - Google Patents

Voice message processing method and device

Info

Publication number
CN110099360A
CN110099360A (application CN201810088402.XA)
Authority
CN
China
Prior art keywords
voice message
account
keywords
keyword
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810088402.XA
Other languages
Chinese (zh)
Inventor
张雷 (Zhang Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201810088402.XA
Publication of CN110099360A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72433 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/12 Messaging; Mailboxes; Announcements

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a voice message processing method and apparatus, belonging to the field of communication technology. The method includes: receiving a voice message sending request sent by a first terminal based on a first account, the request carrying a voice message and second account information; obtaining keywords of the voice message; and sending the voice message and its keywords to a second terminal logged in with the second account, so that the second terminal displays a voice message icon and the keywords of the voice message in a specified session interface. The invention enables users to learn the content of a voice message directly and in time through its keywords, avoids missed information, and improves the efficiency of searching for voice messages.

Description

Voice message processing method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for processing a voice message.
Background
With the development of communication technology, voice messages can be conveniently transmitted between terminals, and received voice messages can be displayed to remind users to read. For example, the terminal may send and receive a voice message through an installed application such as an instant messaging application or a social software application, and may present the received voice message in an application interface.
In the related art, a method for processing a voice message is provided, which includes: the first terminal sends a voice message sending request to the server based on the logged first account, wherein the voice message sending request carries the voice message and the second account information. And the server receives the voice sending request and sends the voice message to a second terminal logging in the second account according to the second account information. And after receiving the voice message, the second terminal displays a voice message icon in a specified session interface, wherein the voice message icon is used for indicating the voice message and can trigger the playing of the voice message, and the specified session interface is a session interface comprising the first account and the second account.
In the related art, for a voice message icon displayed on a session interface, a user is required to listen to corresponding voice message content by clicking and triggering playing, but in some occasions such as a conference, playing and listening may not be suitable, and therefore, the user may not be able to know important messages in time. Moreover, for a plurality of read voice messages displayed by the same voice message icon on the session interface, if the user wants to read a specific voice message again, the user needs to listen to the voice messages in sequence to accurately find the specific voice message, and the operation of searching for the voice message is complicated and the efficiency is low.
Disclosure of Invention
The embodiment of the invention provides a voice message processing method and a voice message processing device, which can be used for solving the problems that important messages cannot be obtained in time, the operation of searching voice messages is complicated, the efficiency is low and the like in the related technology. The technical scheme is as follows:
in one aspect, a method for processing a voice message is provided, where the method includes:
receiving a voice message sending request sent by a first terminal based on a logged-in first account, wherein the voice message sending request carries a voice message and second account information and is used for requesting the server to forward the voice message to a second account;
acquiring keywords of the voice message;
and sending the voice message and its keywords, based on the second account information, to a second terminal logged in with the second account, so that the second terminal displays a voice message icon and the keywords of the voice message in a specified session interface, wherein the specified session interface is a session interface comprising the first account and the second account, and the voice message icon is used for triggering playing of the voice message.
In one aspect, a voice message processing apparatus is provided, and is applied in a server, the apparatus includes:
the receiving module is used for receiving a voice message sending request sent by a first terminal based on a logged-in first account, wherein the voice message sending request carries a voice message and second account information;
the acquisition module is used for acquiring keywords of the voice message;
and the sending module is used for sending the voice message and its keywords, based on the second account information, to a second terminal logged in with the second account, so that the second terminal displays a voice message icon and the keywords of the voice message in a specified session interface, wherein the specified session interface is a session interface comprising the first account and the second account, and the voice message icon is used for triggering playing of the voice message.
In one aspect, a voice message processing apparatus is provided, the apparatus including a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the above-mentioned voice message processing method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the above-mentioned voice message processing method.
In one aspect, a method for processing a voice message is provided, where the method is applied to a second terminal, and the method includes:
receiving, based on a logged-in second account, a voice message and its keywords sent by a server, wherein the voice message is sent to the server by a first terminal logged in with a first account and instructs the server to forward it to the second account, and the keywords of the voice message are obtained by the server based on the voice message;
displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface comprises the first account and the second account, and the voice message icon is used for triggering the voice message to be played.
In one aspect, an apparatus for processing a voice message is provided, where the apparatus is applied in a second terminal, and the apparatus includes:
the receiving module is used for receiving, based on a logged-in second account, a voice message and its keywords sent by a server, wherein the voice message is sent to the server by a first terminal logged in with a first account and instructs the server to forward it to the second account, and the keywords of the voice message are obtained by the server based on the voice message;
the display module is used for displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface comprises the first account and the second account, and the voice message icon is used for triggering the voice message to be played.
In one aspect, a voice message processing apparatus is provided, the apparatus including a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the above-mentioned voice message processing method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the above-mentioned voice message processing method.
In one aspect, a method for processing a voice message is provided, where the method is applied to a second terminal, and the method includes:
receiving a voice message sent by a server based on a logged-in second account, wherein the voice message is sent to the server by a first terminal logged in with a first account and instructs the server to forward it to the second account;
acquiring keywords of the voice message;
displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface comprises the first account and the second account, and the voice message icon is used for triggering the voice message to be played.
In one aspect, an apparatus for processing a voice message is provided, where the apparatus is applied in a second terminal, and the apparatus includes:
the receiving module is used for receiving a voice message sent by a server based on a logged-in second account, wherein the voice message is sent to the server by a first terminal logged in with a first account and instructs the server to forward it to the second account;
the acquisition module is used for acquiring keywords of the voice message;
the display module is used for displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface comprises the first account and the second account, and the voice message icon is used for triggering the voice message to be played.
In one aspect, a voice message processing apparatus is provided, the apparatus including a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the above-mentioned voice message processing method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the above-mentioned voice message processing method.
In one aspect, a method for processing a voice message is provided, where the method is applied to a second terminal, and the method includes:
receiving a voice message sent by a first terminal;
acquiring keywords of the voice message;
displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface refers to a session interface where the first terminal and the second terminal are located, and the voice message icon is used for triggering playing of the voice message.
In one aspect, an apparatus for processing a voice message is provided, where the apparatus is applied in a second terminal, and the apparatus includes:
the receiving module is used for receiving the voice message sent by the first terminal;
the acquisition module is used for acquiring keywords of the voice message;
and the display module is used for displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface refers to a session interface where the first terminal and the second terminal are located, and the voice message icon is used for triggering the voice message to be played.
In one aspect, a voice message processing apparatus is provided, the apparatus including a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the above-mentioned voice message processing method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the above-mentioned voice message processing method.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, after receiving the voice message sent by the first terminal, the server can firstly acquire the keyword of the voice message, and then send the voice message and the keyword of the voice message to the second terminal together, so that the second terminal can display the keyword of the voice message when displaying the icon of the voice message on the conversation interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a specific voice message can be quickly searched from a plurality of voice messages through the displayed keywords, and the searching efficiency of the voice message is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1A is a schematic diagram of a voice message processing system according to an embodiment of the present invention;
FIG. 1B is a schematic diagram of another voice message processing system provided by an embodiment of the present invention;
fig. 1C is a flowchart of a voice message processing method according to an embodiment of the present invention;
fig. 1D is a schematic diagram of a session interface of a first terminal according to an embodiment of the present invention;
FIG. 1E is a schematic diagram of another voice message processing system provided by an embodiment of the present invention;
fig. 1F is a schematic diagram of a session interface of a second terminal according to an embodiment of the present invention;
fig. 1G is a schematic diagram of a general setting menu of an instant messaging application according to an embodiment of the present invention;
fig. 1H is a flowchart illustrating a voice message processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of another voice message processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of another voice message processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another message processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another message processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a server 700 according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal 800 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Before explaining the embodiments of the present invention in detail, an application scenario of the embodiments of the present invention will be described.
The embodiment of the invention is applied to the scene in which the terminal displays the received voice message, and is particularly suitable for the scene in which a quiet state needs to be kept, such as a conference, or the scene in which people are noisy, or the scene in which the voice message needs to be searched, and the like, and certainly can also be suitable for other scenes in which the voice message needs to be displayed, and the embodiment of the invention does not limit the scene.
Scenes such as conferences requiring silence
For a voice message received by a terminal, a user generally needs to click the voice message icon displayed on the terminal's session interface to trigger playback of the corresponding voice message, and then hold the terminal to the ear to listen. However, on some occasions where silence needs to be kept, such as conferences or movie theaters, it is inconvenient for users to play voice messages, so messages, especially important ones, may not be learned in time and may be missed. In the embodiment of the invention, when the user is in a conference, a movie theater, or another place that needs to be kept quiet, the terminal can be set to display the keywords of a voice message near the voice message icon when displaying that icon, so that the user can learn the approximate meaning of the voice message in time through the displayed keywords without clicking and playing the voice message, avoiding missing important messages.
Noisy scene of people
In noisy, crowded places such as a busy street or a concert, even if a user triggers the terminal to play a voice message, the user may not hear the specific message content, so the message may not be learned in time and important messages may be missed. In the embodiment of the invention, when the user is in a noisy, crowded place, the terminal can be set to display the keywords of a voice message near the voice message icon when displaying that icon, so that even without clearly hearing the playback, the user can learn the approximate meaning of the voice message in time through the displayed keywords, avoiding missing important messages.
Finding scenes of voice messages
For a number of voice messages the user has already listened to, if the user wants to listen to a specific one again, the user cannot tell which icon corresponds to it because all voice messages are displayed with the same kind of voice message icon, and therefore has to listen to the messages one by one to locate the specific voice message. In the embodiment of the invention, when the terminal displays a voice message icon, the keywords of the voice message can be displayed near the icon, so that the user can quickly find the message to be replayed directly through the displayed keywords without listening in sequence, which improves the efficiency of searching for voice messages.
Or, when the terminal receives a large number of voice messages, because the voice messages are all displayed through the same voice message icon, the user cannot distinguish which voice messages are important messages and which voice messages are unimportant messages, so that the user needs to listen to each voice message from beginning to end to screen out the required message content. In the embodiment of the invention, when the terminal displays the voice message icon, the keyword of the voice message can be displayed near the voice message icon, so that the user can quickly know which voice messages are important messages which need to be concerned by the user and which voice messages are unimportant messages which do not need to be concerned by the user through the displayed keyword, and then preferentially listen to the important messages.
The system architecture of the embodiments of the present invention is described next.
Fig. 1A is a schematic diagram of a voice message processing system according to an embodiment of the present invention, and as shown in fig. 1A, the system includes a first terminal 10, a server 20, and a second terminal 30. The first terminal 10 and the server 20 may be connected via a network, and the server 20 and the second terminal 30 may also be connected via a network.
The first terminal 10 is a sending end of a voice message, and the second terminal 30 is a receiving end of the voice message. In practical implementation, in a communication system, a server is generally required to support the transmission of messages among a plurality of terminals, so when the first terminal 10 is to send a voice message to the second terminal 30, forwarding is generally required to be performed through the server 20. That is, the first terminal 10 transmits the voice message to the server 20, and the server 20 forwards the voice message to the second terminal 30.
Further, in order to distinguish the user, the terminal generally transmits or receives a voice message based on the account to which the user logs in. For convenience of description, in the embodiment of the present invention, the account registered by the first terminal 10 is referred to as a first account, and the account registered by the second terminal 30 is referred to as a second account.
Further, voice messages can be transmitted between the first terminal 10 and the second terminal 30 through the installed message application, and accordingly, the first account and the second account are accounts in which the user logs in the message application. The messaging application refers to an application capable of supporting sending and receiving voice messages, and for example, the messaging application may be an instant messaging application or a social software application.
In the embodiment of the present invention, based on the voice message processing system, the voice message sent from the first terminal 10 to the second terminal 30 can be processed, so that the second terminal 30 displays the keyword of the voice message near the voice message icon when displaying the voice message icon, thereby solving the problems in the related art that the important message cannot be obtained in time, the operation for searching the voice message is complicated, and the efficiency is low.
Specifically, based on the voice message processing system, the following two implementation manners can be adopted to process the voice message:
the first implementation mode comprises the following steps: and extracting the keywords by the server.
The first terminal 10 is configured to send a voice message sending request to the server 20 based on the logged first account, where the voice message sending request carries a voice message and second account information;
a server 20 for receiving a voice message sending request sent by the first terminal 10; acquiring keywords of the voice message; and, based on the second account information, sending the voice message and its keywords to the second terminal 30 logged in with the second account;
a second terminal 30 for receiving the voice message and its keywords sent by the server 20; and displaying a voice message icon and the keywords of the voice message in a specified session interface, wherein the specified session interface refers to a session interface comprising the first account and the second account, and the voice message icon is used for triggering playing of the voice message.
That is, in the first implementation, the server 20 processes the voice message sent by the first terminal 10 to the second terminal 30 to obtain the keyword of the voice message, and sends the voice message and the keyword of the voice message to the second terminal 30 together, so that the second terminal 30 displays the keyword of the voice message in the vicinity of the voice message icon while displaying the voice message icon.
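To make the flow concrete, below is a minimal Python sketch of this first implementation; every name in it (VoiceSendRequest, extract_keywords, forward, and so on) is an illustrative assumption rather than anything specified by the patent.

```python
# A minimal sketch of the first implementation: the server transcribes the
# voice message, extracts keywords, and forwards both to the second terminal.
from dataclasses import dataclass


@dataclass
class VoiceSendRequest:
    sender_account: str    # first account, logged in on the first terminal
    receiver_info: str     # second account information (account ID or group ID)
    voice_payload: bytes   # the recorded voice message


def handle_voice_send_request(request, speech_to_text, extract_keywords, forward):
    """Server-side handling of a voice message sending request."""
    text = speech_to_text(request.voice_payload)   # voice -> text
    keywords = extract_keywords(text)              # text -> keywords
    # Forward the original voice message together with its keywords to the
    # terminal(s) logged in with the second account.
    forward(request.receiver_info, request.voice_payload, keywords)
    return keywords


# Example wiring with stand-in components:
handle_voice_send_request(
    VoiceSendRequest("first_account", "second_account", b"...audio..."),
    speech_to_text=lambda audio: "go to a dining bar together at night",
    extract_keywords=lambda text: ["eat", "night"],
    forward=lambda acct, voice, kws: print(f"forward to {acct}: {kws}"),
)
```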
The second implementation mode comprises the following steps: and extracting the keywords by the second terminal.
The first terminal 10 is configured to send a voice message sending request to the server 20 based on the logged first account, where the voice message sending request carries a voice message and second account information;
a server 20 for receiving a voice message sending request sent by the first terminal 10; and sending the voice message to the second terminal 30 logged in with the second account based on the second account information;
a second terminal 30, configured to receive the voice message sent by the server 20; acquire keywords of the voice message; and display a voice message icon and the keywords of the voice message in a specified session interface, wherein the specified session interface refers to a session interface comprising the first account and the second account, and the voice message icon is used for triggering playing of the voice message.
That is, in the second implementation manner, the server does not process the voice message sent by the first terminal 10 to the second terminal 30, directly forwards the voice message to the second terminal 30, and the second terminal 30 processes the voice message to obtain the keyword of the voice message, and then displays the keyword of the voice message near the voice message icon when the voice message icon is displayed.
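For comparison, here is a correspondingly minimal sketch of the second implementation, where extraction runs on the receiving terminal; again, all names are illustrative assumptions.

```python
# A minimal sketch of the second implementation: the server forwards the voice
# message unchanged, and the receiving (second) terminal extracts the keywords
# locally before drawing the icon.
def on_voice_message_received(voice_payload, speech_to_text, extract_keywords, render):
    text = speech_to_text(voice_payload)     # transcription done on the terminal
    keywords = extract_keywords(text)        # keyword extraction done on the terminal
    # Display the voice message icon together with its keywords in the
    # specified session interface.
    render(voice_payload, keywords)


on_voice_message_received(
    b"...audio...",
    speech_to_text=lambda audio: "meeting moved to three o'clock",
    extract_keywords=lambda text: ["meeting", "three o'clock"],
    render=lambda voice, kws: print("show icon with keywords:", kws),
)
```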
It should be noted that, the above two implementation manners are only described by taking the case that the terminal needs to forward the voice message through the server as an example, and in another embodiment, the voice message may also be directly transmitted between the terminals without being forwarded through the server, that is, the first terminal 10 may directly send the voice message to the second terminal 30.
Fig. 1B is a schematic diagram of another voice message processing system according to an embodiment of the present invention. As shown in fig. 1B, the system includes a first terminal 10 and a second terminal 30, and the first terminal 10 and the second terminal 30 may communicate through a network, for example, over a wireless local area network (WLAN) such as Wi-Fi (Wireless Fidelity), or over Bluetooth.
A first terminal 10 for transmitting a voice message to a second terminal 30.
A second terminal 30 for receiving the voice message sent by the first terminal 10; acquiring a keyword of the voice message; and displaying a voice message icon and a keyword of the voice message in a specified session interface, wherein the specified session interface refers to the session interface where the first terminal 10 and the second terminal 30 are located, and the voice message icon is used for triggering the voice message to be played.
Next, a voice message processing method provided in an embodiment of the present invention is described in detail.
Fig. 1C is a flowchart of a voice message processing method according to an embodiment of the present invention, where the method is applied to the voice message processing system shown in fig. 1A. Referring to fig. 1C, the method includes:
step 101: the first terminal sends a voice message sending request to the server based on the logged first account, wherein the voice message sending request carries the voice message and the second account information.
The voice message sending request is used for requesting the server to forward the voice message to the second account. The first account refers to an account which is logged in by the first terminal and used for sending the voice message, and the second account refers to an account which is logged in by the second terminal and used for receiving the voice message.
The second account information is used for indicating a second account, and the second account refers to a receiving account of the voice message. Specifically, the second account information may be an identifier of the second account, or an identifier of a group in which the first account is located. The identifier of the second account may be a name or an ID (identification number) of the second account, and the identifier of the group may be a name or an ID of the group.
For example, when the first account sends a voice message in a single-person conversation, the second account information may be an identification of the account of the recipient in the single-person conversation, i.e., an identification of the second account. When the first account sends a voice message in a group session, the second account information may be an identifier of a group where the first account is located, so as to indicate the second account through the identifier of the group, and correspondingly, the second account refers to other accounts in group members of the group except for the first account.
The first terminal may send a voice message sending request to the server when a voice sending instruction is detected based on the logged-in first account. The voice sending instruction can be triggered by the user through a specified operation, which may be clicking a voice sending option in a session interface of the first terminal, or pressing and holding a recording option to record voice and then releasing it, and the like. The recording option is used for recording the user's voice and triggering sending of the recorded voice message. In practical applications, the recording option may be named a "hold-and-talk" option or something else, which is not limited in the embodiment of the present invention.
Further, the first terminal may also send a voice message sending request to the server based on the installed message application, and accordingly, the first account refers to an account that the user logs in the message application. The messaging application refers to an application capable of supporting sending and receiving voice messages, and may be, for example, an instant messaging application or a social software application.
Taking as an example a first terminal that has an instant messaging application installed and is logged in to the first account in that application, referring to fig. 1D, the first terminal may enter a session interface between the first account and the second account as shown in fig. 1D according to the user's operation in the instant messaging application. The session interface can be used for sending and receiving voice messages or text messages. When the session interface is switched to sending and receiving voice messages, a "press-and-talk" option can be displayed at the bottom of the interface. When the user wants to send a voice message, the user can press and hold the "press-and-talk" option and speak into the microphone; after finishing speaking, the user releases the option to trigger the first terminal to send a voice message sending request to the server, so that the voice message is forwarded to the second account through the server.
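A hypothetical shape for such a request is sketched below; the field names and the JSON/base64 encoding are assumptions for illustration only, since the patent does not fix a wire format.

```python
# Hypothetical payload of the voice message sending request described in step 101.
import base64
import json


def build_voice_send_request(first_account, second_account_info, voice_bytes):
    return json.dumps({
        "sender": first_account,          # logged-in first account
        "receiver": second_account_info,  # second account ID or group ID
        "voice": base64.b64encode(voice_bytes).decode("ascii"),
    })


payload = build_voice_send_request("first_account", "second_account", b"\x00\x01")
print(payload)
```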
Step 102: the server receives a voice message sending request sent by the first terminal and acquires keywords of the voice message.
After receiving a voice message sending request sent by the first terminal, the server can acquire the voice message and the second account information carried by the voice sending request, and then process the voice message to obtain the keyword of the voice message.
Wherein, the keyword of the voice message is used for describing the voice message in a summary way, and can indicate the key content of the voice message. For example, if the voice message is "go to a dining bar together at night", the keyword of the voice message may be "eat at night".
Specifically, acquiring the keywords of the voice message includes: converting the voice message into text; and extracting keywords from the text to obtain the keywords of the voice message.
Converting the voice message into text means converting the message content in the form of voice into the message content in the form of text. Specifically, the voice message may be converted into text through a preset voice conversion model, which refers to a model trained in advance and capable of converting voice into text. For example, the voice message may be used as an input of the voice conversion model, and the voice message may be processed by the voice conversion model and output to obtain a corresponding text.
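As a sketch, the speech-to-text step could be wrapped behind a simple interface like the following; the VoiceConversionModel class is a placeholder for the pre-trained model described above, not a real library API.

```python
# Placeholder wrapper around a pre-trained voice conversion (speech-to-text) model.
class VoiceConversionModel:
    def __init__(self, loaded_model):
        self._model = loaded_model   # model trained in advance, loaded elsewhere

    def transcribe(self, voice_payload: bytes) -> str:
        # Input: the raw voice message; output: text with the same content.
        return self._model(voice_payload)


# Example with a stand-in model function:
model = VoiceConversionModel(lambda audio: "go to a dining bar together at night")
text = model.transcribe(b"...audio bytes...")
```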
In the embodiment of the invention, the preset keyword extraction strategy can be adopted to extract the keywords of the text to obtain the keywords of the voice message. The keyword extraction strategy can be a keyword extraction strategy based on a training set, or a keyword extraction strategy without a training set.
The keyword extraction strategy based on the training set refers to that keyword extraction is regarded as a classification problem, words appearing in a text are divided into keyword categories or non-keyword categories, and then a plurality of words are selected from the words belonging to the keyword categories to serve as keywords. For example, the text may be subjected to word segmentation processing to obtain a plurality of segmented words, then the plurality of segmented words are classified by specifying a classification model, it is determined whether each word in the plurality of segmented words belongs to a keyword category or a non-keyword category, and the segmented words belonging to the keyword category in the plurality of segmented words are determined as the keywords of the voice message. The specified classification model can be obtained by training in advance according to a plurality of keywords and a plurality of non-keywords.
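A toy sketch of this training-set-based strategy, with a whitespace segmenter and a lookup-table "classifier" standing in for the pre-trained segmentation and classification components:

```python
# Training-set-based strategy: segment the text, classify each token as
# keyword / non-keyword, and keep the keyword-class tokens.
def extract_keywords_by_classifier(text, segment, is_keyword):
    tokens = segment(text)                        # word segmentation
    return [t for t in tokens if is_keyword(t)]   # keep keyword-class tokens


# Toy example (whitespace segmentation, lookup-table "classifier"):
keyword_vocab = {"dinner", "tonight", "meeting"}
kws = extract_keywords_by_classifier(
    "dinner together tonight",
    segment=str.split,
    is_keyword=lambda t: t in keyword_vocab,
)
# kws == ["dinner", "tonight"]
```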
Keyword extraction strategies that do not require a training set may include: statistics-based algorithms, such as frequency statistics; algorithms based on word co-occurrence graphs, such as KeyGraph; algorithms based on word networks, such as a word-network keyword extraction algorithm based on a node-importance measure; and SWN (Small World Network) based algorithms, and the like.
The statistics-based algorithm counts the frequency of occurrence of each word in the text and selects the words whose frequency exceeds a certain threshold as keywords. The word co-occurrence graph algorithm maps the words of the text and their semantic relations to a word co-occurrence graph that may include n vertices, computes a key value for each vertex from the graph, selects the vertices whose key value exceeds a certain threshold (or the first m vertices sorted by key value), and determines the words corresponding to the selected vertices as keywords, where the key value represents the importance of the corresponding vertex. The word-network-based algorithm maps the words of the text to vertices and the semantics of the text to edges, builds an undirected word network with n vertices, quantifies the importance of each vertex using a vertex-importance measure, selects several important vertices from the n vertices, and takes the corresponding words as keywords. The SWN-based algorithm maps the words of the text and their semantic relations to a document structure graph, extracts from the words of the text those that play a key role in the small-world features of the graph, and takes the extracted words as keywords.
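A minimal sketch of the frequency-statistics strategy described above; stop-word handling and real Chinese word segmentation are omitted for brevity.

```python
# Statistics-based strategy: keep words whose frequency reaches a threshold.
from collections import Counter


def extract_keywords_by_frequency(tokens, threshold=2):
    counts = Counter(tokens)
    return [word for word, n in counts.items() if n >= threshold]


kws = extract_keywords_by_frequency(
    ["meeting", "tonight", "meeting", "room", "meeting", "tonight"], threshold=2)
# kws == ["meeting", "tonight"]
```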
In one embodiment, performing keyword extraction on the voice message, and obtaining the keyword of the voice message may include: and extracting keywords of the text by performing word segmentation and semantic analysis on the text, and then determining the extracted keywords as the keywords of the voice message.
Further, since the subject, predicate, and object of a text can generally indicate its key content and summarize it, keyword extraction can be simplified by extracting subject-predicate-object keywords from the text and using them as the keywords of the voice message.
Specifically, extracting keywords from the text to obtain the keywords of the voice message may further include: extracting the subject-predicate-object keywords of the text by performing word segmentation and semantic analysis on the text; and determining the extracted subject-predicate-object keywords as the keywords of the voice message.
Extracting the subject-predicate-object keywords by performing word segmentation and semantic analysis on the text may include: performing word segmentation and semantic analysis on the text to obtain an analysis result, where the analysis result includes a part-of-speech label for each word in the text indicating its part of speech, such as noun, pronoun, or verb; and determining the subject-predicate-object keywords of the text according to the analysis result.
The subject-predicate-object keywords refer to the subject, predicate, and object contained in the text, and may specifically include at least one of the subject, the predicate, and the object. It will be understood by those skilled in the art that when the text includes only a subject and a predicate and no object, the subject-predicate-object keywords refer to the subject and predicate of the text; when the text includes only a subject and an object and no predicate, they refer to the subject and object of the text; and so on. Specifically, the text may be segmented into a plurality of words, semantic analysis may then be performed on the text to extract the subject-predicate-object keywords from those words, and the extracted keywords are determined as the keywords of the voice message.
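A sketch of this subject-predicate-object extraction, assuming the word segmentation and semantic analysis step returns (word, role) pairs; the analysis-result format and the role labels are assumptions for illustration.

```python
# Keep only the words whose grammatical role is subject, predicate, or object.
def extract_spo_keywords(analysis_result):
    wanted_roles = {"subject", "predicate", "object"}
    return [word for word, role in analysis_result if role in wanted_roles]


# Example analysis result for "we eat dinner tonight" (roles are illustrative):
analysis = [("we", "subject"), ("eat", "predicate"),
            ("dinner", "object"), ("tonight", "adverbial")]
print(extract_spo_keywords(analysis))   # ['we', 'eat', 'dinner']
```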
Further, in order to reduce the processing load of the server, the server may also send the voice message to a dedicated language processing server, which processes the voice message to obtain its keywords and returns them to the server. Alternatively, the server first converts the voice message into text and sends the text to the dedicated language processing server, which extracts keywords from the text and returns the keywords of the voice message to the server. Alternatively, the server first converts the voice message into text and sends the text to the dedicated language processing server, which performs word segmentation and semantic analysis on the text and returns the analysis result to the server, and the server determines the keywords of the voice message according to the analysis result.
In practical application, the language processing server can provide language cloud services for a third party, and the language cloud services are cloud services which are based on a language technology platform and can provide efficient and accurate Chinese natural language processing for users.
For example, fig. 1E is a schematic diagram of another voice message processing system provided by an embodiment of the present invention, which may include a first terminal 10, a server 20, a second terminal 30, and a language cloud service 40 provided by a third party. After receiving the voice message sent by the first terminal 10, the server 20 may convert the voice message into a text, send the text to the language cloud service 40, perform word segmentation and semantic analysis on the text by the language cloud service 40 to obtain an analysis result, and then return the analysis result to the server 20. After receiving the analysis result, the server 20 may determine a keyword of the voice message according to the analysis result and then transmit the voice message and the keyword of the voice message to the second terminal 30 together.
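A sketch of this offloading step as a plain HTTP call; the endpoint URL and response shape are assumptions, since the patent only states that the server sends out the text and receives the analysis result (or keywords) back.

```python
# Send the transcribed text to an external language processing service and
# return its analysis result. Endpoint and response format are hypothetical.
import requests


def analyze_text_remotely(text, endpoint="https://nlp.example.com/analyze"):
    resp = requests.post(endpoint, json={"text": text}, timeout=5)
    resp.raise_for_status()
    return resp.json()   # e.g. {"words": [...], "pos_tags": [...]}
```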
Further, before acquiring the keywords of the voice message, the keyword configuration information of the second account may also be acquired, where the keyword configuration information is used to indicate whether to allow keyword extraction on the voice message to be forwarded to the second account; and when determining that the keyword configuration information of the second account indicates that keyword extraction is allowed to be performed on the voice message to be forwarded to the second account, executing a step of acquiring keywords of the voice message.
In addition, when it is determined that the keyword configuration information of the second account indicates that keyword extraction is not allowed for the voice message to be forwarded to the second account, the server may also not obtain the keyword of the voice message, and only sends the voice message to the second terminal logging in the second account.
By acquiring the keyword configuration information of the second account and then judging whether keyword extraction needs to be performed on the voice message to be forwarded to the second account according to the keyword configuration information, whether the keyword extraction needs to be performed on the voice message can be flexibly controlled by using the configuration information of the second account, so that the flexibility of voice message processing can be improved, and the processing load of a server can be reduced.
The keyword configuration information of the second account may include a keyword configuration flag, and the keyword configuration flag may be a first configuration flag or a second configuration flag. The first configuration mark is used for indicating that keyword extraction is allowed to be carried out on the voice message to be forwarded to the second account, and the second configuration mark is used for indicating that keyword extraction is not allowed to be carried out on the voice message to be forwarded to the second account. Illustratively, the first configuration flag is 0 and the second configuration flag is 1; alternatively, the first configuration flag is 1 and the second configuration flag is 0.
The keyword configuration information of the second account may be sent to the server by the second terminal logged in with the second account. After receiving it, the server may first store the keyword configuration information of the second account in a database; when a voice message to be sent to the second account is received, the server reads the keyword configuration information of the second account from the database and determines, according to it, whether keyword extraction needs to be performed on the voice message.
In practical application, the server may store the keyword configuration information of each account in the database, and may update the keyword information of the account according to a keyword configuration information update message sent by any account. For example, the server may store a setting table (setting.db) of each account in a database, where the setting table includes keyword configuration information of the corresponding account. In practical application, the server may update the keyword configuration information stored in the setting table of any account according to the keyword configuration information update message sent by the account.
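A sketch of such a per-account setting table, with sqlite3 standing in for whatever database the server actually uses; the schema and flag values (1 = extraction allowed) are illustrative assumptions.

```python
# Per-account keyword configuration store ("setting table") and lookup.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setting (account TEXT PRIMARY KEY, keyword_flag INTEGER)")


def update_keyword_flag(account, allow_extraction):
    # Called when a keyword configuration information update message arrives.
    conn.execute("INSERT OR REPLACE INTO setting VALUES (?, ?)",
                 (account, 1 if allow_extraction else 0))


def extraction_allowed(account):
    # Called when a voice message to be forwarded to this account is received.
    row = conn.execute("SELECT keyword_flag FROM setting WHERE account = ?",
                       (account,)).fetchone()
    return bool(row and row[0] == 1)


update_keyword_flag("second_account", True)
print(extraction_allowed("second_account"))   # True
```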
The keyword configuration information of the second account may be set by the second terminal, and may be set according to an operation of the user at the second terminal, and the specific setting method refers to the following related description in step 104, which is not described in detail herein.
Step 103: and the server sends the voice message and the keywords of the voice message to a second terminal logged in with the second account based on the second account information.
The server may determine, according to the second account information, a second account indicated by the second account information, so as to use the second account as a receiving account of a voice message, and send the voice message and a keyword of the voice message together to a second terminal logging in the second account.
For example, when the second account information is an identifier of the group in which the first account is located, the accounts in the group other than the first account may be determined as the second account, and the voice message and its keywords may be sent to each group member other than the first account.
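A sketch of this fan-out, with the group membership lookup as a stand-in:

```python
# Fan-out for step 103: if the second account information is a group ID, send
# the voice message and keywords to every group member except the sender.
def fan_out(second_account_info, sender, voice, keywords, group_members, send):
    if second_account_info in group_members:                  # group identifier
        receivers = [m for m in group_members[second_account_info] if m != sender]
    else:                                                     # single account ID
        receivers = [second_account_info]
    for account in receivers:
        send(account, voice, keywords)


groups = {"project_group": ["first_account", "second_account", "third_account"]}
fan_out("project_group", "first_account", b"...", ["eat", "night"],
        groups, send=lambda a, v, k: print(a, k))
```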
Step 104: and the second terminal receives the voice message sent by the server and the keyword of the voice message, and displays a voice message icon and the keyword of the voice message in a specified session interface.
The designated session interface refers to a session interface including a first account and a second account, and specifically may be a single-person session interface between the first account and the second account, or may be a group session interface where the first account and the second account are located.
The voice message icon is used for indicating the voice message and can be used for triggering the voice message to be played. Specifically, the voice message icon may be in the form of a message frame or a message bubble, and the display form of the voice message icon is not limited in the embodiment of the present invention.
Further, the duration of the voice message may also be displayed after the voice message icon. For example, the voice message icon may be the voice message box shown in fig. 1F, and the duration of the corresponding voice message is also displayed after the voice message box. In addition, the voice message indicated by the voice message icon may be a read voice message or an unread voice message, that is, a keyword of the read voice message may be displayed, and a keyword of the unread voice message may also be displayed.
Further, the voice message icon and the keywords of the voice message may be displayed correspondingly in the specified session interface. That is, the icon and the keywords of each received voice message may be displayed together in the session interface, so that the user can intuitively see which voice message each displayed keyword belongs to.
Specifically, the voice message icon and the keyword of the voice message may be correspondingly displayed by displaying the keyword of the voice message in a designated area of the voice message icon. The designated area of the voice message icon may be preset, and specifically may be a nearby area that is not far away from the voice message icon. For example, the designated area may be above the voice message icon, below the voice message icon, behind the voice message icon, or on the voice message icon (the surface of the voice icon), and the like, which is not limited in the embodiment of the present invention. For example, referring to fig. 1F, a keyword of a corresponding voice message may be displayed on each voice message icon.
Further, the keywords of the voice message may also be scroll-displayed in the designated area of the voice message icon, specifically, the keywords may be scroll-displayed from left to right or scroll-displayed from right to left in the designated area. For example, the keywords of the voice message may be scroll displayed from left to right or from right to left on the voice message icon. Of course, other forms may be used for displaying according to actual needs, and the embodiment of the present invention is not limited thereto.
Further, after displaying the keyword of the voice message, when an instruction to cancel the display of the keyword is received based on the voice message icon, the display of the keyword of the voice message may also be stopped.
Wherein stopping displaying the keywords of the voice message may include deleting the keywords of the voice message or hiding the keywords of the voice message. The instruction for canceling the display of the keyword may be triggered by a user through a specified operation, where the specified operation may be an operation for turning off a keyword display switch of a single voice message, or an operation for turning off a keyword display switch of all voice messages.
For example, a first keyword display switch may be provided in a designated area of the voice message icon, and the user may trigger an instruction to cancel the displayed keyword by turning off the first keyword display switch. For example, a first keyword display switch may be displayed behind the voice message icon.
In the embodiment of the present invention, a corresponding first keyword display switch may be provided in the designated area of each voice message icon, and the first keyword display switch is used to control the display and hiding of the keywords of that voice message. When the first keyword display switch of a voice message icon is turned on, it indicates that the keywords of the corresponding voice message are allowed to be displayed; when it is turned off, it indicates that the keywords of the corresponding voice message are not allowed to be displayed.
Further, before displaying the keyword of the voice message, it may be determined whether the first keyword display switch corresponding to the voice message icon is turned on, when turned on, the keyword of the voice message is displayed, and when not turned on, the keyword of the voice message is not displayed.
For another example, a second keyword display switch may be provided in the specified session interface or in a setting menu of the specified session interface, where the second keyword display switch is used to control the display and hiding of the keywords of the voice messages corresponding to all voice message icons on the specified session interface. When the second keyword display switch is turned on, the keywords of the voice messages corresponding to all voice message icons are allowed to be displayed; when it is turned off, those keywords are not allowed to be displayed.
Further, before displaying the keyword of the voice message, it may be determined whether a second keyword display switch of the specified session interface is turned on, and when turned on, the keyword of the voice message is displayed, and when not turned on, the keyword of the voice message is not displayed.
Further, after displaying the keyword of the voice message, when a play instruction is received based on the voice message icon, the play instruction may not be responded, that is, the voice message is not played. In this way, when the keywords of the voice message are displayed, the voice message icon can be prohibited from triggering the voice message to be played, so that the display mode of the voice message is more suitable for the scene needing to be kept quiet. Of course, when the keyword of the voice message is displayed, the voice message icon may not be prohibited from triggering the voice message to be played.
Further, after displaying the keyword of the voice message, the display duration of the keyword of the voice message may also be determined, and when the display duration is greater than or equal to a preset duration, the displaying of the keyword of the voice message is stopped. The preset duration can be preset, can be set by a terminal in a default mode, and can also be set by a user.
Further, a voice keyword extraction switch may be provided in the specified session interface or in a setting menu of the specified session interface, where the voice keyword extraction switch is used to set keyword configuration information of the second account. For example, when the second terminal detects that the voice keyword extraction switch is turned on, first configuration information may be sent to the server, where the first configuration information carries the second account information and is used to indicate that a keyword configuration flag in the keyword configuration information of the second account is updated to be the first configuration flag, so as to indicate that keyword extraction is allowed to be performed on the voice message to be forwarded to the second account. When the second terminal detects that the voice keyword extraction switch is turned off, second configuration information can be sent to the server, wherein the second configuration information carries the second account information and is used for indicating that a keyword configuration mark in the keyword configuration information of the second account is updated to be a second configuration mark so as to indicate that keyword extraction is not allowed to be performed on the voice message to be forwarded to the second account.
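A sketch of what the second terminal might send when this switch is toggled; the message field names and flag values are assumptions, since the patent only fixes the two-flag semantics (first configuration flag = extraction allowed, second configuration flag = extraction not allowed).

```python
# Send a keyword configuration update when the voice keyword extraction switch
# is toggled on the second terminal.
def on_keyword_switch_toggled(switch_on, second_account, send_to_server):
    config_update = {
        "account": second_account,
        # 1 stands in for the first configuration flag (extraction allowed),
        # 0 for the second configuration flag (extraction not allowed).
        "keyword_flag": 1 if switch_on else 0,
    }
    send_to_server(config_update)


on_keyword_switch_toggled(True, "second_account",
                          send_to_server=lambda msg: print("config update:", msg))
```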
When the specified session interface is used as a session interface of the instant messaging application, as shown in fig. 1G, a voice keyword extraction switch may be added to a general setting menu of the instant messaging application, where the voice keyword extraction switch is used to set keyword configuration information of a second account registered to the instant messaging application.
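The following sketch illustrates, purely as an assumption, how the second terminal might report the state of the voice keyword extraction switch to the server as first or second configuration information; the endpoint URL, field names, and flag values are hypothetical and are not specified above.

```python
import json
import urllib.request

FIRST_CONFIG_FLAG = 1    # keyword extraction allowed for messages to this account (assumed value)
SECOND_CONFIG_FLAG = 2   # keyword extraction not allowed (assumed value)

def report_extraction_switch(server_url, second_account, switch_on):
    # Build the configuration information carrying the second account information
    # and the keyword configuration flag, and post it to the server.
    flag = FIRST_CONFIG_FLAG if switch_on else SECOND_CONFIG_FLAG
    payload = json.dumps({"account": second_account,
                          "keyword_config_flag": flag}).encode("utf-8")
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200

# Example: the user turns the switch on in the settings menu.
# report_extraction_switch("https://example.com/keyword-config", "account_b", True)
```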
Fig. 1H is a schematic flow chart of a voice message processing method according to an embodiment of the present invention. It is assumed that a voice message is transmitted between a first terminal and a second terminal through an installed instant messaging application, that the setting menu of the instant messaging application includes a voice keyword extraction switch, and that a first keyword display switch is displayed behind each voice message icon in the session interface of the instant messaging application. As shown in fig. 1H, the implementation flow of the method may include the following steps:
1) The user opens the instant messaging application installed on the second terminal, enters the setting menu of the instant messaging application, and turns on the voice keyword extraction switch in the setting menu.
2) When the second terminal detects that the voice keyword extraction switch is turned on, first configuration information is sent to the server, and the first configuration information carries second account information and is used for indicating the server to update the stored keyword configuration information of the second account according to the first configuration information.
The second account information is used for indicating a second account for logging in the instant messaging application.
Specifically, the server may set a keyword configuration flag in the stored keyword configuration information of the second account to a first configuration flag according to the first configuration information, where the first configuration flag is used to indicate that keyword extraction is allowed to be performed on the voice message to be forwarded to the second account.
3) The server receives the voice message to be forwarded to the second account.
4) The server reads the keyword configuration information of the second account and judges, according to the keyword configuration information, whether keyword extraction is allowed for the voice message to be forwarded to the second account.
5) When the server determines, according to the keyword configuration information, that keyword extraction is allowed for the voice message to be forwarded to the second account, the server acquires the keywords of the voice message.
6) The server sends the voice message and the keywords of the voice message to the second terminal that logs in to the second account.
7) The second terminal receives the voice message and the keywords of the voice message sent by the server, displays a voice message icon in the designated session interface, and displays the keywords of the voice message on the voice message icon.
8) The user views the keywords of the voice message.
9) The user controls whether the keywords of the voice message are displayed by turning on or off the first keyword display switch behind the voice message icon.
In the embodiment of the invention, after receiving the voice message sent by the first terminal, the server can firstly acquire the keyword of the voice message, and then send the voice message and the keyword of the voice message to the second terminal together, so that the second terminal can display the keyword of the voice message when displaying the icon of the voice message on the conversation interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a specific voice message can be quickly searched from a plurality of voice messages through the displayed keywords, and the searching efficiency of the voice message is improved.
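A minimal sketch of the server-side behaviour summarized above — store the keyword configuration flag, check it when a voice message arrives, extract keywords when allowed, and forward the voice message together with its keywords — is given below; the data layout and the speech-recognition and extraction back ends are left abstract and are assumptions, not part of the described method.

```python
FIRST_CONFIG_FLAG = 1    # extraction allowed (assumed value)
SECOND_CONFIG_FLAG = 2   # extraction not allowed (assumed value)

keyword_config = {}      # account -> keyword configuration flag, updated from configuration info

def on_configuration_info(account, flag):
    # Update the stored keyword configuration information of the account.
    keyword_config[account] = flag

def on_voice_message(voice_audio, second_account, deliver, transcribe, extract_keywords):
    # Forward the voice message, attaching keywords only when the stored
    # configuration flag for the second account allows extraction.
    keywords = []
    if keyword_config.get(second_account) == FIRST_CONFIG_FLAG:
        text = transcribe(voice_audio)          # speech-recognition back end (abstract)
        keywords = extract_keywords(text)       # word segmentation + analysis (abstract)
    deliver(second_account, voice_audio, keywords)
```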
Fig. 2 is a flowchart of another voice message processing method according to an embodiment of the present invention, which is applied to the voice message processing system shown in fig. 1A. Referring to fig. 2, the method includes:
step 201: the first terminal sends a voice message sending request to the server based on the logged first account, wherein the voice message sending request carries the voice message and the second account information.
The specific implementation process of step 201 may refer to the related description of step 101 in the embodiment of fig. 1C, and is not described herein again in this embodiment of the present invention.
Step 202: and the server receives a voice message sending request sent by the first terminal, and sends the voice message to a second terminal logging in a second account based on the second account information.
The method by which the server forwards the voice message to the second terminal is basically the same as the forwarding method described in step 103 of the fig. 1C embodiment above. The difference is that, in the fig. 1C embodiment, before sending the voice message to the second terminal, the server also needs to extract the keywords of the voice message and send them together with the voice message to the second terminal, whereas in this step the server only needs to forward the voice message to the second terminal and does not need to extract the keywords of the voice message.
Step 203: and the second terminal receives the voice message sent by the server and acquires the keywords of the voice message.
The implementation of step 203 may refer to the implementation method of step 102 in the above embodiment of fig. 1C, except that the execution subject of step 203 is the second terminal, and the execution subject of step 102 is the server.
Specifically, acquiring the keyword of the voice message may include: converting the voice message into text; extracting keywords of the text by performing word segmentation and semantic analysis on the text; the extracted keyword is determined as a keyword of the voice message.
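As a toy stand-in for this three-step extraction (speech recognition, word segmentation, semantic analysis), the sketch below works on already-transcribed text, uses whitespace splitting in place of real word segmentation, and ranks candidates by frequency after stop-word filtering; a production implementation would rely on proper ASR and NLP components.

```python
from collections import Counter

# Illustrative only: the stop-word list and frequency ranking are crude
# substitutes for real word segmentation and semantic analysis.
STOP_WORDS = {"the", "a", "an", "is", "at", "to", "and", "of", "in", "on", "i", "me"}

def extract_keywords(text, top_k=3):
    tokens = [t.strip(".,!?").lower() for t in text.split()]    # naive segmentation
    candidates = [t for t in tokens if t and t not in STOP_WORDS]
    ranked = Counter(candidates).most_common(top_k)              # crude relevance ranking
    return [word for word, _count in ranked]

# Example call on a transcribed voice message:
# extract_keywords("Meet me at the station at seven, the station near the park")
```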
Further, before acquiring the keywords of the voice message, the keyword configuration information of the second account may also be acquired, where the keyword configuration information is used to indicate whether to allow keyword extraction on the voice message to be forwarded to the second account; and when determining that the keyword configuration information of the second account indicates that keyword extraction is allowed to be performed on the voice message to be forwarded to the second account, executing a step of acquiring keywords of the voice message.
Step 204: the second terminal displays a voice message icon and a keyword of the voice message in a designated session interface.
The designated session interface is a session interface comprising a first account and a second account, and the voice message icon is used for triggering the voice message to be played.
The specific implementation process of step 204 may refer to the related description of step 104 in the embodiment of fig. 1C, and is not described herein again in this embodiment of the present invention.
Specifically, displaying the keywords of the voice message may include: and scrolling and displaying the keywords of the voice message in the designated area of the voice message icon.
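One simple way to realize such scrolling in a fixed-width area is to render successive windows over the keyword string, as in the illustrative generator below; the frame width, separator, and rendering loop are assumptions, and the actual drawing would be done by the terminal's UI toolkit.

```python
# Illustration only: produce marquee-style frames for a fixed-width area on the icon.
def marquee_frames(keywords, width=8, separator=" · "):
    """Yield successive fixed-width windows over the joined keyword string."""
    text = separator.join(keywords) + separator
    doubled = text + text                      # wrap around for seamless scrolling
    for start in range(len(text)):
        yield doubled[start:start + width]

# Example: each frame would be drawn in the designated area of the voice message icon.
# for frame in marquee_frames(["meeting", "3 pm", "room 401"]):
#     print(frame)
```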
Further, after displaying the keyword of the voice message, when an instruction to cancel the display of the keyword is received based on the voice message icon, the display of the keyword of the voice message may also be stopped.
Further, after displaying the keyword of the voice message, the display duration of the keyword of the voice message may also be determined, and when the display duration is greater than or equal to the preset duration, the displaying of the keyword of the voice message is stopped.
In the embodiment of the invention, after receiving the voice message forwarded by the first terminal through the server, the second terminal may first acquire the keyword of the voice message, and then display the voice message icon and the keyword of the voice message on the session interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a specific voice message can be quickly searched from a plurality of voice messages through the displayed keywords, and the searching efficiency of the voice message is improved.
Fig. 3 is a flowchart of another voice message processing method according to an embodiment of the present invention, which is applied to the voice message processing system shown in fig. 1B. Referring to fig. 3, the method includes:
step 301: the first terminal sends a voice message to the second terminal.
The first terminal may send the voice message to the second terminal through a network; for example, the voice message may be sent to the second terminal through a wireless local area network such as a WiFi network, or through Bluetooth.
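For illustration only, a direct transfer over a local network could look like the following TCP sketch, assuming a hypothetical port and a simple length-prefixed framing; a Bluetooth transport would instead use the platform's Bluetooth API.

```python
import socket

PORT = 50007   # assumed port for the illustration

def send_voice_message(peer_ip, audio_bytes):
    # First terminal: push the recorded audio to the second terminal.
    with socket.create_connection((peer_ip, PORT)) as conn:
        conn.sendall(len(audio_bytes).to_bytes(4, "big"))   # length prefix
        conn.sendall(audio_bytes)

def receive_voice_message():
    # Second terminal: accept one incoming voice message.
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            size = int.from_bytes(_recv_exact(conn, 4), "big")
            return _recv_exact(conn, size)

def _recv_exact(conn, n):
    # Read exactly n bytes from the connection.
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf
```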
Step 302: and the second terminal receives the voice message and acquires the keywords of the voice message.
The implementation manner of step 302 may refer to the implementation method of step 102 in the above-described fig. 1C embodiment, except that the execution main body of step 302 is the second terminal, and the execution main body of step 102 is the server.
Specifically, acquiring the keyword of the voice message may include: converting the voice message into text; extracting keywords of the text by performing word segmentation and semantic analysis on the text; the extracted keyword is determined as a keyword of the voice message.
Further, before acquiring the keyword of the voice message, keyword configuration information of the second terminal may also be acquired, where the keyword configuration information is used to indicate whether keyword extraction is allowed for the voice message received by the second terminal; and when determining that the keyword configuration information of the second terminal indicates that keyword extraction of the voice message received by the second terminal is allowed, executing the step of acquiring the keywords of the voice message.
Step 303: the second terminal displays a voice message icon and a keyword of the voice message in a designated session interface.
The designated session interface refers to the session interface in which the first terminal and the second terminal are located, and the voice message icon is used to trigger playing of the voice message.
The specific implementation process of step 303 may refer to the related description of step 104 in the embodiment of fig. 1C, and is not described herein again in this embodiment of the present invention.
Specifically, displaying the keywords of the voice message may include: and scrolling and displaying the keywords of the voice message in the designated area of the voice message icon.
Further, after displaying the keyword of the voice message, when an instruction to cancel the display of the keyword is received based on the voice message icon, the display of the keyword of the voice message may also be stopped.
Further, after displaying the keyword of the voice message, the display duration of the keyword of the voice message may also be determined, and when the display duration is greater than or equal to the preset duration, the displaying of the keyword of the voice message is stopped.
In the embodiment of the invention, after receiving the voice message sent by the first terminal, the second terminal may first acquire the keyword of the voice message, and then display the voice message icon and the keyword of the voice message on the session interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a specific voice message can be quickly searched from a plurality of voice messages through the displayed keywords, and the searching efficiency of the voice message is improved.
Fig. 4 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present invention, which is applied in a server. As shown in fig. 4, the apparatus includes a receiving module 401, an obtaining module 402, and a sending module 403.
A receiving module 401, configured to receive a voice message sending request sent by a first terminal based on a logged-in first account, where the voice message sending request carries a voice message and second account information;
an obtaining module 402, configured to obtain a keyword of the voice message;
a sending module 403, configured to send the voice message and the keywords of the voice message to a second terminal that logs in to the second account based on the second account information, where the second terminal displays a voice message icon and the keywords of the voice message in a specified session interface, the specified session interface is a session interface including the first account and the second account, and the voice message icon is used to trigger playing of the voice message.
Optionally, the obtaining module 402 is specifically configured to:
converting the voice message into text;
extracting keywords of the text by performing word segmentation and semantic analysis on the text;
and determining the extracted keywords as the keywords of the voice message.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring keyword configuration information of the second account, wherein the keyword configuration information is used for indicating whether keyword extraction is allowed to be carried out on the voice message to be forwarded to the second account;
a triggering module, configured to trigger the obtaining module 402 to obtain the keywords of the voice message when it is determined that the keyword configuration information of the second account indicates that keyword extraction is allowed for the voice message to be forwarded to the second account.
In the embodiment of the invention, after receiving the voice message sent by the first terminal, the server can firstly acquire the keyword of the voice message, and then send the voice message and the keyword of the voice message to the second terminal together, so that the second terminal can display the keyword of the voice message when displaying the icon of the voice message on the conversation interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a specific voice message can be quickly searched from a plurality of voice messages through the displayed keywords, and the searching efficiency of the voice message is improved.
Fig. 5 is a schematic structural diagram of another message processing apparatus according to an embodiment of the present invention, which is applied to a second terminal, and as shown in fig. 5, the apparatus includes a receiving module 501 and a display module 502.
A receiving module 501, configured to receive, based on a logged-in second account, a voice message and keywords of the voice message sent by a server, where the voice message is sent to the server by a first terminal that logs in to a first account and instructs the server to forward the voice message to the second account, and the keywords of the voice message are obtained by the server based on the voice message;
a display module 502, configured to display a voice message icon and a keyword of the voice message in a specified session interface, where the specified session interface is a session interface that includes the first account and the second account, and the voice message icon is used to trigger playing of the voice message.
Optionally, the display module 502 is specifically configured to:
and scrolling and displaying the keywords of the voice message in the designated area of the voice message icon.
Optionally, the apparatus further comprises a stop display module, wherein the stop display module is configured to:
stopping displaying the keywords of the voice message when an instruction to cancel displaying the keywords is received based on the voice message icon; or,
and determining the display duration of the keywords of the voice message, and stopping displaying the keywords of the voice message when the display duration is greater than or equal to the preset duration.
In the embodiment of the invention, the second terminal can receive the voice message sent by the server and the keyword of the voice message, and can display the voice message icon and the keyword of the voice message on the conversation interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a specific voice message can be quickly searched from a plurality of voice messages through the displayed keywords, and the searching efficiency of the voice message is improved.
Fig. 6 is a schematic structural diagram of another message processing apparatus according to an embodiment of the present invention, which is applied to a second terminal. As shown in fig. 6, the apparatus includes a receiving module 601, an obtaining module 602, and a display module 603.
A receiving module 601, configured to receive, based on a logged-in second account, a voice message sent by a server, where the voice message is sent to the server by a first terminal that logs in to a first account and instructs the server to forward the voice message to the second account;
an obtaining module 602, configured to obtain a keyword of the voice message;
a display module 603, configured to display a voice message icon and a keyword of the voice message in a specified session interface, where the specified session interface is a session interface that includes the first account and the second account, and the voice message icon is used to trigger playing of the voice message.
Optionally, the obtaining module 602 is specifically configured to:
converting the voice message into text;
extracting keywords of the text by performing word segmentation and semantic analysis on the text;
and determining the extracted keywords as the keywords of the voice message.
Optionally, the display module 603 is specifically configured to:
and scrolling and displaying the keywords of the voice message in the designated area of the voice message icon.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring keyword configuration information of the second account, wherein the keyword configuration information is used for indicating whether keyword extraction is allowed to be carried out on the voice message to be forwarded to the second account;
a triggering module, configured to trigger the obtaining module 602 to obtain the keywords of the voice message when it is determined that the keyword configuration information of the second account indicates that keyword extraction is allowed for the voice message to be forwarded to the second account.
Optionally, the apparatus further comprises a stop display module, wherein the stop display module is configured to:
stopping displaying the keywords of the voice message when an instruction to cancel displaying the keywords is received based on the voice message icon; or,
and determining the display duration of the keywords of the voice message, and stopping displaying the keywords of the voice message when the display duration is greater than or equal to the preset duration.
In the embodiment of the invention, after receiving the voice message forwarded by the first terminal through the server, the second terminal may first acquire the keyword of the voice message, and then display the voice message icon and the keyword of the voice message on the session interface. Therefore, when the user is in a situation that the user is not suitable for listening to the voice message, the user can directly obtain the content of the voice message in time through the displayed keywords, and the problem that the user cannot obtain the message in time is avoided. Moreover, a certain specific voice message can be quickly searched from a plurality of voice messages through the keywords displayed near the voice message icon, and the searching efficiency of the voice message is improved.
It should be noted that: in the voice message processing apparatus provided in the foregoing embodiment, when processing a voice message, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the voice message processing apparatus and the voice message processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Fig. 7 is a schematic structural diagram of a server 700 according to an embodiment of the present invention. The server may be a server in a cluster of background servers. Specifically, the method comprises the following steps:
the server 700 includes a Central Processing Unit (CPU)701, a system memory 704 of a Random Access Memory (RAM)702 and a Read Only Memory (ROM)703, and a system bus 705 connecting the system memory 704 and the central processing unit 701. The server 700 also includes a basic input/output system (I/O system) 706, which facilitates transfer of information between devices within the computer, and a mass storage device 707 for storing an operating system 713, application programs 714, and other program modules 715.
The basic input/output system 706 includes a display 708 for displaying information and an input device 709, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 708 and the input device 709 are connected to the central processing unit 701 through an input output controller 710 connected to the system bus 705. The basic input/output system 706 may also include an input/output controller 710 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 710 may also provide output to a display screen, a printer, or other type of output device.
The mass storage device 707 is connected to the central processing unit 701 through a mass storage controller (not shown) connected to the system bus 705. The mass storage device 707 and its associated computer-readable media provide non-volatile storage for the server 700. That is, the mass storage device 707 may include a computer-readable medium (not shown), such as a hard disk or CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 704 and mass storage device 707 described above may be collectively referred to as memory.
According to various embodiments of the invention, the server 700 may also operate by connecting, through a network such as the Internet, to a remote computer on the network. That is, the server 700 may be connected to the network 712 through a network interface unit 711 connected to the system bus 705, or the network interface unit 711 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU. The one or more programs include instructions for performing the server-executed method of the fig. 1C or fig. 2 embodiment described above.
In another embodiment, a computer-readable storage medium is provided, in which at least one instruction, at least one program, code set, or set of instructions is stored, and the instruction, the program, the code set, or the set of instructions is loaded and executed by a processor to implement the voice message processing method executed by the server in the embodiment of fig. 1C or fig. 2.
Fig. 8 is a schematic structural diagram of a terminal 800 according to an embodiment of the present invention. The terminal 800 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 802 is used to store at least one instruction for execution by the processor 801 to implement the voice message processing method performed by the first terminal or the second terminal in the embodiments of fig. 1C, fig. 2, or fig. 3 described above in this application.
In some embodiments, the terminal 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a touch screen display 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 804 converts an electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 805 may be one, providing the front panel of the terminal 800; in other embodiments, the display 805 may be at least two, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. Even further, the display 805 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 805 can be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 806 is used to capture images or video. Optionally, camera assembly 806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 801 for processing or inputting the electric signals to the radio frequency circuit 804 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 800. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 809 is used to supply power to the various components in the terminal 800. The power supply 809 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the touch screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may cooperate with the acceleration sensor 811 to acquire a 3D motion of the user with respect to the terminal 800. From the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side bezel of terminal 800 and/or underneath touch display 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, the holding signal of the user to the terminal 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 814 is used for collecting a fingerprint of the user, and the processor 801 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for and changing settings, etc. Fingerprint sensor 814 may be disposed on the front, back, or side of terminal 800. When a physical button or a vendor Logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the vendor Logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch screen 805 based on the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also known as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually decreases, the processor 801 controls the touch display 805 to switch from the screen-on state to the screen-off state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the touch display 805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In another embodiment, a computer-readable storage medium is provided, in which at least one instruction, at least one program, code set, or instruction set is stored, and the instruction, the program, the code set, or the instruction set is loaded and executed by a processor to implement the voice message processing method executed by the first terminal or the second terminal in the embodiments of fig. 1C, fig. 2, or fig. 3.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (15)

1. A voice message processing method is applied to a server, and the method comprises the following steps:
receiving a voice message sending request sent by a first terminal based on a logged first account, wherein the voice message sending request carries voice messages and second account information;
acquiring keywords of the voice message;
and sending the voice message and the keywords of the voice message to a second terminal logging in to the second account based on the second account information, and displaying, by the second terminal, a voice message icon and the keywords of the voice message in a specified session interface, wherein the specified session interface is a session interface comprising the first account and the second account, and the voice message icon is used for triggering playing of the voice message.
2. The method of claim 1, wherein the obtaining the keyword of the voice message comprises:
converting the voice message into text;
extracting keywords of the text by performing word segmentation and semantic analysis on the text;
and determining the extracted keywords as the keywords of the voice message.
3. The method of claim 1 or 2, wherein before the obtaining the keyword of the voice message, further comprising:
acquiring keyword configuration information of the second account, wherein the keyword configuration information is used for indicating whether keyword extraction is allowed to be carried out on the voice message to be forwarded to the second account;
and when determining that the keyword configuration information of the second account indicates that keyword extraction is allowed to be performed on the voice message to be forwarded to the second account, executing a step of acquiring keywords of the voice message.
4. A voice message processing method is applied to a second terminal, and the method comprises the following steps:
receiving, based on a logged-in second account, a voice message sent by a server and keywords of the voice message, wherein the voice message is sent to the server by a first terminal logged in to a first account and instructs the server to forward the voice message to the second account, and the keywords of the voice message are obtained by the server based on the voice message;
displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface comprises the first account and the second account, and the voice message icon is used for triggering the voice message to be played.
5. The method of claim 4, wherein said displaying keywords of said voice message comprises:
and scrolling and displaying the keywords of the voice message in the designated area of the voice message icon.
6. The method of claim 4 or 5, wherein after displaying the keyword of the voice message, further comprising:
stopping displaying the keywords of the voice message when an instruction to cancel displaying the keywords is received based on the voice message icon; or,
and determining the display duration of the keywords of the voice message, and stopping displaying the keywords of the voice message when the display duration is greater than or equal to the preset duration.
7. A voice message processing method is applied to a second terminal, and the method comprises the following steps:
receiving, based on a logged-in second account, a voice message sent by a server, wherein the voice message is sent to the server by a first terminal logged in to a first account and instructs the server to forward the voice message to the second account;
acquiring keywords of the voice message;
displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface comprises the first account and the second account, and the voice message icon is used for triggering the voice message to be played.
8. The method of claim 7, wherein the obtaining the keyword of the voice message comprises:
converting the voice message into text;
extracting keywords of the text by performing word segmentation and semantic analysis on the text;
and determining the extracted keywords as the keywords of the voice message.
9. The method of claim 7, wherein said displaying keywords of said voice message comprises:
and scrolling and displaying the keywords of the voice message in the designated area of the voice message icon.
10. The method of any of claims 7-9, wherein prior to obtaining the keyword for the voice message, further comprising:
acquiring keyword configuration information of the second account, wherein the keyword configuration information is used for indicating whether keyword extraction is allowed to be carried out on the voice message to be forwarded to the second account;
and when determining that the keyword configuration information of the second account indicates that keyword extraction is allowed to be performed on the voice message to be forwarded to the second account, executing a step of acquiring keywords of the voice message.
11. The method of any of claims 7-9, wherein after displaying the keywords of the voice message, further comprising:
stopping displaying the keywords of the voice message when an instruction to cancel displaying the keywords is received based on the voice message icon; or,
and determining the display duration of the keywords of the voice message, and stopping displaying the keywords of the voice message when the display duration is greater than or equal to the preset duration.
12. A voice message processing method is applied to a second terminal, and the method comprises the following steps:
receiving a voice message sent by a first terminal;
acquiring keywords of the voice message;
displaying a voice message icon and keywords of the voice message in a specified session interface, wherein the specified session interface refers to a session interface where the first terminal and the second terminal are located, and the voice message icon is used for triggering and playing of the voice message.
13. A server, comprising a processor and a memory, wherein the memory has stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the voice message processing method according to any one of claims 1-3.
14. A terminal, characterized in that the terminal comprises a processor and a memory, in which at least one instruction, at least one program, set of codes, or set of instructions is stored, which is loaded and executed by the processor to implement the voice message processing method according to any of claims 4-6 or claims 7-11 or claim 12.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the voice message processing method of any one of claims 1-3 or claims 4-6 or claims 7-11 or claim 12.
CN201810088402.XA 2018-01-30 2018-01-30 Voice message processing method and device Pending CN110099360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810088402.XA CN110099360A (en) 2018-01-30 2018-01-30 Voice message processing method and device

Publications (1)

Publication Number Publication Date
CN110099360A true CN110099360A (en) 2019-08-06

Family

ID=67442566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810088402.XA Pending CN110099360A (en) 2018-01-30 2018-01-30 Voice message processing method and device

Country Status (1)

Country Link
CN (1) CN110099360A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102347913A (en) * 2011-07-08 2012-02-08 个信互动(北京)网络科技有限公司 Method for realizing voice and text content mixed message
CN103379460A (en) * 2012-04-20 2013-10-30 华为终端有限公司 Method and terminal for processing voice message
CN104714981A (en) * 2013-12-17 2015-06-17 腾讯科技(深圳)有限公司 Voice message search method, device and system
CN104125334A (en) * 2014-06-16 2014-10-29 联想(北京)有限公司 Information processing method and electronic equipment
CN105072015A (en) * 2015-06-30 2015-11-18 网易(杭州)网络有限公司 Voice information processing method, server, and terminal
CN107124352A (en) * 2017-05-26 2017-09-01 维沃移动通信有限公司 The processing method and mobile terminal of a kind of voice messaging

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111381800A (en) * 2020-03-02 2020-07-07 北京达佳互联信息技术有限公司 Voice message display method and device, electronic equipment and storage medium
CN111859900A (en) * 2020-07-14 2020-10-30 维沃移动通信有限公司 Message display method and device and electronic equipment
CN111859900B (en) * 2020-07-14 2023-09-08 维沃移动通信有限公司 Message display method and device and electronic equipment
CN115334051A (en) * 2022-07-18 2022-11-11 北京达佳互联信息技术有限公司 Information display method, device, terminal and storage medium
CN115334051B (en) * 2022-07-18 2023-10-24 北京达佳互联信息技术有限公司 Information display method, device, terminal and storage medium
CN115334030A (en) * 2022-08-08 2022-11-11 阿里健康科技(中国)有限公司 Voice message display method and device
CN115334030B (en) * 2022-08-08 2023-09-19 阿里健康科技(中国)有限公司 Voice message display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190806