CN109461462B - Audio sharing method and device - Google Patents
Audio sharing method and device
- Publication number
- CN109461462B (application number CN201811300693.0A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Abstract
The disclosure relates to an audio sharing method and device. The method comprises the following steps: when a first terminal, on which a first account of a social application is currently running, detects a first instruction for acquiring audio, it displays an audio source selection interface; when a first acquisition mode is triggered, it acquires a first audio file according to that mode and determines, from the first audio file, a second audio file to be shared. When a publishing instruction for the second audio file is detected, the first terminal sends the server a first publishing request carrying the second audio file, so that the server displays the second audio file in the attention dynamic list of each second account that follows the first account in the social application. With this audio sharing method, audio can be collected, edited, and shared without additionally invoking another social application; the operation flow is simple and smooth, and the efficiency with which users collect, edit, and share audio files is greatly improved.
Description
Technical Field
The present disclosure relates to the field of mobile internet technologies, and in particular, to an audio sharing method and apparatus.
Background
With the development of mobile terminal technology, audio editing technology has been widely applied to various mobile terminal products. However, in the related art, the audio editing application of the mobile terminal can only implement simple editing and local storage of an audio file.
Disclosure of Invention
In view of this, the present disclosure provides an audio sharing method and an audio sharing device, which address the problem that an audio file can be neither shared nor stored in the cloud.
According to an aspect of the present disclosure, an audio sharing method is provided, where the method is applied to a first terminal, and includes:
when a first instruction for acquiring audio of a first account is detected, displaying an audio source selection interface, wherein the audio source selection interface comprises audio acquisition modes, and the first account is a social account of a social application currently running on the first terminal;
when detecting that a first acquisition mode is triggered, acquiring a first audio file according to the first acquisition mode;
determining a second audio file to be shared according to the first audio file;
when a publishing instruction for the second audio file is detected, sending a first publishing request to a server, wherein the first publishing request carries the second audio file, so that the server displays the second audio file in an attention dynamic list of a second account;
wherein the second account is a social account that follows the first account in the social application.
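The client–server interaction in the steps above can be sketched in a few lines of Python. This is a deliberately simplified toy model, not the patent's implementation; all names (`Server`, `publish`, the dictionary-based feeds) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """Toy server: maps each account to its attention dynamic list,
    and each account to the accounts that follow it."""
    feeds: dict = field(default_factory=dict)      # account -> shared audio files
    followers: dict = field(default_factory=dict)  # account -> follower accounts

    def handle_publish(self, first_account, audio_file):
        # Display the shared file in the attention dynamic list of every
        # second account that follows the first account.
        for second_account in self.followers.get(first_account, []):
            self.feeds.setdefault(second_account, []).append(audio_file)

def publish(server, first_account, second_audio_file):
    """Send a first publishing request carrying the second audio file."""
    server.handle_publish(first_account, second_audio_file)

server = Server(followers={"alice": ["bob", "carol"]})
publish(server, "alice", b"clip-bytes")
print(server.feeds["bob"])  # -> [b'clip-bytes']
```

The point of the sketch is the routing: the publishing request names only the first account, and the server resolves which second accounts' lists must display the file.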
In one possible implementation, the first publishing request is used to control the server to generate first media information, and the first media information includes an audio play bar of the second audio file.
the method further comprises the following steps: receiving first media information returned by the server;
and displaying the first media information in an attention dynamic list of a first account in the social application of the first terminal.
In one possible implementation manner, causing the server to display the second audio file in an attention dynamic list of a second account includes:
the first publishing request is further used for controlling the server to generate second media information and sending the second media information to a second terminal operated by a second account so as to display the second media information in an attention dynamic list of the second account in the social application of the second terminal, wherein the second media information comprises an audio play bar of the second audio file.
In one possible implementation, the method further includes:
receiving third media information sent by the server, wherein the third media information is generated by the server in response to a second issuing request sent by a second account for a third audio file, and the third media information comprises an audio play bar of the third audio file;
and displaying the third media information in an attention dynamic list of a first account in the social application of the first terminal.
In one possible implementation, the first obtaining means includes selecting a local audio;
acquiring a first audio file according to the first acquisition mode, comprising:
displaying a local audio list, wherein the audio list comprises a plurality of audio materials;
and when the selection operation of selecting a certain audio material is detected, using the selected audio material as a first audio file.
In one possible implementation, the first obtaining means includes recording audio;
acquiring a first audio file according to the first acquisition mode, comprising:
starting the recording device;
when an acquisition instruction for starting to acquire audio is detected, acquiring the audio through a recording device;
and when an ending instruction for ending the audio acquisition is detected, the acquired audio is taken as a first audio file.
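The recording flow above (start the device, collect after the acquisition instruction, finalize on the ending instruction) can be modeled as a small state machine. The class and method names below are illustrative, not from the patent.

```python
class Recorder:
    """Toy model of the recording flow: start the device, collect audio
    after the acquisition instruction, and return the collected audio
    as the first audio file on the ending instruction."""

    def __init__(self):
        self.device_on = False
        self.recording = False
        self._samples = []

    def start_device(self):
        self.device_on = True

    def on_acquire_instruction(self):
        if not self.device_on:
            raise RuntimeError("recording device not started")
        self.recording = True

    def feed(self, chunk):
        # Chunks arriving before the acquisition instruction are ignored.
        if self.recording:
            self._samples.extend(chunk)

    def on_end_instruction(self):
        self.recording = False
        return list(self._samples)  # the first audio file

rec = Recorder()
rec.start_device()
rec.feed([1, 2])            # ignored: acquisition not started yet
rec.on_acquire_instruction()
rec.feed([3, 4])
print(rec.on_end_instruction())  # -> [3, 4]
```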
In a possible implementation manner, determining a second audio file to be shared according to the first audio file includes:
and when a second instruction for editing the first audio file is detected, editing the first audio file according to the second instruction to obtain a second audio file.
In one possible implementation, the second instruction includes an instruction for intercepting audio clips, wherein the instruction comprises one or more sub-time periods within the time period corresponding to the first audio file;
editing the first audio file according to the second instruction to obtain the second audio file includes:
intercepting the audio clips corresponding to the one or more sub-time periods according to the instruction, and synthesizing the intercepted clips into the second audio file.
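As a minimal sketch of this interception step, the following treats the audio as a flat list of PCM samples and cuts out each sub-time period before concatenating the pieces. The sample-list representation and function name are illustrative assumptions.

```python
def intercept_and_synthesize(samples, sample_rate, sub_periods):
    """Cut out the audio segment for each (start_s, end_s) sub-time
    period and concatenate the segments into one second audio file.
    `samples` is a flat list of PCM samples, a simplified stand-in
    for real audio data."""
    out = []
    for start_s, end_s in sub_periods:
        lo = int(start_s * sample_rate)
        hi = int(end_s * sample_rate)
        out.extend(samples[lo:hi])
    return out

# One sample per second keeps the arithmetic easy to follow.
first_audio = list(range(10))                  # a 10-second file
second_audio = intercept_and_synthesize(first_audio, 1, [(0, 2), (5, 7)])
print(second_audio)  # -> [0, 1, 5, 6]
```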
In one possible implementation, the second instruction includes: instructions for inserting an audio clip, the instructions for inserting an audio clip comprising: one or more time points in a time period corresponding to the first audio file, and a fourth audio file to be inserted corresponding to the one or more time points;
editing the first audio file according to the second instruction to obtain the second audio file includes:
inserting, at each time point and according to the instruction for inserting an audio clip, the fourth audio file corresponding to that time point, to obtain the second audio file.
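The insertion step can be sketched the same way, again over illustrative sample lists. Processing the time points from latest to earliest is a small design choice that keeps the earlier sample offsets valid while inserting.

```python
def insert_clips(samples, sample_rate, insertions):
    """Insert, at each time point, the fourth-audio clip mapped to it.
    `insertions` maps time points (seconds) to sample lists; processing
    in descending time order keeps earlier offsets valid."""
    out = list(samples)
    for t in sorted(insertions, reverse=True):
        idx = int(t * sample_rate)
        out[idx:idx] = insertions[t]
    return out

first_audio = [0, 1, 2, 3]                  # one sample per second
second_audio = insert_clips(first_audio, 1, {1: [7], 3: [8, 8]})
print(second_audio)  # -> [0, 7, 1, 2, 8, 8, 3]
```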
In one possible implementation, the method further includes: the first terminal obtains the fourth audio file by recording audio.
In a possible implementation manner, the audio play bar includes a simulated sound wave pattern with a preset color, and the simulated sound wave pattern is a pattern determined according to a corresponding audio file.
In a possible implementation manner, the simulated sound wave pattern is determined by:
wherein y represents the amplitude of the simulated sound wave pattern, w represents the width of the display interface of the attention dynamic list of the social application, r represents the screen occupation ratio of the first terminal, s represents the playing duration of the audio file, a represents the minimum playable duration of an audio file, b represents the maximum playable duration of an audio file, and k is a constant.
In one possible implementation, the method further includes:
if any audio play bar in the attention dynamic list is detected to be triggered, acquiring and playing, according to a preset playing sequence, the audio corresponding to the triggered audio play bar and to the subsequent audio play bars in the attention dynamic list.
In a possible implementation manner, the preset playing sequence includes any one of the following: the order of sharing time from earliest to latest, the order of sharing time from latest to earliest, or a random order.
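The three preset playing sequences, and the "play the triggered bar and everything after it" behavior, can be sketched as follows. The sequence names and tuple layout are illustrative assumptions.

```python
import random

def order_feed(items, sequence):
    """Order shared items (share_time, audio_id) according to a preset
    playing sequence: earliest first, latest first, or random."""
    if sequence == "earliest_first":
        return sorted(items, key=lambda item: item[0])
    if sequence == "latest_first":
        return sorted(items, key=lambda item: item[0], reverse=True)
    if sequence == "random":
        shuffled = list(items)
        random.shuffle(shuffled)
        return shuffled
    raise ValueError(f"unknown sequence: {sequence}")

def play_from_triggered(ordered_items, triggered_index):
    """Play the triggered entry and every subsequent entry of the
    attention dynamic list, in order."""
    return [audio_id for _, audio_id in ordered_items[triggered_index:]]

feed = [(3, "c"), (1, "a"), (2, "b")]       # (share time, audio id)
ordered = order_feed(feed, "earliest_first")
print(play_from_triggered(ordered, 1))      # -> ['b', 'c']
```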
In one possible implementation, the second instruction further includes a first multimedia file, wherein the first multimedia file comprises one or more of video, images, expressions, and text information;
editing the first audio file according to the second instruction to obtain the second audio file includes:
determining, according to the second instruction, a display mode of the first multimedia file relative to the first audio file, to obtain the second audio file;
the first media information further comprises: the first multimedia file.
In one possible implementation, the method further includes:
when a storage instruction for storing the first audio file and/or the second audio file is detected, storing the first audio file and/or the second audio file to a cloud or locally.
According to another aspect of the present disclosure, there is provided an audio sharing method, applied to a server, including:
receiving a first publishing request sent by a first account of a social application running on a first terminal, wherein the first publishing request carries a second audio file;
displaying the second audio file in an attention dynamic list of a second account, wherein the second account is a social account that follows the first account in the social application.
In one possible implementation manner, the method further includes:
generating first media information according to the first publishing request, wherein the first media information comprises an audio play bar of the second audio file;
and sending the first media information to the first terminal, so that the first media information is displayed in an attention dynamic list of the first account in the social application of the first terminal.
In one possible implementation, displaying the second audio file in an attention dynamic list of a second account includes:
generating second media information according to the first publishing request, wherein the second media information comprises an audio playing bar of the second audio file;
and sending the second media information to a second terminal, so that the second media information is displayed in an attention dynamic list of the second account in the social application of the second terminal.
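This server-side flow (store the file, then generate the first and second media information, each carrying an audio play bar) can be sketched as below. The structures, function names, and `/audio/<id>` URL scheme are hypothetical, chosen only to make the flow concrete.

```python
from dataclasses import dataclass

@dataclass
class MediaInfo:
    """Media information carrying an audio play bar, reduced here to the
    audio identifier the bar controls and the server storage link."""
    audio_id: str
    storage_link: str

def handle_first_publishing_request(storage, audio_id, audio_bytes):
    """Store the second audio file, then build first media information
    (returned to the first terminal) and second media information
    (sent to the second terminals of following accounts)."""
    link = f"/audio/{audio_id}"
    storage[link] = audio_bytes
    first_media = MediaInfo(audio_id, link)   # for the first account's list
    second_media = MediaInfo(audio_id, link)  # for the followers' lists
    return first_media, second_media

storage = {}
first, second = handle_first_publishing_request(storage, "a1", b"pcm")
print(first.storage_link)  # -> /audio/a1
```

Note that first and second media information may be identical or may differ (for example, only the first may carry the visibility list), which is why the sketch builds them separately.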
In a possible implementation manner, the audio play bar includes a simulated sound wave pattern with a preset color, and the simulated sound wave pattern is a pattern determined according to a corresponding audio file.
In a possible implementation manner, the simulated sound wave pattern is determined by:
wherein y represents the amplitude of the simulated sound wave pattern, w represents the width of the display interface of the attention dynamic list of the social application, r represents the screen occupation ratio of the first terminal, s represents the playing duration of the audio file, a represents the minimum playable duration of an audio file, b represents the maximum playable duration of an audio file, and k is a constant.
In one possible implementation, the second audio file further includes a first multimedia file and a display mode of the first multimedia file relative to the first audio file, wherein the first multimedia file comprises one or more of video, images, expressions, and text information;
the first media information further comprises: the first multimedia file.
According to another aspect of the present disclosure, an audio sharing apparatus is provided, where the apparatus is applied to a first terminal, and includes:
the display module is used for displaying an audio source selection interface when a first instruction for acquiring audio of a first account is detected, wherein the audio source selection interface comprises audio acquisition modes, and the first account is a social account of a social application currently running on the first terminal;
the acquisition module is used for acquiring a first audio file according to a first acquisition mode when the first acquisition mode is triggered;
the determining module is used for determining a second audio file to be shared according to the first audio file;
the sending module is used for sending a first publishing request to a server when a publishing instruction for the second audio file is detected, wherein the first publishing request carries the second audio file, so that the server displays the second audio file in an attention dynamic list of a second account;
wherein the second account is a social account that follows the first account in the social application.
According to another aspect of the present disclosure, there is provided an audio sharing apparatus applied to a server, including:
the receiving module is used for receiving a first publishing request sent by a first account of a social application running on a first terminal, wherein the first publishing request carries a second audio file;
the presentation module is used for displaying the second audio file in an attention dynamic list of a second account, wherein the second account is a social account that follows the first account in the social application.
According to another aspect of the present disclosure, there is provided an audio sharing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
According to another aspect of the present disclosure, there is provided an audio sharing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
With the method and device of the present disclosure, when the first terminal, on which a first account of a social application is currently running, detects a first instruction for acquiring audio, it displays an audio source selection interface; when a first acquisition mode is triggered, it acquires a first audio file according to that mode and determines, from the first audio file, a second audio file to be shared. When a publishing instruction for the second audio file is detected, the first terminal sends the server a first publishing request carrying the second audio file, so that the server displays the second audio file in the attention dynamic list of each second account that follows the first account in the social application.
Therefore, the audio sharing method can collect, edit, and share audio without additionally invoking another social application; the operation flow is simple and smooth, and the efficiency with which users collect, edit, and share audio files is greatly improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating an audio sharing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating an audio sharing method according to an example embodiment.
Fig. 3 is a flow chart illustrating an audio sharing method according to an example embodiment.
Fig. 4 is a flow chart illustrating an audio sharing method according to an example embodiment.
Fig. 5 is a flowchart illustrating step 101 of an audio sharing method according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating step 101 of an audio sharing method according to an exemplary embodiment.
Fig. 7 is a flow chart illustrating an audio sharing method according to an example embodiment.
Fig. 8 is a flow chart illustrating an audio sharing method according to an example embodiment.
Fig. 9 is a flow chart illustrating an audio sharing method according to an example embodiment.
Fig. 10 is a flowchart illustrating a step 801 in an audio sharing method according to an exemplary embodiment.
Fig. 11 is a flow chart illustrating an audio sharing method according to an application example.
Fig. 12 is a schematic diagram illustrating insertion of an audio file according to an application example.
FIG. 13 is a diagram illustrating an intercepted audio file according to an application example.
Fig. 14 is a diagram illustrating editing of a first multimedia file according to an exemplary embodiment.
FIG. 15 is a diagram illustrating a dynamic list of interest according to an application example.
Fig. 16 is a block diagram illustrating an audio sharing arrangement according to an example embodiment.
Fig. 17 is a block diagram illustrating an audio sharing arrangement according to an example embodiment.
Fig. 18 is a block diagram illustrating an audio sharing arrangement according to an example embodiment.
Fig. 19 is a block diagram illustrating an audio sharing arrangement according to an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 is a flow chart illustrating an audio sharing method according to an exemplary embodiment. The method may be applied to a first terminal, and the first terminal may include any one terminal device, such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a smart watch, which is not limited herein. As shown in fig. 1, the method may include:
In the present disclosure, generally speaking, the audio may be represented as sound waves generated by vibration of an object, and the audio file may be represented as an electronic file obtained by recording the sound waves through a digital recording device.
A social application may be understood as application software through which users interact socially over a network.
The account may be an account registered by the user through the social application, the user may interact with other accounts on the social application through a network by using the account, and the first account may be an account registered by a certain user on the social application.
An instruction may be understood as a command directing a computer device to operate; the user may send it to the device via a controller, and the device executes it to achieve the purpose of controlling the computer's operation. The first instruction may be an instruction issued by the user of the first account by triggering a control for acquiring audio on the first terminal. For example, the user opens the social application on the first terminal, a control for "acquiring audio" is displayed on the application interface, and the user may trigger the control by touching or clicking it; when the control is triggered, the first terminal detects the first instruction.
In one possible implementation manner, the audio source selection interface may include a plurality of mutually different controls corresponding to the first obtaining manner.
In an example, when the first terminal detects that a control corresponding to one of the first obtaining manners is triggered (for example, the triggering action may include a single click, a double click, or a slide, which is not limited herein), the first terminal may obtain the first audio file according to the triggered first obtaining manner. The first obtaining manner may include selecting a local audio, recording an audio, and the like.
Step 102: determining a second audio file to be shared according to the first audio file.
For example, the first terminal may edit the first audio file according to the first audio file and the editing instruction for the first audio file to form the second audio file.
Wherein the second account is a social account that follows the first account in the social application.
For example, when detecting that a control for publishing the second audio file is triggered, the terminal device may generate a publishing instruction for the second audio file, obtain the second audio file according to the instruction, generate a first publishing request carrying the second audio file, and send the request to the server, so that the server displays the second audio file in the attention dynamic list of the second account.
The attention dynamic list of an account may be used to present the audio files shared by the one or more other accounts that the account follows. In the list, these audio files may be displayed in order of sharing time, from earliest to latest or from latest to earliest.
The first publishing request may carry the second audio file and/or unique identifiers of one or more second accounts selected by the first account (for example, the IP addresses of the second accounts). On receiving the first publishing request, the server may store the second audio file and, according to the unique identifiers, send to the second terminals operated by the one or more second accounts a publishing instruction that includes a link to the storage address of the second audio file on the server, so that each second terminal presents that link in the attention dynamic list of its second account.
In one possible implementation manner, the link of the second audio file may be presented in the attention dynamic list on the second terminal, and the second terminal may acquire and play the second audio file from the server when it detects that the link is triggered, which saves storage space on the second terminal.
In a possible implementation manner, the server may also directly send the second audio file to a second terminal operated by the second account, and control the second terminal to store the second audio file, so that the second terminal displays the second audio file in the attention dynamic list of the second account in a storage address link of the second terminal.
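The two delivery strategies just described (a link to server storage that the second terminal fetches on demand, versus pushing the file itself for local storage) can be contrasted in a short sketch. All structures and names here are illustrative assumptions.

```python
def distribute(server_storage, follower_accounts, audio_id, audio_bytes,
               push_file=False):
    """Deliver a shared audio file to follower accounts either as a
    storage-address link (fetched on demand, saving terminal storage)
    or as the file itself (stored locally on the second terminal)."""
    link = f"/audio/{audio_id}"      # illustrative URL scheme
    server_storage[link] = audio_bytes
    return {
        account: (audio_bytes if push_file else link)
        for account in follower_accounts
    }

storage = {}
by_link = distribute(storage, ["bob"], "a1", b"pcm")
by_push = distribute(storage, ["bob"], "a1", b"pcm", push_file=True)
print(by_link["bob"], by_push["bob"])  # -> /audio/a1 b'pcm'
```

The link form trades an extra server round trip at play time for less storage on the second terminal; the push form does the opposite.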
With the method and device of the present disclosure, when the first terminal, on which a first account of a social application is currently running, detects a first instruction for acquiring audio, it displays an audio source selection interface; when a first acquisition mode is triggered, it acquires a first audio file according to that mode and determines, from the first audio file, a second audio file to be shared. When a publishing instruction for the second audio file is detected, the first terminal sends the server a first publishing request carrying the second audio file, so that the server displays the second audio file in the attention dynamic list of each second account that follows the first account in the social application. The audio sharing method can therefore collect, edit, and share audio without additionally invoking another social application; the operation flow is simple and smooth, and the efficiency with which users collect, edit, and share audio files is greatly improved.
In one possible implementation, the method of the present disclosure may further include: when a storage instruction for storing the first audio file and/or the second audio file is detected, storing the first audio file and/or the second audio file to a cloud or locally. For example, the first terminal may store the first audio file and/or the second audio file locally and/or store the first audio file and/or the second audio file to a server when detecting a storage instruction for storing the first audio file and/or the second audio file, where the server may be a cloud server.
In one possible implementation, step 102 may include the first terminal directly treating the first audio file as the second audio file.
Fig. 2 is a flow chart illustrating an audio sharing method according to an example embodiment. As shown in fig. 2, the difference from fig. 1 is that in step 103 the first publishing request is further used to control the server to generate first media information, where the first media information includes an audio play bar of the second audio file.
The method may further include step 200: receiving the first media information returned by the server.
As an example of this embodiment, step 103 may include: the first publishing request may be used to control the server to store the second audio file, and to generate, according to the second audio file carried in the first publishing request, the first media information including an audio play bar of the second audio file.
The audio play bar of the second audio file may be represented as a control for acquiring and playing the second audio file and showing the playing progress of the second audio file.
The first media information may further include a link of a storage address of the second audio file in the server.
Step 200 may include: the first terminal may receive the first media information including the audio play bar returned by the server, and step 201 may include the first terminal presenting the first media information including the audio play bar in an attention dynamic list of the first account.
When detecting that the audio play bar of the second audio file is triggered, the first terminal may acquire and play the second audio file from the server according to the link to its storage address in the server. In this way, the first terminal operated by the first account can display the audio file shared by the first account in real time.
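The fetch-on-trigger behaviour of the play bar can be sketched as below; the `fetch` callback stands in for whatever HTTP client the terminal actually uses, which is an assumption on our part.

```python
class AudioPlayBar:
    """Minimal sketch of the play-bar control: when triggered, fetch the
    audio from its server storage link, then report playback progress."""

    def __init__(self, storage_link, fetch):
        self.storage_link = storage_link   # link carried in the media information
        self._fetch = fetch                # e.g. an HTTP GET wrapper (assumed)
        self.position = 0.0                # current playback position, seconds
        self.duration = None               # unknown until the audio is fetched

    def on_trigger(self):
        """Acquire the audio file from the server via its storage link."""
        audio_bytes, duration = self._fetch(self.storage_link)
        self.duration = duration
        self.position = 0.0
        return audio_bytes                 # handed to the platform's player

    def progress(self):
        """Playing progress shown by the bar, in [0, 1]."""
        return 0.0 if not self.duration else self.position / self.duration
```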
Fig. 3 is a flow chart illustrating an audio sharing method according to an example embodiment. As shown in fig. 3, fig. 3 differs from fig. 1 in that "to cause the server to present the second audio file in the attention dynamic list of the second account" in step 103 may include:
As an example of this embodiment, the first publishing request may be used to control the server to store the second audio file, and to control the server to generate second media information containing an audio play bar of the second audio file according to the second audio file carried in the first publishing request.
The audio play bar of the second audio file may be represented as a control for triggering the playing of the second audio file and showing the progress of the playing of the second audio file. The second media information may be the same as or different from the first media information, and is not limited herein. For example, the first media information may include a list of accounts that can see the second audio file, and the second media information may not include the list of accounts.
The first publishing request may also include a unique identifier (for example, an IP address) of one or more second accounts selected by the first account (which may also be one or more second accounts that mutually follow the first account). The server may send the second media information to the second terminals operated by the one or more second accounts according to these unique identifiers, so that the second media information is shown in the attention dynamic list of each second account in the social application of the second terminal it operates.
The second terminal may acquire and play the second audio file from the server according to the link of the second audio file at the storage address of the server when detecting that the audio play bar of the second audio file is triggered. Therefore, the second terminal operated by the second account can display the audio file shared by the first account in real time.
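A server-side sketch of this distribution step, assuming an in-memory map from account identifiers to active terminal sessions; all names here are illustrative, not the real schema.

```python
def distribute_media(publish_request, sessions):
    """Deliver the second media information to each second account named in
    the publish request's unique-identifier list."""
    media = {
        "play_bar": publish_request["audio_id"],          # play-bar payload
        "link": f"/audio/{publish_request['audio_id']}",  # storage-address link
    }
    delivered = []
    for account_id in publish_request["recipient_ids"]:
        session = sessions.get(account_id)   # terminal currently run by account
        if session is not None:
            session.append(media)            # lands in its attention dynamic list
            delivered.append(account_id)
    return delivered
```

Accounts with no active session are simply skipped here; a real server would queue the media for later delivery instead.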
As an example of this embodiment, the audio play bar may include a simulated sound wave pattern having a preset color, where the simulated sound wave pattern is a pattern determined according to a corresponding audio file.
The preset color may be a color selected or set by the account publishing the audio file corresponding to the audio play bar. For example, the first terminal may present color options for the audio play bar and use the selected color as the color of the corresponding simulated sound wave pattern. In this way, the color further conveys the user's intention when sharing.
In a possible implementation manner, the simulated sound wave pattern may be determined in the manner shown in Equation 1, wherein y represents the amplitude of the simulated sound wave pattern, w represents the width of the attention dynamic list display interface of the social application, r represents the screen occupation ratio of the first terminal, s represents the playing duration of the audio file, a represents the shortest duration for which an audio file can be played, b represents the longest such duration, and k is a constant. The first terminal may display the body of the audio, sized according to Equation 1, as a waveform of vertical lines whose lengths vary with the sound volume. In this way, a simulated sound wave pattern of suitable size can be determined flexibly from the duration of the audio file and the screen size of the first terminal, effectively avoiding the situation where the audio play bar cannot be displayed completely because the screen is small or the audio file is too long.
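Since Equation 1 itself is not reproduced in the text, the scaling below is only an assumed stand-in built from the listed variables (w, r, s, a, b, k); the volume-driven vertical lines, by contrast, follow the description directly.

```python
def wave_amplitude(w, r, s, a, b, k=1.0):
    """Assumed stand-in for Equation 1: scale the play bar's amplitude y
    with the list width w, the screen ratio r, and the audio duration s
    clamped to the playable range [a, b]. The exact formula is not given
    in the text; this is an illustrative guess with the same variables."""
    s = min(max(s, a), b)                 # clamp duration to the playable range
    return k * w * r * (s - a) / (b - a)  # longer audio -> larger amplitude

def wave_columns(volumes, height):
    """Vertical-line lengths proportional to per-frame volume, as described."""
    peak = max(volumes) or 1
    return [height * v / peak for v in volumes]
```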
As an example of this embodiment, the method may further include: if any audio play bar in the attention dynamic list is detected to be triggered, acquiring and playing the audio corresponding to the triggered audio play bar and to the subsequent audio play bars in the attention dynamic list according to a preset playing sequence. In this way, the user can listen to the audio play bars in the attention dynamic list in the preset order without searching for and manually playing each audio file, which causes less interference with the user's daily activities.
For example, when detecting that a certain audio play bar is triggered, the first terminal may play the audio file corresponding to that play bar, and then automatically play the audio files corresponding to the other audio play bars in the attention dynamic list whose sharing time is earlier, in order of sharing time from earliest to latest. In this way, in the interactive program of a radio station, for example, the host's account can conveniently hear first the audio files shared by the accounts that interacted earliest.
For example, when detecting that a certain audio play bar is triggered, the first terminal may play the audio file corresponding to that play bar, and then automatically play the audio files corresponding to the other audio play bars in the attention dynamic list whose sharing time is earlier, in order of sharing time from latest to earliest.
For example, when detecting that a certain audio play bar is triggered, the first terminal may play the audio file corresponding to that play bar, and then automatically play the audio files corresponding to all the audio play bars, in the attention dynamic list, of the account that owns the triggered play bar, in order of sharing time from earliest to latest or from latest to earliest. In this way, the user can listen through the audio files shared by a particular account.
For example, when detecting that a certain audio play bar is triggered, the first terminal may play the audio file corresponding to that play bar, and then automatically play the audio files corresponding to the other audio play bars in the attention dynamic list in random order.
It should be noted that other playing sequences may also be preset according to the user's needs, and the disclosure is not limited herein.
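The preset playing sequences described above (earliest first, latest first, per-account, random) can be sketched as one queue-building function; the feed-item dictionary keys are illustrative.

```python
import random

def playback_queue(feed, triggered_index, order="earliest_first"):
    """Build the auto-play queue after one play bar in the attention
    dynamic list is triggered, following a preset playing sequence."""
    triggered = feed[triggered_index]
    rest = feed[:triggered_index] + feed[triggered_index + 1:]
    if order == "earliest_first":
        rest.sort(key=lambda item: item["shared_at"])
    elif order == "latest_first":
        rest.sort(key=lambda item: item["shared_at"], reverse=True)
    elif order == "same_account":          # only the triggered account's shares
        rest = [i for i in rest if i["account"] == triggered["account"]]
        rest.sort(key=lambda item: item["shared_at"])
    elif order == "random":
        random.shuffle(rest)
    return [triggered] + rest              # triggered bar always plays first
```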
Fig. 4 is a flow chart illustrating an audio sharing method according to an example embodiment. As shown in fig. 4, the method may further include:
As an example of this embodiment, the first terminal may receive third media information sent by the server, where the third media information may be generated by the server in response to a second publication request sent by the second account for the third audio file, and the third media information may include an audio play bar of the third audio file. The first terminal can display the third media information in the attention dynamic list of the first account in the social application, so that the first account and the second account which are concerned with each other can display the audio file shared by the other party in real time in the attention dynamic list.
Fig. 5 is a flowchart illustrating step 101 of an audio sharing method according to an exemplary embodiment. As shown in fig. 5, step 101 may include: the first acquisition mode includes selecting local audio.
For example, the first terminal may pre-store a plurality of audio materials, and the audio source selection interface corresponding to the first account may include a control for selecting local audio. The first terminal can display a local audio list comprising a plurality of audio materials when detecting that the control for selecting the local audio is triggered, and takes the selected audio material as a first audio file when detecting the selection operation of selecting a certain audio material. Therefore, the method is beneficial to the user to quickly share the audio files stored in the first terminal.
Fig. 6 is a flowchart illustrating step 101 of an audio sharing method according to an exemplary embodiment. As shown in fig. 6, step 101 may include: the first acquisition mode includes recording audio.
For example, the audio source selection interface corresponding to the first account may include a control for recording audio, and the first terminal may start the recording device when detecting that the control for recording audio is triggered. And when a collecting instruction for starting to collect the audio is detected, collecting the audio through the recording equipment. And when an ending instruction for ending the audio acquisition is detected, the acquired audio is used as a first audio file. Therefore, the method is beneficial to the user to quickly record and share the sound which needs to be acquired by the user.
Fig. 7 is a flow chart illustrating an audio sharing method according to an example embodiment. As shown in fig. 7, fig. 7 differs from fig. 1 in that step 102 may include:
In an example, after acquiring the first audio file, the first terminal may display an editing control for it; when detecting that the editing control is triggered and a second instruction for editing the first audio file is generated, the first terminal edits the first audio file according to the second instruction to obtain the second audio file.
As an example of this embodiment, the second instruction may include an instruction for intercepting audio segments, where the instruction specifies one or more sub-time periods within the time period corresponding to the first audio file. Step 700 may include: intercepting the audio clips corresponding to the one or more sub-time periods according to the instruction, and synthesizing the clips into the second audio file.
For example, when detecting that a first sub-time period and a second sub-time period are selected in a time period corresponding to a first audio file, the first terminal may intercept audio clips corresponding to the first sub-time period and the second sub-time period, and synthesize the audio clips corresponding to the first sub-time period and the second sub-time period to obtain a second audio file. Therefore, the first audio file can be flexibly intercepted and combined according to the needs of the user, and the audio shared by the user can be more suitable for the intention of the user.
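Treating the audio as a flat list of PCM samples (an assumption for illustration; real files would be decoded first), the intercept-and-synthesize step looks like:

```python
def intercept_and_merge(samples, sub_periods, rate):
    """Cut the selected sub-time periods (start, end) in seconds out of a
    PCM sample list and concatenate them into the second audio file."""
    merged = []
    for start, end in sub_periods:
        # slice each chosen sub-period out of the first audio file
        merged.extend(samples[int(start * rate):int(end * rate)])
    return merged
```

With a 1 Hz toy sample rate, cutting periods (1, 3) and (5, 7) from ten samples keeps samples 1, 2, 5, and 6, matching the first/second sub-time-period example above.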
As an example of this embodiment, the second instruction may include an instruction for inserting audio clips, which specifies one or more time points within the time period corresponding to the first audio file and, for each time point, a fourth audio file to be inserted. Step 700 may include: according to the instruction, inserting the corresponding fourth audio file at each time point to obtain the second audio file.
For example, when the first terminal detects that a first time point in a time period corresponding to the first audio file is selected and a fourth audio file corresponding to the first time point is selected, the fourth audio file may be inserted at the first time point to obtain the second audio file. Therefore, the fourth audio file selected by the user can be flexibly inserted into the first audio file according to the needs of the user, and the audio shared by the user can better meet the intention of the user.
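Under the same flat-PCM-sample assumption, inserting the fourth audio file at each selected time point can be sketched as follows; later points are processed first so that earlier offsets stay valid.

```python
def insert_clips(samples, insertions, rate):
    """Insert fourth-audio-file clips at the selected time points (seconds)
    of the first audio file, producing the second audio file."""
    result = list(samples)
    # insert from the latest time point backwards so indices don't shift
    for point, clip in sorted(insertions, key=lambda x: x[0], reverse=True):
        idx = int(point * rate)
        result[idx:idx] = clip
    return result
```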
In a possible implementation manner, the first terminal may also obtain the fourth audio file by recording audio. In this way, after recording a piece of audio, the user can insert newly recorded audio into it, meeting the user's needs more flexibly.
As an example of this embodiment, the second instruction may further include a first multimedia file, where the first multimedia file may include one or more of video, images, expressions, and text information. Step 700 may include: determining, according to the second instruction, the display mode of the first multimedia file relative to the first audio file to obtain the second audio file, where the first media information may further include the first multimedia file.
For example, the user may attach any one or more of video, images, expressions, and text information to the first audio file. An image, for instance, may be displayed below the first audio file, so that the content the user wishes to share can be presented in a richer way.
Fig. 8 is a flow chart illustrating an audio sharing method according to an example embodiment. The method may be applied to a server, and as shown in fig. 8, the method may include:
The description of step 800 and step 801 may refer to the description of step 103 above.
Fig. 9 is a flow chart illustrating an audio sharing method according to an example embodiment. As shown in fig. 9, fig. 9 and fig. 8 differ in that the method may further include:
The description of steps 900 and 901 can refer to the description of steps 200 and 201 above.
Fig. 10 is a flowchart illustrating a step 801 in an audio sharing method according to an exemplary embodiment. As shown in fig. 10, step 801 may include:
step 1000, generating second media information according to the first release request, where the second media information includes an audio play bar of the second audio file.
The description of step 1000 and step 1001 may refer to the description of step 300 above.
Fig. 11 is a flow chart illustrating an audio sharing method according to an application example. As shown in fig. 11, this application example may include the following steps.
In step 1100, a user may open a social application installed on a first terminal and log into an account registered in the social application.
In step 1101, the user may obtain audio, for example, the user may select to use an audio material locally stored in the first terminal as the first audio file, or may select to capture a sound of interest to the user through a sound recording device of the first terminal as the first audio file.
In step 1102, the first terminal may display a selection interface whether to edit the first audio file, and if the user selects to edit the first audio file, step 1103 is entered. If the user chooses not to edit the first audio file, step 1104 is entered.
In step 1103, the first terminal may present a selection list of editing manners for the first audio file. Fig. 12 is a schematic diagram of inserting an audio file according to an application example. When detecting that the editing manner of inserting an audio file is selected, the first terminal may present the insertion interface shown in fig. 12. In this interface, the user may freely drag the time-point selection control to choose the point, within the time period corresponding to the first audio file, at which an audio file is to be inserted. After selecting the time point, the user may choose a locally stored audio material as the fourth audio file to be inserted, or choose a fourth audio file recorded by the recording device of the first terminal. After determining the fourth audio file, the user triggers the "insert" control shown in fig. 12 to insert it into the first audio file. When the user confirms that no more audio files are to be inserted, the "completion" control can be triggered to form the second audio file to be published. In addition, the user may trigger the "listen on trial" control to check whether the editing of the audio file is appropriate.
Fig. 13 is a schematic diagram of intercepting an audio file according to an application example. When detecting that the editing mode of intercepting an audio file is selected, the first terminal may present the interception interface shown in fig. 13. In this interface, the user may freely drag the time-period selection control to choose the sub-time period, within the time period corresponding to the first audio file, that needs to be intercepted. After selecting the sub-time period, the user may trigger the "intercepting" control shown in fig. 13 to intercept the corresponding audio clip. When the user confirms that no more clips are to be intercepted, the "completion" control can be triggered to form the second audio file to be published. In addition, the user may trigger the "listen on trial" control to check whether the editing of the audio file is appropriate.
During editing, the user can intercept and/or insert any part of the recorded audio; the length and position of the intercepted time period and the insertion time point can be determined freely, and multiple interception and/or insertion operations can be performed, so that missing important sound information can be supplemented at any time. In addition, as shown in fig. 14, the interfaces for inserting and intercepting audio files also provide undo and redo functions to ensure maximum fault tolerance. The audio file can be saved after editing, and step 1104 is entered.
In step 1104, the first terminal may present a selection interface asking whether to temporarily store the second audio file. If it is detected that temporary storage is selected, step 1105 is performed: the second audio file is stored locally in the first terminal or in the cloud server, and then step 1106 is entered. Storing the second audio file locally or in the cloud lets the user keep a sound at any time and process it again later; if the user stores it in the cloud, the stored file can also be obtained on any mobile device able to communicate with the cloud server. If it is detected that temporary storage is not selected, step 1106 is entered directly.
In step 1106, the first terminal may present a selection interface whether to publish the second audio file directly, and when it is detected that the second audio file is not published directly, step 1107 is entered.
In step 1107, the user may edit the second audio file by adding a first multimedia file.
fig. 14 is a diagram illustrating editing of a first multimedia file according to an exemplary embodiment. As shown in fig. 14, the user may add any one or more of the following multimedia files for the second audio file:
Topic type: the user may select the topic type of the second audio file. For example, the type may include any one of: a sound diary, for recording the user's mood and ideas in audio form; a memo, for content quickly recorded as audio to prevent forgetting; a voice message, for sharing audio content with friends in the dynamic feed; a submission to an anchor, for delivering the user's voice to an anchor account the user follows, so that the user's voice and talent can be shown in a well-known host's live broadcast room; and a work, for publishing the user's voice as a work displayed on the social application platform.
Color: the user may select the color of the audio play bar of the second audio file, expressing the user's current mood through different colors. For example, the colors may convey moods such as light and pleasant, fiery and passionate, calm like the sea, lively, deep, or downcast gray.
Location: the user may select the location at which the second audio file is published, for example the user's current geographical position, which may be presented in a city/building-name format.
Visibility: the user may select which accounts can listen to the second audio file, for example "who can listen" (select the friends to whom it is visible) and "remind who to listen" (select the friends to be reminded to view it).
Tag: the second audio file is tagged for data filtering; for example, the tag format may be "# text #".
Description: the second audio file is described, so that friends can learn more details of the shared audio.
Pictures: the user can upload multiple pictures through the first terminal and display them in a nine-grid layout, further presenting the content the user wishes to express.
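The optional attachments above can be bundled into one publish payload roughly as follows; every field name here is illustrative, not the real schema of the application.

```python
import re

def build_publication(audio_id, topic=None, color=None, location=None,
                      visible_to=None, tags="", description="", pictures=()):
    """Bundle the second audio file's optional multimedia attachments
    (topic, color, location, visibility, tags, description, pictures)
    into a single publish payload."""
    return {
        "audio_id": audio_id,
        "topic": topic,                          # e.g. sound diary, memo, work
        "play_bar_color": color,                 # mood color of the wave pattern
        "location": location,                    # "city/building" format
        "visible_to": list(visible_to or []),    # accounts allowed to listen
        "tags": re.findall(r"#([^#]+)#", tags),  # parse the "#text#" tag format
        "description": description,
        "pictures": list(pictures)[:9],          # a nine-grid holds 9 images
    }
```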
After confirming the addition of the first multimedia file, the user can choose to publish the second audio file. The first terminal may send the second audio file carrying the first multimedia file to the server, and the server may forward it to the second terminals operated by the one or more second accounts that follow the first account, so that it is displayed in the attention dynamic lists of those second terminals.
FIG. 15 is a diagram illustrating an attention dynamic list according to an application example. As shown in fig. 15, after the first account publishes the second audio file carrying the first multimedia file, it may be displayed in the attention dynamic lists of both the first account and the second account. Each account may perform any one or more of the following operations on it:
Listen: the user can click the audio play bar to listen to the audio shared by friends. In addition, after any audio play bar in the attention dynamic list is detected to be triggered, the audio files corresponding to some or all of the play bars in the list can be played in a preset order, such as earliest first, latest first, or random. Because the audio files only need to be listened to, the user need not look at the screen or manually select them during automatic playing, causing less interference with daily activities.
Like: used to express approval of the shared content. For example, when the user clicks the like control, its icon may change from gray to red, and the account name of the liker is displayed in the like box.
Comment: used to express the user's thoughts on the shared content. For example, clicking pops up a comment input box, and after the comment is sent successfully, "commenter's nickname: comment text" may be displayed below the shared content.
Share: the user can click interesting shared content and then share it to other social applications.
Collect: after the collection control is clicked, the selected voice message is stored in cloud storage, and the user can find it under "personal center - my collection". These operations facilitate emotional communication among different accounts.
The audio sharing method is a complete technical solution integrating audio acquisition, audio editing, local audio storage, cloud audio storage, and audio sharing, where the sharing blends emotion, mood, and interaction among friends and strangers, making the audio more attractive.
Fig. 16 is a block diagram illustrating an audio sharing arrangement according to an example embodiment. The apparatus is applied to the first terminal, and as shown in fig. 16, the apparatus may include:
the first presentation module 41 is configured to, when a first instruction for acquiring an audio of a first account is detected, present an audio source selection interface, where the audio source selection interface includes an audio acquisition manner, and the first account is a social account of a social application currently running on a first terminal.
The obtaining module 42 is configured to obtain the first audio file according to the first obtaining manner when it is detected that the first obtaining manner is triggered.
A determining module 43, configured to determine, according to the first audio file, a second audio file to be shared.
The sending module 44 is configured to send a first publishing request to a server when a publishing instruction for the second audio file is detected, where the first publishing request carries the second audio file, so that the server displays the second audio file in an attention dynamic list of a second account.
Wherein the second account is a social account that focuses on the first account in the social application.
Fig. 17 is a block diagram illustrating an audio sharing arrangement according to an example embodiment. The apparatus is applied to a server, and as shown in fig. 17, the apparatus may include:
The receiving module 51 is configured to receive a first publishing request sent by a first account running in a social application of a first terminal, where the first publishing request carries a second audio file.
A presentation module 52, configured to present the second audio file in a focus dynamic list of a second account, where the second account is a social account focused on the first account in the social application.
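A toy sketch of the server side of these two modules, assuming in-memory follower and feed maps (all structures are illustrative, not the actual server design):

```python
class SharingServer:
    """Sketch of the server modules: receive a publish request from the
    first account, then show the audio in each follower's attention
    dynamic list."""

    def __init__(self):
        self.followers = {}   # account -> set of accounts following it
        self.feeds = {}       # account -> its attention dynamic list

    def receive_publish(self, first_account, second_audio_file):
        """Receiving module: accept the first publishing request, then
        present the audio to every second account following the first."""
        entry = {"from": first_account, "audio": second_audio_file}
        for second_account in self.followers.get(first_account, set()):
            self.feeds.setdefault(second_account, []).append(entry)
        return entry
```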
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 18 is a block diagram illustrating an audio sharing arrangement according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 18, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
Fig. 19 is a block diagram illustrating an audio sharing apparatus according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to Fig. 19, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described methods.
The apparatus 1900 may also include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (24)
1. An audio sharing method applied to a first terminal, characterized by comprising:
when a first instruction for acquiring audio of a first account is detected, displaying an audio source selection interface, wherein the audio source selection interface comprises an audio acquisition mode, and the first account is a social account of a social application currently running on the first terminal;
when detecting that a first acquisition mode is triggered, acquiring a first audio file according to the first acquisition mode;
determining a second audio file to be shared according to the first audio file, wherein the second audio file is formed by editing the first audio file, and the editing mode comprises insertion;
when a publishing instruction for the second audio file is detected, sending a first publishing request to a server, wherein the first publishing request carries the second audio file, so that the server displays the second audio file in an attention dynamic list of a second account;
wherein the second account is a social account that follows the first account in the social application;
the first publishing request is used for controlling the server to generate first media information, the first media information comprises an audio play bar of the second audio file, the audio play bar comprises a simulated sound wave graph with a preset color, and the simulated sound wave graph is a graph determined according to the corresponding audio file.
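As a non-authoritative illustration of the client-side flow recited in claim 1, the sketch below packages the edited (second) audio file into a publishing request addressed to the server. The `PublishRequest` type, its field names, and the mode strings are hypothetical conveniences for illustration, not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class PublishRequest:
    """Hypothetical publishing request carrying the edited audio file."""
    account_id: str        # the first account (the publisher)
    audio_file: bytes      # the second audio file, edited from the first
    acquisition_mode: str  # how the first audio file was obtained

def build_publish_request(account_id: str, audio: bytes, mode: str) -> PublishRequest:
    # The terminal packages the edited audio so the server can render an
    # audio play bar in followers' attention dynamic lists.
    if mode not in ("local", "record"):
        raise ValueError("unsupported acquisition mode")
    return PublishRequest(account_id=account_id, audio_file=audio, acquisition_mode=mode)
```

A server receiving such a request would then generate the first media information described in the claim.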
2. The method of claim 1, wherein the first publishing request is used to control the server to generate first media information, the first media information comprising an audio play bar of the second audio file,
the method further comprises the following steps: receiving first media information returned by the server;
and displaying the first media information in an attention dynamic list of a first account in the social application of the first terminal.
3. The method of claim 2, wherein causing the server to present the second audio file in the attention dynamic list of the second account comprises:
the first publishing request is further used for controlling the server to generate second media information and sending the second media information to a second terminal operated by a second account so as to display the second media information in an attention dynamic list of the second account in the social application of the second terminal, wherein the second media information comprises an audio play bar of the second audio file.
4. The method of claim 1, further comprising:
receiving third media information sent by the server, wherein the third media information is generated by the server in response to a second publishing request sent by a second account for a third audio file, and the third media information comprises an audio play bar of the third audio file;
and displaying the third media information in an attention dynamic list of a first account in the social application of the first terminal.
5. The method of claim 1, wherein the first acquisition mode comprises selecting local audio;
acquiring a first audio file according to the first acquisition mode, comprising:
displaying a local audio list, wherein the audio list comprises a plurality of audio materials;
and when the selection operation of selecting a certain audio material is detected, using the selected audio material as a first audio file.
6. The method of claim 1, wherein the first acquisition mode comprises recording audio;
acquiring a first audio file according to the first acquisition mode, comprising:
starting a recording device;
when an acquisition instruction for starting to acquire audio is detected, acquiring the audio through a recording device;
and when an ending instruction for ending the audio acquisition is detected, the acquired audio is taken as a first audio file.
7. The method of claim 2, wherein determining the second audio file to be shared from the first audio file comprises:
and when a second instruction for editing the first audio file is detected, editing the first audio file according to the second instruction to obtain a second audio file.
8. The method of claim 7, wherein the second instructions comprise: an instruction for intercepting audio clips, wherein the instruction for intercepting audio clips comprises one or more sub-time periods within the time period corresponding to the first audio file;
editing the first audio file according to the second instruction to obtain a second audio file, wherein the method comprises the following steps:
and intercepting the audio clips corresponding to the one or more sub-time periods according to the instruction for intercepting the audio clips, and synthesizing each audio clip into a second audio file.
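The interception-and-synthesis step of claim 8 can be sketched as follows, under the simplifying assumption that an audio file is a flat list of PCM samples; the function and parameter names are illustrative only, and real terminals would operate on encoded audio:

```python
def intercept_and_synthesize(samples, sample_rate, sub_periods):
    """Cut the clip for each (start_s, end_s) sub-time period and join the
    clips into a single new audio file, in the order the periods are given."""
    clips = []
    for start_s, end_s in sub_periods:
        lo = int(start_s * sample_rate)  # first sample of the sub-period
        hi = int(end_s * sample_rate)    # one past the last sample
        clips.append(samples[lo:hi])
    # Synthesize the second audio file by concatenating the clips.
    return [s for clip in clips for s in clip]
```

For example, with a 1 Hz sample rate, extracting the sub-periods (0 s, 2 s) and (5 s, 7 s) from ten samples keeps samples 0–1 and 5–6.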
9. The method of claim 7, wherein the second instructions comprise: instructions for inserting an audio clip, the instructions for inserting an audio clip comprising: one or more time points in a time period corresponding to the first audio file, and a fourth audio file to be inserted corresponding to the one or more time points;
editing the first audio file according to the second instruction to obtain a second audio file, wherein the method comprises the following steps:
and according to the instruction for inserting the audio clip, inserting the fourth audio file corresponding to the time point at each time point to obtain a second audio file.
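Likewise, the insertion step of claim 9 can be sketched under the same flat-sample assumption. Applying the insertion points from last to first keeps each point valid against the original timeline; the names below are hypothetical:

```python
def insert_clips(samples, sample_rate, insertions):
    """Insert a clip at each time point. `insertions` maps a time point in
    seconds (on the first audio file's timeline) to the clip to insert."""
    out = list(samples)
    # Process later time points first so earlier insertions do not shift them.
    for t, clip in sorted(insertions.items(), reverse=True):
        idx = int(t * sample_rate)
        out[idx:idx] = clip  # splice the fourth-audio-file clip in place
    return out
```

Inserting one clip at 3 s and another at 1 s therefore yields the same result regardless of the order the points were specified in.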
10. The method of claim 9, further comprising: obtaining, by the first terminal, the fourth audio file by audio recording.
11. The method of claim 1, wherein the simulated sound wave graph is determined by:
wherein y represents the amplitude of the simulated sound wave graph, w represents the width of the attention dynamic list display interface of the social application, r represents the screen occupation ratio of the first terminal, s represents the playing time of the audio file, a represents the shortest time for which the audio file can be played, b represents the longest time for which the audio file can be played, and k is a constant.
12. The method according to claim 2 or 4, characterized in that the method further comprises:
and if any audio play bar in the attention dynamic list is detected to be triggered, acquiring and playing, according to a preset playing sequence, the audio corresponding to the triggered audio play bar and to subsequent audio play bars in the attention dynamic list.
13. The method according to claim 12, wherein the predetermined playing sequence comprises any one of the following: according to the sequence of the shared time from first to last, the sequence of the shared time from last to first and the random sequence.
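A minimal sketch of the three preset playing sequences of claim 13, assuming each play bar carries the time at which it was shared; the mode strings and tuple layout are assumptions of this example:

```python
import random

def play_order(play_bars, mode, seed=None):
    """Order (bar_id, shared_at) pairs for continuous playback."""
    if mode == "oldest_first":   # shared time from first to last
        return sorted(play_bars, key=lambda b: b[1])
    if mode == "newest_first":   # shared time from last to first
        return sorted(play_bars, key=lambda b: b[1], reverse=True)
    if mode == "random":         # random sequence (seedable for tests)
        bars = list(play_bars)
        random.Random(seed).shuffle(bars)
        return bars
    raise ValueError("unknown playing sequence")
```

The terminal would then fetch and play the audio for the triggered bar and each subsequent bar in the returned order.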
14. The method of claim 7 or 10, wherein the second instructions further comprise: an instruction for adding a first multimedia file, wherein the first multimedia file comprises one or more of video, images, emoticons, and text information;
editing the first audio file according to the second instruction to obtain a second audio file, wherein the method comprises the following steps:
determining a display mode of the first multimedia file relative to the first audio file according to the second instruction to obtain a second audio file;
the first media information further comprises: the first multimedia file.
15. The method of any one of claims 1 to 10, 11, 13, further comprising:
when a storage instruction for storing the first audio file and/or the second audio file is detected, storing the first audio file and/or the second audio file to a cloud or locally.
16. An audio sharing method applied to a server, characterized by comprising:
receiving a first publishing request sent by a first account running in a social application of a first terminal, wherein the first publishing request carries a second audio file edited from a first audio file, and the first audio file is an audio file obtained, in a first acquisition mode, by the first account running on the first terminal through a displayed audio source selection interface;
displaying the second audio file in an attention dynamic list of a second account, wherein the second account is a social account that follows the first account in the social application;
generating first media information according to the first publishing request, wherein the first media information comprises an audio play bar of the second audio file and a first multimedia file, the audio play bar comprises a simulated sound wave graph with a preset color, the simulated sound wave graph is a graph determined according to the corresponding audio file, the preset color comprises a color selected or set by the account publishing the audio file corresponding to the audio play bar, and the first multimedia file comprises at least one of a voice diary, a memo, a submission file for an anchor, and an audio work.
17. The method of claim 16, further comprising:
generating first media information according to the first publishing request, wherein the first media information comprises an audio play bar of the second audio file;
and sending the first media information to the first terminal, so that the first media information is displayed in an attention dynamic list of the first account in the social application of the first terminal.
18. The method of claim 16, wherein presenting the second audio file in the attention dynamic list of the second account comprises:
generating second media information according to the first publishing request, wherein the second media information comprises an audio play bar of the second audio file;
and sending the second media information to a second terminal, so that the second media information is displayed in the attention dynamic list of the second account in the social application of the second terminal.
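The server-side dispatch described in claims 17 and 18 can be sketched as a simple fan-out: one copy of the media information goes back to the publishing terminal and one copy to each follower's terminal. Everything here, including the dictionary layout, is an illustrative assumption:

```python
def build_deliveries(publisher_terminal, follower_terminals, audio_file):
    """Map each destination terminal to the media information it should show.

    The publisher's terminal receives the first media information (claim 17);
    each follower terminal receives the second media information (claim 18).
    Both wrap the same published audio file as an audio play bar."""
    media = {"audio_play_bar": audio_file}
    deliveries = {publisher_terminal: media}
    for terminal in follower_terminals:
        deliveries[terminal] = media
    return deliveries
```

An actual server would push each entry over its own connection to the corresponding terminal.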
19. The method of claim 16, wherein the simulated sound wave graph is determined by:
wherein y represents the amplitude of the simulated sound wave graph, w represents the width of the attention dynamic list display interface of the social application, r represents the screen occupation ratio of the first terminal, s represents the playing time of the audio file, a represents the shortest time for which the audio file can be played, b represents the longest time for which the audio file can be played, and k is a constant.
20. The method of claim 17, wherein the second audio file further comprises: a first multimedia file and a display mode of the first multimedia file relative to the first audio file, wherein the first multimedia file comprises one or more of video, images, emoticons, and text information.
21. An audio sharing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
performing the method of any one of claims 1 to 15.
22. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 15.
23. An audio sharing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
performing the method of any one of claims 16 to 20.
24. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 16 to 20.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811300693.0A CN109461462B (en) | 2018-11-02 | 2018-11-02 | Audio sharing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109461462A CN109461462A (en) | 2019-03-12 |
CN109461462B true CN109461462B (en) | 2021-12-17 |
Family
ID=65609237
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811300693.0A Active CN109461462B (en) | 2018-11-02 | 2018-11-02 | Audio sharing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109461462B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111343060B (en) | 2017-05-16 | 2022-02-11 | 苹果公司 | Method and interface for home media control |
US10904029B2 (en) | 2019-05-31 | 2021-01-26 | Apple Inc. | User interfaces for managing controllable external devices |
US20200379716A1 (en) * | 2019-05-31 | 2020-12-03 | Apple Inc. | Audio media user interface |
CN110177155A (en) * | 2019-06-24 | 2019-08-27 | 广州酷狗计算机科技有限公司 | Playback method, the apparatus and system of audio file |
CN111144076B (en) * | 2019-12-13 | 2023-06-02 | 汉海信息技术(上海)有限公司 | Social information publishing method and device |
CN110996145A (en) * | 2019-12-18 | 2020-04-10 | 北京达佳互联信息技术有限公司 | Multimedia resource playing method, system, terminal equipment and server |
CN111583973B (en) * | 2020-05-15 | 2022-02-18 | Oppo广东移动通信有限公司 | Music sharing method and device and computer readable storage medium |
US11392291B2 (en) | 2020-09-25 | 2022-07-19 | Apple Inc. | Methods and interfaces for media control with dynamic feedback |
CN118590700A (en) * | 2021-02-25 | 2024-09-03 | 腾讯科技(深圳)有限公司 | Audio processing method, device, terminal and storage medium |
CN114327180A (en) * | 2021-12-13 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Audio content display method and device, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102325173A (en) * | 2011-08-30 | 2012-01-18 | 重庆抛物线信息技术有限责任公司 | Mixed audio and video sharing method and system |
CN103780709A (en) * | 2014-02-26 | 2014-05-07 | 北京华夏翰科科技有限公司 | Method and system for rapidly editing and releasing messages of WeChat or EaseChat |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9245017B2 (en) * | 2009-04-06 | 2016-01-26 | Caption Colorado L.L.C. | Metatagging of captions |
CN106470147B (en) * | 2015-08-18 | 2020-09-08 | 腾讯科技(深圳)有限公司 | Video sharing method and device and video playing method and device |
US10747947B2 (en) * | 2016-02-25 | 2020-08-18 | Nxgn Management, Llc | Electronic health record compatible distributed dictation transcription system |
CN106027785A (en) * | 2016-05-26 | 2016-10-12 | 深圳市金立通信设备有限公司 | Voice processing method and terminal |
US20180301170A1 (en) * | 2018-01-27 | 2018-10-18 | Iman Rezanezhad Gatabi | Computer-Implemented Methods to Share Audios and Videos |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||