CN111142665A - Stereo processing method and system of earphone assembly and earphone assembly - Google Patents
Stereo processing method and system of earphone assembly and earphone assembly
- Publication number
- CN111142665A (application number CN201911377379.7A)
- Authority
- CN
- China
- Prior art keywords
- earphone
- sound source
- headphone
- filter coefficients
- filter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 26
- 230000033001 locomotion Effects 0.000 claims abstract description 81
- 230000005236 sound signal Effects 0.000 claims abstract description 72
- 238000004891 communication Methods 0.000 claims description 48
- 230000006870 function Effects 0.000 claims description 38
- 230000005540 biological transmission Effects 0.000 claims description 33
- 230000009466 transformation Effects 0.000 claims description 23
- 238000012545 processing Methods 0.000 claims description 19
- 238000000034 method Methods 0.000 claims description 12
- 230000004044 response Effects 0.000 claims description 12
- 230000001360 synchronised effect Effects 0.000 claims description 11
- 230000002194 synthesizing effect Effects 0.000 claims description 10
- 230000008569 process Effects 0.000 claims description 8
- 238000012546 transfer Methods 0.000 claims description 8
- 238000001514 detection method Methods 0.000 claims description 7
- 230000001133 acceleration Effects 0.000 claims description 6
- 238000006073 displacement reaction Methods 0.000 claims description 4
- 238000001914 filtration Methods 0.000 claims description 4
- 238000005516 engineering process Methods 0.000 claims 1
- 230000008859 change Effects 0.000 abstract description 8
- 210000003128 head Anatomy 0.000 description 22
- 238000010586 diagram Methods 0.000 description 15
- 230000000694 effects Effects 0.000 description 11
- 238000012937 correction Methods 0.000 description 7
- 238000001228 spectrum Methods 0.000 description 3
- 230000007704 transition Effects 0.000 description 3
- 230000004807 localization Effects 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 230000010363 phase shift Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 238000005311 autocorrelation function Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 210000003454 tympanic membrane Anatomy 0.000 description 1
- 238000004804 winding Methods 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/033—Headphones for stereophonic communication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R29/00—Monitoring arrangements; Testing arrangements
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Stereophonic System (AREA)
Abstract
The disclosure relates to a stereo processing method and system for an earphone assembly, and to the earphone assembly itself. The earphone assembly comprises a first earphone and a second earphone, and the stereo processing method comprises the following steps: detecting a motion parameter of the first earphone; transmitting the detected motion parameter of the first earphone to the second earphone; and adjusting the audio signals to be played by the first and second earphones, respectively, based on the motion parameter, such that the adjusted audio signals simulate the sound changes caused by motion of the respective earphone relative to the sound source position. In this way, each earphone in the assembly can promptly adjust the stereo sound it produces to simulate the change in the stereo image caused by the change in the earphone's position relative to the sound source.
Description
Technical Field
The present disclosure relates to a headset and a sound effect processing method thereof, and more particularly, to a stereo processing method and system of a headset assembly, and a headset assembly.
Background
As living standards rise, earphones have become an everyday necessity. Traditional wired earphones connect to smart devices (such as smartphones, laptops and tablets) through a cable, which restricts the wearer's movements and is especially inconvenient during sports. The tangling and tugging of the earphone cord, as well as the stethoscope effect, further degrade the user experience. Ordinary Bluetooth headsets remove the wire between the headset and the smart device, but a wire still connects the left and right ears. True wireless stereo earphones emerged to eliminate this last connection.
However, in a game scenario such as a virtual reality game, when the wearer moves while wearing the earphones, for example in various directions, the position of the earphones relative to the sound source in the game scene changes. Current true wireless stereo earphones nevertheless keep the preset stereo image while the earphones move. For example, if the stereo effect of a gunshot is set to come from directly in front of the wearer and the wearer then turns around 180 degrees, the gunshot source is now behind the wearer, yet the stereo image heard still places it in front; the wearer perceives the gunshot source as suddenly jumping in position, which seriously undermines the sense of realism.
Disclosure of Invention
The present disclosure is provided to solve the above-mentioned problems occurring in the prior art.
There is a need for a stereo processing method and system for an earphone assembly, and for an earphone assembly, that can promptly adjust the stereo sound produced by each earphone to simulate the stereo change caused by a change in the earphone's position relative to the sound source, so that the wearer perceives the position and direction of the sound source more realistically while moving with the earphones, improving the experience in scenarios such as games.
According to a first aspect of the present disclosure, there is provided a stereo processing method of a headphone assembly, the headphone assembly including a first headphone and a second headphone; the stereo processing method comprises the following steps: detecting a motion parameter of the first earphone; transmitting the detected motion parameters of the first earphone to the second earphone; and adjusting the audio signals to be played by the first and second earphones, respectively, based on the motion parameters, such that the adjusted audio signals simulate sound changes caused by motion of the respective earphones relative to the sound source position.
According to a second aspect of the present disclosure, there is provided a stereo processing system of a headphone assembly, the headphone assembly including a first headphone and a second headphone; the stereo processing system includes: a detection module configured to detect a motion parameter of the first earpiece; a transmission module configured to transmit the detected motion parameters of the first headset to the second headset; and an adjustment module configured to adjust the audio signals to be played by the first and second earphones, respectively, based on the motion parameters, such that the adjusted audio signals simulate sound changes caused by motion of the respective earphones relative to the sound source position.
According to a third aspect of the present disclosure, there is provided a headphone assembly comprising at least a first earphone and a second earphone, wherein the first earphone detects its motion parameters using a detection device arranged on it and transmits the detected motion parameters to the second earphone; the first earphone and the second earphone each adjust the audio signal to be played based on the motion parameters, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position.
With the stereo processing method and system for an earphone assembly, and the earphone assembly, according to the embodiments of the disclosure, each earphone in the assembly can promptly adjust the stereo sound it produces to simulate the stereo change caused by the earphone's movement relative to the sound source, so that the wearer's perception of the position and direction of the sound source remains realistic while moving with the earphones, improving the experience in scenarios such as games.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate various embodiments generally by way of example and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Such embodiments are illustrative, and are not intended to be exhaustive or exclusive embodiments of the present apparatus or method.
Fig. 1 shows a schematic overview of the communication connections between the various headsets of a headset assembly and with another device according to an embodiment of the disclosure;
fig. 2 shows a flow chart of a stereo processing method of a headphone assembly according to an embodiment of the present disclosure;
fig. 2(a) shows a flow diagram of a motion parameter based stereo processing of a headset assembly according to an embodiment of the present disclosure;
FIG. 3(a) shows a schematic processing diagram of a mono audio signal according to an embodiment of the present disclosure;
fig. 3(b) shows a schematic processing diagram of a binaural audio signal according to an embodiment of the disclosure;
fig. 4 illustrates a timing diagram of a stereo processing method of a headphone assembly according to an embodiment of the present disclosure;
fig. 5(a) shows a schematic structural diagram of a bluetooth physical frame according to an embodiment of the present disclosure;
fig. 5(b) shows a schematic structural diagram of a bluetooth physical frame according to another embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a stereo processing system of a headset assembly according to an embodiment of the disclosure.
Detailed Description
For a better understanding of the technical aspects of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings. Embodiments of the present disclosure are described in further detail below with reference to the figures, but the present disclosure is not limited thereto. Unless the steps described herein depend on one another, the order in which they are described is given by way of example and should not be construed as limiting; those skilled in the art will appreciate that the order may be adjusted as long as doing so does not break the logical relationships between the steps or render the overall process impracticable.
Fig. 1 shows a schematic diagram of the communication connections between the headsets of a headset assembly and with another device according to an embodiment of the disclosure. As shown in fig. 1, the communication system 100 established between the headset assembly and another device comprises the other device 101, a first headset 102 and a second headset 103. The other device 101 may be any of various portable smart terminals, including but not limited to a cell phone, a tablet, a wearable smart device, and the like. The first headset 102 establishes a first communication connection 104 with the other device 101, and also establishes a second communication connection 105 with the second headset 103. The first headset 102 can transmit the relevant communication parameters to the second headset 103, so that the second headset 103 uses them to listen to the first communication connection, i.e. the listening connection 106; the relevant communication parameters may be transmitted to the second headset 103 directly or via a relay device, which may be any one or a combination of a charging box, the other device 101, a wired circuit, and so on. In some embodiments, the relevant communication parameters include, but are not limited to, the communication connection address of the other device 101 and encryption parameter information of the connection, so that the second headset 103 need not perform pairing and connection establishment itself but can masquerade as the first headset 102 to listen for and receive the signals transmitted by the other device 101 over the first communication connection 104. The communication connection includes but is not limited to Bluetooth, WiFi, radio frequency, wired transmission, and the like. Because the second headset 103 listens to the first communication connection 104, that connection does not have to be established a second time, and the first headset 102 does not have to forward all audio data it receives from the other device 101 to the second headset 103; information can therefore be transferred between the other device 101 and the two headsets 102 and 103 more efficiently, and the time difference between the information received by the first headset 102 and the second headset 103 is reduced, improving their synchronization.
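The sketch below illustrates the kind of parameter set the first headset might forward so the second can listen to the existing link. The field names (peer_address, encryption_info, timing_info) are assumptions for illustration only; the patent only states that the communication connection address and encryption parameter information are among the relevant parameters.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SniffParameters:
    """Illustrative container for the relevant communication parameters."""
    peer_address: bytes                                   # connection address of the other device 101
    encryption_info: bytes                                # encryption parameters of the first communication connection
    timing_info: List[int] = field(default_factory=list)  # any timing data needed to follow the link

def forward_sniff_parameters(params: SniffParameters) -> SniffParameters:
    # Sent from the first headset to the second, directly or via a relay
    # (charging box, the other device 101, or a wired circuit). With these the
    # second headset can listen to the first communication connection without
    # pairing and establishing a connection of its own.
    return params
```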
As shown in fig. 1, the motion parameters of the first earphone 102 may be transmitted to the second earphone 103 via the second communication connection 105, such that the audio signals to be played by the first earphone 102 and the second earphone 103 may be adjusted based on the motion parameters, respectively, such that the adjusted audio signals simulate a change in the stereo effect of the sound caused by the movement of the respective earphone with respect to the sound source position. By adjusting the stereo effect of the audio signal in almost real time in response to the relative movement of the headphones, the position and localization of the sound source is made more realistic for the wearer of the headphones, e.g. for a fixed sound source, the wearer perceives a fixed sound source regardless of how the headphones move (pan, rotate, accelerate, etc.). In some embodiments, the second communication connection 105 may also be used to transmit between the first earphone 102 and the second earphone 103 synchronized playback information for enabling the respective earphones of the earphone assembly to play back audio signals simultaneously, thereby improving the synchronization effect of stereo playback between the two earphones, further improving the stereo listening experience for the earphone wearer.
Fig. 2 shows a flow chart of a stereo processing method 200 of a headphone assembly according to an embodiment of the disclosure. As shown in fig. 2, at step 201, a motion parameter of the first earphone 102 is detected. The first earphone 102 carries sensors capable of detecting its motion parameters, including but not limited to acceleration sensors, position sensors, inertial sensors, gyroscopes, and the like. The sensors detect the motion parameters of the first earphone, which include but are not limited to angular velocity, acceleration, displacement, position and orientation; the motion parameters may be any one or a combination of several of these. In step 202, the detected motion parameters of the first earphone 102 are transmitted to the second earphone 103. In one embodiment, the motion parameters may be transmitted between the earphones via the second communication connection 105. In step 203, the audio signals to be played by the first and second earphones are adjusted based on the motion parameters, respectively, so that the adjusted audio signals simulate the sound changes caused by the motion of the respective earphones relative to the sound source position. When the position of the wearer, and hence of the earphones, changes relative to the position of the sound source, the sensor on the first earphone detects a motion parameter indicating that change. Each earphone in the assembly then applies stereo processing to the audio signal it is about to play, based on the motion parameter, to simulate the audio signal the wearer would actually hear after moving relative to the sound source. In this way the method adjusts the stereo produced by the earphones in time to simulate the stereo corresponding to the new position relative to the sound source.
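A minimal end-to-end sketch of the flow of Fig. 2 follows. All names here (MotionParams, detect_motion, transmit_to_peer, adjust_audio) are placeholders invented for illustration; the patent does not prescribe these interfaces, and the trivial gain in adjust_audio merely stands in for the HRTF-based filtering detailed below.

```python
import numpy as np

class MotionParams:
    def __init__(self, yaw_deg: float = 0.0):
        self.yaw_deg = yaw_deg  # head rotation about the vertical axis

def detect_motion() -> MotionParams:
    # Step 201: read the inertial sensor (gyroscope/accelerometer) on the first earphone.
    return MotionParams(yaw_deg=30.0)

def transmit_to_peer(motion: MotionParams) -> MotionParams:
    # Step 202: send the parameters to the second earphone over the second communication connection.
    return motion

def adjust_audio(frame: np.ndarray, motion: MotionParams) -> np.ndarray:
    # Step 203: each earphone adjusts its own audio so the result matches the
    # new position relative to the sound source (placeholder gain only).
    return frame * (0.5 + 0.5 * np.cos(np.deg2rad(motion.yaw_deg)))

motion = detect_motion()
peer_motion = transmit_to_peer(motion)
left_out = adjust_audio(np.zeros(256), motion)
right_out = adjust_audio(np.zeros(256), peer_motion)
```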
In some embodiments, adjusting the audio signals to be played by the first earphone and the second earphone based on the motion parameters in step 204 proceeds as shown in fig. 2(a), which shows a flow chart of motion-parameter-based stereo processing for a headphone assembly according to an embodiment of the present disclosure. At step 2041, the position and orientation of each of the first and second earphones relative to the sound source is determined based on the motion parameters: each earphone in the assembly analyzes and computes the detected or received motion parameters to determine its own position and orientation relative to the sound source. In step 2042, based on the determined position and orientation of each earphone, the filter coefficients corresponding to the closest stored position and orientation relative to the sound source are selected from a predetermined filter list. The predetermined filter list stores positions and orientations relative to the sound source together with the filter coefficients corresponding to them, which can be determined in advance by measurement. In step 2043, the audio signal to be played by each earphone is filtered using the selected filter coefficients, so that the filtered signal simulates the stereo change caused by the motion of the earphone relative to the sound source position.
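The sketch below shows one way steps 2041-2043 could be realized, under assumptions not taken from the patent: the filter list is modeled as a table keyed by (azimuth, elevation) in degrees, each entry holding one set of FIR coefficients, and nearest-neighbour selection uses squared angular distance.

```python
import numpy as np

# Toy filter list; a real list would be built from measured HRTFs (see the next sketch).
FILTER_LIST = {
    (0.0, 0.0):   np.array([1.0, 0.0, 0.0]),   # source straight ahead
    (90.0, 0.0):  np.array([0.6, 0.3, 0.1]),   # source to the side
    (180.0, 0.0): np.array([0.4, 0.3, 0.3]),   # source behind
}

def select_filter(azimuth: float, elevation: float) -> np.ndarray:
    # Step 2042: pick the stored coefficients whose position/orientation is
    # closest to the one derived from the motion parameters.
    def angular_distance(key):
        az, el = key
        return (az - azimuth) ** 2 + (el - elevation) ** 2
    closest = min(FILTER_LIST, key=angular_distance)
    return FILTER_LIST[closest]

def apply_filter(audio: np.ndarray, coefficients: np.ndarray) -> np.ndarray:
    # Step 2043: FIR filtering of the audio signal to be played.
    return np.convolve(audio, coefficients, mode="same")

# Step 2041 would derive (azimuth, elevation) from the motion parameters;
# here an example value is passed through directly.
filtered = apply_filter(np.random.randn(1024), select_filter(azimuth=85.0, elevation=0.0))
```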
In some embodiments, the filter list of step 2042, which stores positions and orientations relative to the sound source and the corresponding filter coefficients, may be predetermined by measurement as follows. Step 2042a: head-related transfer functions are measured in advance for different positions and orientations relative to the sound source. A head-related transfer function (HRTF) describes how sound arriving from a given position is transformed before reaching the ear; it is used in audio localization to transform game sound effects so that the user perceives the sound source as coming from different positions and directions. Step 2042b: the corresponding filter coefficients are determined based on the head-related transfer functions measured in advance for the different positions and orientations. Specifically, microphones can be mounted at the eardrum positions of a head model, and sound can be emitted from a plurality of fixed sound source positions. The sound collected by the microphones is analyzed and processed to obtain the position-specific sound data, and a filter is designed to reproduce that data; in other words, the head-related transfer function measured in advance is taken as the transfer function of the filter, and the corresponding filter coefficients are determined from it. Step 2042c: the filter coefficients are stored in association with the position and orientation, thereby building the filter list. Storing the positions and orientations relative to the sound source together with the corresponding filter coefficients as a look-up table allows the filter coefficients to be determined quickly.
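A minimal sketch of building such a filter list follows. The measurement data here is simulated (synthetic impulse responses); in a real system hrir_for() would load the responses recorded by the microphones at the head model's eardrum positions, and the function and variable names are assumptions for illustration.

```python
import numpy as np

def hrir_for(azimuth: float, elevation: float, taps: int = 64) -> np.ndarray:
    # Placeholder for the pre-measured head-related impulse response at this
    # source position; a real system would load measurement data instead.
    rng = np.random.default_rng(int(azimuth * 100 + elevation))
    return rng.standard_normal(taps) * np.exp(-np.arange(taps) / 8.0)

def build_filter_list(positions):
    filter_list = {}
    for azimuth, elevation in positions:
        # Steps 2042a/b: the measured HRTF is used as the transfer function of
        # the filter, so its impulse response gives the FIR coefficients.
        filter_list[(azimuth, elevation)] = hrir_for(azimuth, elevation)
    # Step 2042c: coefficients stored in association with position and orientation.
    return filter_list

FILTER_LIST = build_filter_list([(az, 0.0) for az in range(0, 360, 15)])
```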
In some embodiments, the filter is a digital filter, and the filter coefficients of the digital filter can be adaptively configured according to the motion parameters (one or more of angular velocity, acceleration, position and orientation) of the headset.
In some embodiments, echo can also be used to make the sound effect more spatial: the position-specific sound and echo data are captured together, and the filter is designed from both so that the filtered output reproduces them.
In some embodiments, when the audio signal fed to the headphones (the first headphone 102 and the second headphone 103) of the headphone assembly is mono, the position and orientation of the left-ear headphone and of the right-ear headphone relative to the sound source are obtained respectively; a head-related transformation function is determined for each based on the position and orientation relative to the sound source; and the filter coefficients corresponding to each head-related transformation function are extracted from the filter list, serving as the left-channel filter coefficients for the left-ear headphone and the right-channel filter coefficients for the right-ear headphone.
Fig. 3(a) shows an output schematic diagram of a mono audio signal according to an embodiment of the present disclosure. As shown in fig. 3(a), when a mono audio signal is fed to headphones, the left-ear headphones and the right-ear headphones acquire their own positions and orientations with respect to a sound source, respectively, and determine their head-related transformation functions, and determine corresponding filter coefficients by table look-up. Therefore, the left ear earphone filter and the right ear earphone filter are respectively configured by the corresponding filter coefficients to filter the received audio signals, finally, the left ear earphone outputs a left channel audio signal, and the right ear earphone outputs a right channel audio signal.
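The sketch below mirrors the mono path of Fig. 3(a), reusing the select_filter and apply_filter helpers assumed in the earlier sketches: each earphone looks up its own position and orientation relative to the source and filters the same mono signal with its own coefficients.

```python
def render_mono(mono, left_pose, right_pose, select_filter, apply_filter):
    # left_pose / right_pose: (azimuth, elevation) of each earphone relative to
    # the sound source, derived from the motion parameters.
    left_coeffs = select_filter(*left_pose)    # left-channel filter coefficients for the left-ear earphone
    right_coeffs = select_filter(*right_pose)  # right-channel filter coefficients for the right-ear earphone
    left_out = apply_filter(mono, left_coeffs)
    right_out = apply_filter(mono, right_coeffs)
    return left_out, right_out
```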
In some embodiments, when the audio signal fed to the headphones (the first headphone 102 and the second headphone 103) of the headphone assembly is two-channel (binaural), the two channels being a left channel and a right channel, a first set of positions and orientations of the left-ear headphone and the right-ear headphone relative to the sound source is obtained for the left channel, and a second set of positions and orientations of the left-ear headphone and the right-ear headphone relative to the sound source is obtained for the right channel; each set of positions and orientations comprises the position and orientation of the left-ear headphone and the position and orientation of the right-ear headphone. A first set of head-related transformation functions is determined based on the first set of positions and orientations, and a second set of head-related transformation functions is determined based on the second set of positions and orientations. The left-channel filter coefficients of the left-ear headphone and of the right-ear headphone corresponding to the first set of head-related transformation functions are extracted from the filter list, and the right-channel filter coefficients of the left-ear headphone and of the right-ear headphone corresponding to the second set are extracted from the filter list. The left-channel and right-channel filter coefficients of the left-ear headphone are then combined as the filter coefficients of the left-ear headphone, and the left-channel and right-channel filter coefficients of the right-ear headphone are combined as the filter coefficients of the right-ear headphone.
Fig. 3(b) shows an output schematic diagram of a two-channel audio signal according to an embodiment of the present disclosure. As shown in fig. 3(b), when a two-channel audio signal is fed to the headphones, the left-ear headphone and the right-ear headphone each acquire their own position and orientation relative to the sound source for the left channel, determine the corresponding head-related transformation functions, and determine the corresponding filter coefficients by table look-up. The left-channel filter of the left-ear headphone and the left-channel filter of the right-ear headphone then filter the received left-channel audio signal with their respective coefficients; the left-channel signal filtered by the left-ear headphone's left-channel filter is routed to the left-ear headphone for output, and the left-channel signal filtered by the right-ear headphone's left-channel filter is routed to the right-ear headphone for output. Likewise, the left-ear and right-ear headphones each acquire their own position and orientation relative to the sound source for the right channel, determine the corresponding head-related transformation functions, and look up the corresponding filter coefficients. The right-channel filter of the left-ear headphone and the right-channel filter of the right-ear headphone then filter the received right-channel audio signal with their respective coefficients; the right-channel signal filtered by the left-ear headphone's right-channel filter is routed to the left-ear headphone for output, and the right-channel signal filtered by the right-ear headphone's right-channel filter is routed to the right-ear headphone for output. Finally, the left-channel signal filtered by the left-ear headphone's left-channel filter and the right-channel signal filtered by the left-ear headphone's right-channel filter are synthesized as the output of the left-ear headphone, and the left-channel signal filtered by the right-ear headphone's left-channel filter and the right-channel signal filtered by the right-ear headphone's right-channel filter are synthesized as the output of the right-ear headphone.
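The sketch below mirrors the two-channel path of Fig. 3(b), again reusing the assumed select_filter and apply_filter helpers: each ear applies a left-channel filter and a right-channel filter chosen for its pose relative to the respective channel's source, then sums (synthesizes) the two filtered signals.

```python
def render_stereo(left_channel, right_channel,
                  poses_vs_left_source,   # first set: (left-ear pose, right-ear pose) relative to the left-channel source
                  poses_vs_right_source,  # second set: (left-ear pose, right-ear pose) relative to the right-channel source
                  select_filter, apply_filter):
    left_ear_pose_L, right_ear_pose_L = poses_vs_left_source
    left_ear_pose_R, right_ear_pose_R = poses_vs_right_source
    # Left channel filtered by each ear's left-channel filter.
    left_ear_from_L = apply_filter(left_channel, select_filter(*left_ear_pose_L))
    right_ear_from_L = apply_filter(left_channel, select_filter(*right_ear_pose_L))
    # Right channel filtered by each ear's right-channel filter.
    left_ear_from_R = apply_filter(right_channel, select_filter(*left_ear_pose_R))
    right_ear_from_R = apply_filter(right_channel, select_filter(*right_ear_pose_R))
    # Synthesis: each earphone outputs the sum of its two filtered signals.
    return left_ear_from_L + left_ear_from_R, right_ear_from_L + right_ear_from_R
```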
In some embodiments, audio data information from the other device 101 is received by the first earphone 102 via the first communication connection 104, and the same audio data information is listened to by the second earphone 103, during a first time period within an Nth communication frame, where N is a natural number. After the communication connections shown in fig. 1 have been established, the other device 101 transmits audio data information to the first earphone 102; the first earphone 102 receives it, and the second earphone 103 also obtains it by virtue of its listening status. Fig. 4 shows a timing diagram of a stereo processing method of a headphone assembly according to an embodiment of the present disclosure; as shown in fig. 4, the other device 101 transmits the audio data information in the first time period 402 (time 401 to time 403) of the Nth frame.
In some embodiments, a transmission response packet is transmitted by the first earphone 102 and/or the second earphone 103 to the other device 101 via the first communication connection during a second time period within the (N+1)th communication frame. The transmission response packet is a response packet containing ACK/NACK information: transmitting ACK to the other device 101 indicates that the first earphone 102 and the second earphone 103 received the audio data information successfully, while transmitting NACK indicates that they did not, in which case the other device 101 needs to resend the audio data information.
As an example, once the number of times the other device 101 has retransmitted the audio data information reaches a first preset value and one of the first earphone 102 and the second earphone 103 has still not received it successfully, the other earphone transmits a response packet indicating successful reception to the other device 101 via the first communication connection 104 and forwards the audio data information to the earphone that failed to receive it.
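A small decision sketch for the retransmission cap described above follows. The constant and function names are illustrative assumptions, not values from the patent.

```python
FIRST_PRESET_RETRY_LIMIT = 3   # assumed value for illustration only

def handle_retry_exhausted(first_ok: bool, second_ok: bool, retries: int):
    """Return (ack_to_other_device, forward_between_earphones)."""
    if retries < FIRST_PRESET_RETRY_LIMIT:
        # Below the first preset value: keep asking the other device to retransmit (NACK).
        return False, False
    if first_ok != second_ok:
        # Exactly one earphone has the data: ACK the other device anyway and let
        # the successful earphone forward the audio data to its peer.
        return True, True
    # Both received: plain ACK. Both failed: handled by the packet-loss
    # compensation path described in the next paragraph.
    return first_ok and second_ok, False
```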
As an example, once the number of times the other device 101 has retransmitted the audio data information reaches a second preset value and neither the first earphone 102 nor the second earphone 103 has received it successfully, the first earphone 102 and/or the second earphone 103 transmits a response packet indicating successful reception to the other device 101 via the first communication connection 104, and the missing audio data is recovered using a packet loss compensation technique. The packet loss compensation technique uses the autocorrelation function of a portion of correctly received audio data to obtain its power spectrum, and uses that power spectrum to estimate the power spectrum of the missing audio signal, from which the missing audio signal is recovered.
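One simple way to realize this concealment idea is sketched below: the power spectrum of the last correctly received frame (equivalently, the Fourier transform of its circular autocorrelation, by the Wiener-Khinchin relation) is used to synthesize a replacement frame with the same magnitude spectrum and random phase. This is an illustrative reconstruction, not the patent's exact algorithm.

```python
import numpy as np

def conceal_lost_frame(last_good_frame: np.ndarray) -> np.ndarray:
    n = len(last_good_frame)
    # |FFT|^2 of the frame is the FFT of its circular autocorrelation,
    # i.e. the power spectrum of the correctly received audio.
    spectrum = np.fft.rfft(last_good_frame)
    power = np.abs(spectrum) ** 2
    # Estimate the missing frame's magnitude from the power spectrum and draw a
    # random phase, so the substitute frame is spectrally similar to the lost one.
    magnitude = np.sqrt(power)
    phase = np.random.uniform(0.0, 2.0 * np.pi, size=magnitude.shape)
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=n)
```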
Thus, the above embodiments limit how many times the other device 101 retransmits the same audio data packet, reducing the latency of audio transmission, while the various compensation and correction measures preserve the accuracy of the transmitted audio. The sending of the transmission response packet described above occurs in the second time period 408 (time 407 to time 409) of the (N+1)th communication frame, in which the other device 101 receives the transmission response packet, as shown in fig. 4.
In some embodiments, the transmission response packet indicates, directly or indirectly, how its sender received the audio data information, and whether reception succeeded can be established by the first earphone 102 transmitting information related to the audio data received from the other device 101 to the second earphone 103 via the second communication connection 105. Fig. 4 illustrates transmission from the first earphone 102 to the second earphone 103 as an example; the transmission may equally run from the second earphone 103 to the first earphone 102, and everything described in conjunction with fig. 4 applies with the direction reversed, so it is not repeated here. The information related to the audio data may include an indication packet, which indicates, directly or indirectly, how its sender received the audio data information.
First, the indication packet may include indication information indicating that the first headset 102 has successfully received or has not successfully received the audio data packet from the other device 101, and after receiving the indication packet, the second headset 103 sends a transmission acknowledgement packet to the other device based on the indication information, wherein the transmission acknowledgement packet includes ACK/NACK information.
Second, the indication packet may include an error correction code packet (also referred to as an ECC packet) that contains an error correction code obtained by encoding the audio data received by the first headphone 102, but not the audio data itself. The first headphone 102 only encodes the audio data if it received it successfully, so the very arrival of an ECC packet at the second headphone 103 indicates that the audio data was successfully received; at that point a transmission response packet may be sent to the other device 101 by the first headphone 102, and the second headphone 103 may send its transmission response packet to the other device 101 after receiving the ECC packet. By transmitting ECC packets instead of the audio data itself, correct audio data can be ensured while the amount of data exchanged between the two headphones is significantly reduced, further increasing the reliability of the Bluetooth data transmission.
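The sketch below shows only the data flow of the ECC-packet idea: the first headphone sends redundancy derived from the audio payload rather than the payload itself. A real implementation would use Reed-Solomon or BCH codes, as noted later in the description; the interleaved XOR parity here is a far weaker stand-in chosen only to keep the example short.

```python
def make_ecc_packet(audio_payload: bytes, block_size: int = 16) -> bytes:
    # Fold the payload into block_size parity bytes (illustrative only; not a
    # true error-correcting code such as RS or BCH).
    parity = bytearray(block_size)
    for i, byte in enumerate(audio_payload):
        parity[i % block_size] ^= byte
    return bytes(parity)

# The first headphone only has something to encode if it received the audio,
# so the mere arrival of an ECC packet tells the second headphone that
# reception succeeded.
ecc_packet = make_ecc_packet(b"\x01\x02\x03\x04" * 32)
```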
Thirdly, the indication packet may further include an audio data packet received by the first headset 102 from the other device 101, and the first headset 102 directly packages and transmits the audio data to the second headset 103 after successfully receiving the audio data, and then the first headset 102 transmits a transmission response packet to the other device 101, or the second headset 103 transmits a transmission response packet to the other device 101 after receiving the audio data packet.
The process described above, in which the first earphone 102 sends the indication packet to the second earphone 103 via the second communication connection 105, occurs during a third time period within the Nth and (N+1)th communication frames, outside the first time period 402 and the second time period 408; the third time period may include a transition time period. As shown in fig. 4, the third time period may follow the first time period 402 in the Nth communication frame, with 404 being the transition time period (time 403 to 405) and 406 the third time period (time 405 to 407). In some embodiments, the third time period may instead be the time period 410 (time 409 to 411) following the second time period 408 in the (N+1)th communication frame, which likewise comprises a transition time period and the third time period. In some embodiments, the other device sends audio data to the earphones in the first time period 402, the indication packet is transmitted between the earphones of the assembly in the third time period 406, the earphones send the transmission response packet to the other device 101 in the second time period 408, and the motion parameters and/or synchronized playback information are transmitted between the earphones in the third time period 410, so that the transfers serving these different purposes in time periods 402, 406, 408 and 410 are independent and do not interfere with one another.
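The slot-to-role mapping of Fig. 4 can be summarized as in the sketch below. The slot names carry the figure's reference numerals; the role descriptions paraphrase the text above, and no actual timing values are implied.

```python
FRAME_SCHEDULE = {
    "frame_N": [
        ("first_time_period_402", "other device 101 sends audio data; first earphone receives, second earphone listens"),
        ("transition_404", "transition/guard interval"),
        ("third_time_period_406", "indication packet exchanged between the two earphones"),
    ],
    "frame_N_plus_1": [
        ("second_time_period_408", "earphone(s) send transmission response packet (ACK/NACK) to the other device 101"),
        ("third_time_period_410", "motion parameters and/or synchronized playback information exchanged between the earphones"),
    ],
}

def role_of(slot_name: str) -> str:
    # Look up which purpose a given time period serves.
    for frame, slots in FRAME_SCHEDULE.items():
        for name, role in slots:
            if name == slot_name:
                return f"{frame}: {role}"
    raise KeyError(slot_name)
```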
In some embodiments, the motion parameters are transmitted by the first earpiece 102 to the second earpiece 103 during a third time period within the nth communication frame and the (N + 1) th communication frame, excluding the first time period and the second time period. During the third time period, the motion parameters (and/or synchronized playback information) are transmitted between the first 102 and second 103 headphones via the second communication connection 105 so that the respective headphones perform stereo processing and synchronized playback of the audio signal to be played back, and the second communication connection 105 and the first communication connection 104 are independent of each other. Therefore, based on the motion parameters, the first earphone 102 and the second earphone 103 can generate more accurate surround sound, and transmit the motion parameters in a third time period independent of the first time period and the second time period, so that the bluetooth transmission between the earphones and the smart device is not interfered, and the stability of the communication connection shown in fig. 1 is guaranteed.
In some embodiments, the motion parameters (and/or the synchronous playing information) are integrated in the indication packet to be transmitted together, so that the transmission amount can be effectively reduced, and the transmission efficiency is improved. The process of combining the indication packet and the motion parameter (and/or the synchronized playback information) when transmitting over the bluetooth connection will be described with reference to fig. 5(a) and 5 (b).
Fig. 5(a) shows the structure of a Bluetooth physical frame according to an embodiment of the present disclosure, and fig. 5(b) shows the structure of a Bluetooth physical frame according to another embodiment of the present disclosure. Bluetooth transmission supports two data rates: the basic rate and the enhanced rate. In the basic-rate packet format shown in fig. 5(a), a Bluetooth physical frame comprises three fields, from least significant bit to most significant bit: an access code 501, a packet header 502, and a payload 503. The access code 501 is the identifier of the piconet and is used for timing synchronization, offset compensation, paging and inquiry; the packet header 502 contains information for Bluetooth link control; and the payload 503 carries the payload information, which in this disclosure may be Bluetooth audio data. The term "audio data packet" used herein may therefore mean that the payload 503 corresponds to the audio data remaining after the access code 501, the packet header 502 and so on are stripped from the Bluetooth physical frame.
The enhanced-rate packet format is shown in fig. 5(b): the Bluetooth physical frame comprises six fields, from least significant bit to most significant bit: an access code 504, a packet header 505, a guard interval 506, a sync field 507, an enhanced-rate payload 508 and a packet tail 509, where the access code 504, the packet header 505 and the enhanced-rate payload 508 are similar to the access code 501, the packet header 502 and the payload 503 and are not described again here. The guard interval 506 is the interval between the packet header 505 and the sync field 507; the sync field 507 comprises a synchronization sequence, typically used for differential phase-shift-keying modulation; and the packet tail 509 is set differently for different modulation schemes. In some embodiments, for synchronized data there may also be, for example, 16 bits at the end of the payload 503 and the enhanced-rate payload 508 for a cyclic redundancy check. In some embodiments, the motion information and/or the synchronized playback information can be integrated into the indication packet, so that the useful information of the indication packet and the motion parameters (and/or the synchronized playback information) are placed together in the payload 503 or the enhanced-rate payload 508. The indication packet and the motion information packet and/or synchronized-playback-information packet are thereby combined into a single packet (one Bluetooth physical frame) that shares the access code 501 or 504, the packet header 502 or 505, and so on. This simplifies the structure of the Bluetooth physical frame, significantly reduces the overall amount of data transmitted (for example, the access code and packet header overhead), shortens the switching time between multiple physical frames, lowers control complexity, reduces mutual interference between the two packets, and thus further increases data transmission efficiency. In some embodiments, the receiver of the indication packet starts receiving before the scheduled transmission time of the indication packet, which improves reception accuracy and avoids missed reception.
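The sketch below illustrates packing the indication information and the motion parameters into a single payload (field 503 or 508) so that both ride in one physical frame and share one access code and header. The exact field layout (a 1-byte flag, three 32-bit floats, and a trailing 16-bit value) is an assumption for illustration, and the simple checksum merely stands in for the 16-bit cyclic redundancy check mentioned above.

```python
import struct

RECEIVED_OK = 0x01  # flag bit: sender successfully received the audio data

def pack_payload(audio_received: bool, yaw: float, pitch: float, roll: float) -> bytes:
    flags = RECEIVED_OK if audio_received else 0x00
    body = struct.pack("<Bfff", flags, yaw, pitch, roll)
    checksum = sum(body) & 0xFFFF          # stand-in for the 16-bit CRC
    return body + struct.pack("<H", checksum)

def unpack_payload(payload: bytes):
    body, checksum = payload[:-2], struct.unpack("<H", payload[-2:])[0]
    assert checksum == (sum(body) & 0xFFFF), "checksum mismatch"
    flags, yaw, pitch, roll = struct.unpack("<Bfff", body)
    return bool(flags & RECEIVED_OK), (yaw, pitch, roll)

received_ok, motion = unpack_payload(pack_payload(True, 30.0, 0.0, 0.0))
```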
In addition, the error correction code contained in the ECC packet described here is an error correction code for the audio data in the payload 503 or the enhanced-rate payload 508, and may use various coding schemes, including but not limited to Reed-Solomon (RS) coding and BCH (Bose, Ray-Chaudhuri and Hocquenghem) coding. In some embodiments, the ECC packets reuse the Bluetooth protocol layers above the physical layer, such as the Bluetooth medium access control (MAC) layer and the Bluetooth host controller interface layer; a 2 Mb/s symbol rate may be used at the physical layer, with quadrature phase shift keying (QPSK) or Gaussian frequency shift keying (GFSK) as the modulation scheme. The standard Bluetooth physical layer may use a 1 Mb/s symbol rate; using the higher symbol rate for ECC packets allows more error-correction bits to be transmitted and therefore provides stronger error-correction capability.
The present disclosure also relates to a stereo processing system, and fig. 6 shows a schematic diagram of a stereo processing system according to an embodiment of the present disclosure, and as shown in fig. 6, the system 600 includes a detection module 601, a transmission module 602, and an adjustment module 603. The detection module 601 is configured to detect a motion parameter of the first earpiece; the transmitting module 602 is configured to transmit the detected motion parameters of the first earpiece to the second earpiece; the adjustment module 603 is configured to adjust the audio signals to be played by the first and second headphones, respectively, based on the motion parameters, such that the adjusted audio signals simulate sound changes caused by the motion of the respective headphones relative to the sound source position.
In some embodiments, the adjustment module 603 is specifically configured to: determining a position and an orientation of each of the first and second earpieces relative to the sound source based on the motion parameters; selecting a filter coefficient corresponding to the closest position and orientation relative to the sound source in a predetermined filter list based on the determined position and orientation of each headphone relative to the sound source; the audio signals to be played back by the respective headphones are subjected to filter processing using the selected filter coefficients.
In some embodiments, the adjustment module 603 is specifically configured to predetermine the filter list by: pre-measuring head-related transformation functions for different positions and orientations relative to the sound source; determining corresponding filter coefficients based on head-related transformation functions of different positions and orientations measured in advance; the filter coefficients are stored in association with the position and orientation, thereby constructing a filter list.
In some embodiments, the adjustment module 603 is specifically configured to, when the audio signal is mono: respectively acquiring the positions and the directions of the left ear earphone and the right ear earphone relative to a sound source; determining a head-related transformation function based on the location and orientation of the sound source; and extracting filter coefficients corresponding to the head-related transformation function from the filter list, wherein the filter coefficients are respectively used as a left channel filter coefficient of the left ear earphone and a right channel filter coefficient of the right ear earphone.
In some embodiments, the adjusting module 603 is specifically configured to, when the audio signal is a binaural, the binaural including a left channel and a right channel: respectively acquiring a first group of positions and orientations of a left ear earphone and a right ear earphone relative to a sound source aiming at a left sound channel, and respectively acquiring a second group of positions and orientations of the left ear earphone and the right ear earphone relative to the sound source aiming at a right sound channel; determining a first set of head related transformation functions based on the first set of positions and orientations, and a second set of head related transformation functions based on the second set of positions and orientations; extracting left channel filter coefficients of a left ear headphone and left channel filter coefficients of a right ear headphone corresponding to the first set of head-related transformation functions from the filter list, and extracting right channel filter coefficients of the left ear headphone and right channel filter coefficients of the right ear headphone corresponding to the second set of head-related transformation functions from the filter list; and synthesizing the left channel audio signal filtered by the left channel filter of the left ear earphone and the right channel audio signal filtered by the right channel filter of the left ear earphone as the output of the left ear earphone, and synthesizing the left channel audio signal filtered by the left channel filter of the right ear earphone and the right channel audio signal filtered by the right channel filter of the right ear earphone as the output of the right ear earphone.
In some embodiments, the motion parameters include any one or more of angular velocity, acceleration, displacement, position, and orientation.
The system adjusts the stereo produced by the earphones in time, based on the motion parameters, to simulate the stereo change caused by the change in position relative to the sound source.
The present disclosure also relates to a headphone assembly comprising at least a first earphone and a second earphone. The first earphone detects its motion parameters using a detection device arranged on it and transmits the detected motion parameters to the second earphone; the first earphone and the second earphone each adjust the audio signal to be played based on the motion parameters, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position. The earphones can thus adjust the stereo they produce in time, based on the motion parameters, to simulate the stereo change caused by the change in position relative to the sound source.
Moreover, although exemplary embodiments have been described herein, the scope of the disclosure includes any and all embodiments having equivalent elements, modifications, omissions, combinations (for example, of aspects across various embodiments), adaptations or alterations based on the disclosure. The elements of the claims are to be interpreted broadly based on the language employed in the claims and are not limited to the examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with the true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Claims (18)
1. A stereo processing method for a headphone assembly, the headphone assembly comprising a first headphone and a second headphone, the stereo processing method comprising:
detecting a motion parameter of the first earphone;
transmitting the detected motion parameters of the first headset to the second headset; and
the audio signals to be played by the first and second headphones are adjusted based on the motion parameters, respectively, such that the adjusted audio signals simulate sound changes caused by motion of the respective headphones relative to the sound source location.
2. The stereo processing method of claim 1, wherein the adjusting the audio signals to be played by the first and second headphones based on the motion parameters comprises:
determining a position and an orientation of each of the first and second earpieces relative to the sound source based on the motion parameters;
selecting a filter coefficient corresponding to a closest position and orientation relative to the sound source in a predetermined filter list based on the determined position and orientation of each earpiece relative to the sound source;
performing a filtering process on the audio signals to be played by the respective headphones using the selected filter coefficients.
3. Stereo processing method according to claim 2, characterized in that the filter list is predetermined by:
pre-measuring head related transformation functions for different positions and orientations relative to the sound source;
determining the corresponding filter coefficients based on the head-related transform functions of the different positions and orientations measured in advance;
storing the filter coefficients in association with the position and orientation, thereby constructing the filter list.
4. Stereo processing method according to claim 3, wherein when the audio signal is mono:
respectively acquiring the positions and the directions of the left ear earphone and the right ear earphone relative to the sound source;
determining the head-related transformation function based on the location and orientation of the sound source; and
and extracting the filter coefficients corresponding to the head-related transform function from the filter list, and respectively using the filter coefficients as left channel filter coefficients of the left ear earphone and right channel filter coefficients of the right ear earphone.
5. Stereo processing method according to claim 3, wherein when the audio signal is a binaural, the binaural comprises a left channel and a right channel:
for the left channel, respectively obtaining a first set of positions and orientations of a left ear headphone and a right ear headphone relative to the sound source, and for the right channel, respectively obtaining a second set of positions and orientations of the left ear headphone and the right ear headphone relative to the sound source;
determining a first set of head-related transformation functions based on the first set of positions and orientations, and a second set of head-related transformation functions based on the second set of positions and orientations;
extracting left channel filter coefficients of the left ear headphone and left channel filter coefficients of the right ear headphone corresponding to the first set of head related transfer functions from the filter list, and extracting right channel filter coefficients of the left ear headphone and right channel filter coefficients of the right ear headphone corresponding to the second set of head related transfer functions from the filter list; and
and synthesizing a left channel audio signal filtered by the left channel filter of the left ear headphone and a right channel audio signal filtered by the right channel filter of the left ear headphone as the output of the left ear headphone, and synthesizing a left channel audio signal filtered by the left channel filter of the right ear headphone and a right channel audio signal filtered by the right channel filter of the right ear headphone as the output of the right ear headphone.
6. Stereo processing method according to claim 1,
receiving, by the first earpiece via a first communication connection, audio data information from another device and listening, by the second earpiece, for the audio data information from the other device within a first time period within an nth communication frame, N being a natural number;
transmitting, by the first headset and/or the second headset to the other device via the first communication connection, an acknowledgement packet during a second time period within an N +1 th communication frame, the acknowledgement packet indicating a reception status of the audio data information by its sender in a direct or indirect manner;
transmitting, by the first headset, the motion parameter to the second headset in a third time period other than the first time period and the second time period within the nth and N +1 th communication frames.
7. The stereo processing method of claim 6, wherein the motion parameters and synchronized playback information are sent by the first earpiece to the second earpiece during the third time period.
8. The stereo processing method according to claim 6, wherein when the response packet indicates that the receiving condition of the audio data information by its sender is reception failure, the other device retransmits the audio data information.
9. Stereo processing method according to claim 8,
and after the frequency of retransmitting the audio data information by the other equipment reaches a first preset value and one of the first earphone and the second earphone does not successfully receive the audio data information, transmitting a response packet indicating that the audio data information is successfully received to the other equipment by the other earphone through the first communication connection, and forwarding the audio data information to the one earphone.
10. Stereo processing method according to claim 8,
and when the audio data information is not successfully received by the first earphone and the second earphone after the frequency of retransmitting the audio data information by the other equipment reaches a second preset value, the first earphone and/or the second earphone transmits a response packet indicating that the audio data information is successfully received to the other equipment through the first communication connection, and the audio data information is recovered by utilizing a packet loss compensation technology.
11. Stereo processing method according to claim 1, wherein the motion parameters comprise any one or several of angular velocity, acceleration, displacement, position and orientation.
12. A stereo processing system for an earphone assembly including a first earphone and a second earphone, the stereo processing system comprising:
a detection module configured to detect motion parameters of the first earphone;
a transmission module configured to transmit the detected motion parameters of the first earphone to the second earphone; and
an adjustment module configured to adjust the audio signals to be played by the first earphone and the second earphone, respectively, based on the motion parameters, such that the adjusted audio signals simulate the sound changes caused by motion of the respective earphone relative to the sound source position.
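For illustration only: the three modules of claim 12 can be pictured as cooperating components, as in the structural sketch below; all class and method names are hypothetical.

```python
# Structural illustration of claim 12's modules (class and method names are hypothetical).
import numpy as np

class DetectionModule:
    def detect(self):
        raise NotImplementedError  # would read the motion sensor on the first earphone

class TransmissionModule:
    def send(self, params):
        raise NotImplementedError  # would forward the motion parameters to the second earphone

class AdjustmentModule:
    def adjust(self, audio: np.ndarray, params) -> np.ndarray:
        raise NotImplementedError  # would filter the audio so it tracks the earphone's motion

class StereoProcessingSystem:
    def __init__(self, detect: DetectionModule, tx: TransmissionModule, adj: AdjustmentModule):
        self.detect, self.tx, self.adj = detect, tx, adj

    def process(self, audio: np.ndarray) -> np.ndarray:
        params = self.detect.detect()   # detect motion of the first earphone
        self.tx.send(params)            # share the parameters with the second earphone
        return self.adj.adjust(audio, params)
```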
13. The stereo processing system according to claim 12, wherein the adjustment module is configured to perform the following:
determining the position and orientation of each of the first earphone and the second earphone relative to the sound source based on the motion parameters;
selecting, from a predetermined filter list, the filter coefficients corresponding to the closest position and orientation relative to the sound source, based on the determined position and orientation of each earphone relative to the sound source; and
performing a filtering process on the audio signals to be played by the respective earphones using the selected filter coefficients.
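For illustration only: a minimal sketch of the nearest-entry lookup in claim 13, assuming the filter list stores (position, orientation, coefficients) entries with 3-component position and orientation vectors; the distance metric and weighting are assumptions.

```python
# Nearest-entry filter selection in the spirit of claim 13 (distance metric and weighting are assumed).
import numpy as np

def select_coefficients(filter_list, position, orientation, orientation_weight=1.0):
    """filter_list: iterable of (position, orientation, coefficients) tuples.
    Returns the coefficients whose stored position/orientation is closest to the query."""
    best, best_cost = None, float("inf")
    for pos, ori, coeffs in filter_list:
        cost = (np.linalg.norm(np.subtract(position, pos))
                + orientation_weight * np.linalg.norm(np.subtract(orientation, ori)))
        if cost < best_cost:
            best, best_cost = coeffs, cost
    return best

def apply_filter(audio, coeffs):
    return np.convolve(audio, coeffs, mode="same")  # filter the audio with the selected coefficients
```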
14. The stereo processing system according to claim 13, wherein the adjustment module is configured to predetermine the filter list by:
measuring in advance the head-related transfer functions for different positions and orientations relative to the sound source;
determining the corresponding filter coefficients based on the pre-measured head-related transfer functions for the different positions and orientations; and
storing the filter coefficients in association with the corresponding positions and orientations, thereby constructing the filter list.
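For illustration only: one way to build the filter list of claim 14, assuming the pre-measured head-related impulse responses are used directly as FIR coefficients; deriving IIR coefficients or truncated responses would be equally consistent with the claim.

```python
# Building a filter list per claim 14 (using measured impulse responses directly as FIR taps is an assumption).
import numpy as np

def build_filter_list(measurements):
    """measurements: iterable of (position, orientation, impulse_response) from the pre-measurement step.
    Returns a list of (position, orientation, coefficients) entries."""
    filter_list = []
    for pos, ori, hrir in measurements:
        coeffs = np.asarray(hrir, dtype=float)      # derive the filter coefficients from the measured response
        coeffs /= (np.max(np.abs(coeffs)) or 1.0)   # normalise the peak tap to keep the coefficients bounded
        filter_list.append((tuple(pos), tuple(ori), coeffs))
    return filter_list
```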
15. The stereo processing system according to claim 14, wherein, when the audio signal is a mono signal, the adjustment module is configured to perform the following:
acquiring the positions and orientations of the left ear earphone and the right ear earphone relative to the sound source, respectively;
determining the corresponding head-related transfer functions based on the positions and orientations relative to the sound source; and
extracting, from the filter list, the filter coefficients corresponding to the head-related transfer functions, and using them as the left channel filter coefficients of the left ear earphone and the right channel filter coefficients of the right ear earphone, respectively.
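For illustration only: the mono case of claim 15 might be rendered as below, passing in a lookup helper of the kind sketched for claim 13; the names and filter-list layout are assumptions.

```python
# Mono-source rendering in the spirit of claim 15 (filter-list layout and helper names are assumed).
import numpy as np

def render_mono(mono, filter_list, left_pose, right_pose, select_coefficients):
    """left_pose / right_pose: (position, orientation) of each earphone relative to the sound source.
    select_coefficients: lookup into the filter list, e.g. the nearest-entry search sketched for claim 13."""
    left_coeffs = select_coefficients(filter_list, *left_pose)    # left earphone's left channel filter
    right_coeffs = select_coefficients(filter_list, *right_pose)  # right earphone's right channel filter
    left_out = np.convolve(mono, left_coeffs, mode="same")
    right_out = np.convolve(mono, right_coeffs, mode="same")
    return left_out, right_out
```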
16. The stereo processing system according to claim 14, wherein, when the audio signal is a binaural signal including a left channel and a right channel, the adjustment module is configured to perform the following:
for the left channel, respectively obtaining a first set of positions and orientations of a left ear headphone and a right ear headphone relative to the sound source, and for the right channel, respectively obtaining a second set of positions and orientations of the left ear headphone and the right ear headphone relative to the sound source;
determining a first set of head-related transformation functions based on the first set of positions and orientations, and a second set of head-related transformation functions based on the second set of positions and orientations;
extracting, from the filter list, left channel filter coefficients of the left ear headphone and left channel filter coefficients of the right ear headphone corresponding to the first set of head-related transfer functions, and extracting, from the filter list, right channel filter coefficients of the left ear headphone and right channel filter coefficients of the right ear headphone corresponding to the second set of head-related transfer functions; and
synthesizing the left channel audio signal filtered by the left channel filter of the left ear headphone and the right channel audio signal filtered by the right channel filter of the left ear headphone as the output of the left ear headphone, and synthesizing the left channel audio signal filtered by the left channel filter of the right ear headphone and the right channel audio signal filtered by the right channel filter of the right ear headphone as the output of the right ear headphone.
17. The stereo processing system according to claim 12, wherein the motion parameters comprise any one or more of angular velocity, acceleration, displacement, position, and orientation.
18. An earphone assembly, comprising at least a first earphone and a second earphone, wherein:
the first earphone detects its motion parameters using a detection device arranged on the first earphone, and transmits the detected motion parameters to the second earphone; and
the first earphone and the second earphone each adjust the audio signal to be played based on the motion parameters, such that the adjusted audio signals simulate the sound changes caused by motion of the respective earphone relative to the sound source position.
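For illustration only: an end-to-end sketch of the division of work in claim 18, with hypothetical callables standing in for the detection device, the inter-earphone link, and the adjustment step.

```python
# End-to-end illustration of claim 18 (all device APIs and names here are hypothetical).
import numpy as np

def first_earphone_step(read_imu, send_to_peer, adjust, audio_frame):
    params = read_imu()                  # detect motion parameters with the on-board detection device
    send_to_peer(params)                 # transmit the detected parameters to the second earphone
    return adjust(audio_frame, params)   # adjust the locally played audio

def second_earphone_step(receive_from_peer, adjust, audio_frame):
    params = receive_from_peer()         # motion parameters received from the first earphone
    return adjust(audio_frame, params)   # adjust using the shared parameters

# Example with trivial stand-ins:
if __name__ == "__main__":
    frame = np.zeros(480)
    out = first_earphone_step(lambda: {"yaw": 0.1},
                              lambda p: None,
                              lambda a, p: a,
                              frame)
    print(out.shape)  # (480,)
```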
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911377379.7A CN111142665B (en) | 2019-12-27 | 2019-12-27 | Stereo processing method and system for earphone assembly and earphone assembly |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111142665A (en) | 2020-05-12
CN111142665B (en) | 2024-02-06
Family
ID=70520957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911377379.7A Active CN111142665B (en) | 2019-12-27 | 2019-12-27 | Stereo processing method and system for earphone assembly and earphone assembly |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111142665B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9326092D0 (en) * | 1993-12-21 | 1994-02-23 | Central Research Lab Ltd | Apparatus and method for audio signal balance control |
CN1277532A (en) * | 1999-06-10 | 2000-12-20 | 三星电子株式会社 | Multiple-channel audio frequency replaying apparatus and method |
US20060008100A1 (en) * | 2004-07-09 | 2006-01-12 | Emersys Co., Ltd | Apparatus and method for producing 3D sound |
WO2008106680A2 (en) * | 2007-03-01 | 2008-09-04 | Jerry Mahabub | Audio spatialization and environment simulation |
CN101960866A (en) * | 2007-03-01 | 2011-01-26 | 杰里·马哈布比 | Audio spatialization and environmental simulation |
EP2928213A1 (en) * | 2014-04-04 | 2015-10-07 | GN Resound A/S | A hearing aid with improved localization of a monaural signal source |
US20150289063A1 (en) * | 2014-04-04 | 2015-10-08 | Gn Resound A/S | Hearing aid with improved localization of a monaural signal source |
GB201517844D0 (en) * | 2015-10-08 | 2015-11-25 | Two Big Ears Ltd | Binaural synthesis |
US20170223474A1 (en) * | 2015-11-10 | 2017-08-03 | Bender Technologies, Inc. | Digital audio processing systems and methods |
CN109660971A (en) * | 2018-12-05 | 2019-04-19 | 恒玄科技(上海)有限公司 | Wireless headset and communication means for wireless headset |
Non-Patent Citations (3)
Title |
---|
岳大为; 东楷涵; 刘作军; 王德峰: "Navigation method for a hazard-removal robot based on audio band-pass filtering" (基于音频带通滤波的排险机器人导航方法), no. 02 *
张宗帅; 顾亚平; 张俊; 杨小平: "HRTF-based virtual sound source localization" (基于HRTF的虚拟声源定位), no. 02 *
罗福元, 王行仁: "Three-dimensional sound in a virtual cockpit" (虚拟座舱中的三维音响), no. 03 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112235690A (en) * | 2020-10-13 | 2021-01-15 | 恒玄科技(上海)股份有限公司 | Audio signal adjusting method and device, earphone assembly and readable storage medium |
CN112612445A (en) * | 2020-12-28 | 2021-04-06 | 维沃移动通信有限公司 | Audio playing method and device |
CN114543844A (en) * | 2021-04-09 | 2022-05-27 | 恒玄科技(上海)股份有限公司 | Audio playing processing method and device of wireless audio equipment and wireless audio equipment |
CN114543844B (en) * | 2021-04-09 | 2024-05-03 | 恒玄科技(上海)股份有限公司 | Audio playing processing method and device of wireless audio equipment and wireless audio equipment |
CN117378220A (en) * | 2021-05-27 | 2024-01-09 | 高通股份有限公司 | Spatial audio mono via data exchange |
CN114363770A (en) * | 2021-12-17 | 2022-04-15 | 北京小米移动软件有限公司 | Filtering method and device in pass-through mode, earphone and readable storage medium |
CN114363770B (en) * | 2021-12-17 | 2024-03-26 | 北京小米移动软件有限公司 | Filtering method and device in pass-through mode, earphone and readable storage medium |
CN114422897A (en) * | 2022-01-12 | 2022-04-29 | Oppo广东移动通信有限公司 | Audio processing method and device, electronic equipment and storage medium |
CN114745637A (en) * | 2022-04-14 | 2022-07-12 | 刘道正 | Sound effect realization method of wireless audio equipment |
CN115379339A (en) * | 2022-08-29 | 2022-11-22 | 歌尔科技有限公司 | Audio processing method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111142665B (en) | 2024-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111142665B (en) | Stereo processing method and system for earphone assembly and earphone assembly | |
CN109660971B (en) | Wireless earphone and communication method for wireless earphone | |
CN111031437B (en) | Wireless earphone assembly and communication method thereof | |
CN109561419B (en) | The ears wireless headset of high reliability and communication means for ears wireless headset | |
CN109905925B (en) | The communication means of wireless device component and wireless device component | |
CN110769347B (en) | Synchronous playing method of earphone assembly and earphone assembly | |
CN111741401B (en) | Wireless communication method for wireless headset assembly and wireless headset assembly | |
US10798477B2 (en) | Wireless audio system and method for wirelessly communicating audio information using the same | |
CN110636487B (en) | Wireless earphone and communication method thereof | |
US11800312B2 (en) | Wireless audio system for recording an audio information and method for using the same | |
EP3439337A1 (en) | Wireless device communication | |
US11418297B2 (en) | Systems and methods including wireless data packet retransmission schemes | |
CN110708142A (en) | Audio data communication method, system and equipment | |
CN112039637B (en) | Audio data communication method, system and audio communication equipment | |
WO2021217723A1 (en) | Systems and methods for wireless transmission of audio information | |
US10778479B1 (en) | Systems and methods for wireless transmission of audio information | |
CN114747165B (en) | Data transmission method and device applied to Bluetooth communication | |
EP4184938A1 (en) | Communication method and device used for wireless dual earphones | |
CN111955018B (en) | Method and system for connecting an audio accessory device with a client computing device | |
CN112235690B (en) | Method and device for adjusting audio signal, earphone assembly and readable storage medium | |
CN113259803A (en) | Wireless earphone assembly and signal processing method thereof | |
CN114079537B (en) | Audio packet loss data receiving method, device, audio playing equipment and system | |
US20240305927A1 (en) | Method and system for transmitting audio signals | |
EP4325884A1 (en) | Head-mounted wireless earphones and communication method therefor | |
CN114079899A (en) | Bluetooth communication data processing circuit, packet loss processing method, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |