
US10091581B2 - Audio preferences for media content players - Google Patents


Info

Publication number
US10091581B2
Authority
US
United States
Prior art keywords
speakers
filters
user
media content
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/813,628
Other versions
US20170034621A1 (en)
Inventor
Gregory Mack Garner
Patrick Alan Brouillette
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Roku Inc
Original Assignee
Roku Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US14/813,628
Application filed by Roku Inc
Assigned to ROKU, INC. Assignors: BROUILLETTE, Patrick Alan; GARNER, GREGORY MACK
Priority to PCT/US2016/044053 (WO2017019690A1)
Priority to EP16831241.1A (EP3329693A4)
Publication of US20170034621A1
Assigned to SILICON VALLEY BANK (amended and restated intellectual property security agreement). Assignor: ROKU, INC.
Priority to US16/148,366 (US10827264B2)
Publication of US10091581B2
Application granted
Assigned to MORGAN STANLEY SENIOR FUNDING, INC. (patent security agreement). Assignor: ROKU, INC.
Assigned to ROKU, INC. (release by secured party). Assignor: SILICON VALLEY BANK, AS BANK
Assigned to ROKU, INC. (termination and release of intellectual property security agreement, reel/frame 048385/0375). Assignor: MORGAN STANLEY SENIOR FUNDING, INC.
Assigned to CITIBANK, N.A. (security interest). Assignor: ROKU, INC.
Legal status: Active
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY; H03G: CONTROL OF AMPLIFICATION
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04S: STEREOPHONIC SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H04R 3/04: Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H03G 5/00: Tone control or bandwidth control in amplifiers
    • H03G 5/02: Manually-operated control
    • H03G 5/025: Equalizers; Volume or gain control in limited frequency bands
    • H03G 5/16: Automatic control
    • H03G 5/165: Equalizers; Volume or gain control in limited frequency bands
    • H04R 29/00: Monitoring arrangements; Testing arrangements
    • H04R 29/001: Monitoring arrangements; Testing arrangements for loudspeakers
    • H04S 7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30: Control circuits for electronic adaptation of the sound field
    • H04S 7/301: Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H04S 7/307: Frequency adjustment, e.g. tone control
    • H04R 2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R 2430/01: Aspects of volume control, not necessarily automatic, in sound systems
    • H04S 2400/00: Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/13: Aspects of volume control, not necessarily automatic, in stereophonic sound systems

Definitions

  • Volume statistics 1018 for a given media/content may vary by time of day. For example, the volume may be lower late at night relative to early evening. Accordingly, aggregate server 1006 may store volume statistics 1018 for a given content based on time of day. Aggregate server 1006 may also select and provide volume statistics 1018 to media content player 112 based on time of day.
  • Memory 1016 of aggregate server 1006 may include a cluster algorithm 1020, according to an embodiment. The cluster algorithm 1020 causes CPU 1012 to process volume statistics 1018. Volume statistics 1018 may include N sections of volume statistics 1018 for a particular media for each of environments 1002-N. These volume statistics 1018 may be aggregated to form a crowd-sourced repository of volume level changes associated with different points in time in the playback of media.
  • FIG. 11 is a method 1100 for monitoring, recording, and applying volume statistics for user 102, according to an example embodiment. Media content player 112 may receive media from aggregate server 1006 or some other source. Media content player 112 may then check whether the received media includes volume statistics 1018. If yes, then in step 1106 media content player 112 plays the received media on television 108/204 and, while doing so, continually adjusts the volume according to the received volume statistics 1018. If not, then in step 1108 media content player 112 plays the received media using the audio information included therein. Alternatively, media content player 112 may request volume statistics 1018 from aggregate server 1006 and then perform step 1106 upon receipt.
  • In step 1110, while playing the received media, media content player 112 monitors and records as the user 102 changes the volume level. Media content player 112 correlates any volume changes to the portions of the media being viewed at the time the volume changes were made. In step 1114, media content player 112 uploads that volume information to aggregate server 1006.
  • FIG. 12 is a method 1200 for analyzing volume statistics, according to an example embodiment. Aggregate server 1006 receives volume statistics from media content players 112. The volume statistics are mapped to portions of media that were being viewed when volume changes took place. Aggregate server 1006 then aggregates and stores the aggregated volume statistics 1018. In an embodiment, aggregate server 1006 aggregates by organizing and averaging volume information by media timeline. For example, assume aggregate server 1006 receives information reflecting multiple volume changes at time t5 of a given content. In an embodiment, aggregate server 1006 averages those volume changes to generate the aggregate volume information 1018 for time t5 of that given content (see the sketch after this list).
  • In step 1206, media content player 112 requests content. Aggregate server 1006 determines if volume statistics 1018 exist for the requested content. If yes, then in step 1210 aggregate server 1006 provides such volume statistics 1018 to media content player 112. Aggregate server 1006 may also provide the requested content to media content player 112. If not, aggregate server 1006 may provide only the requested content to the media content player 112.
  • Computer system 1300 can be any well-known computer capable of performing the functions described herein, such as computers available from International Business Machines, Apple, Sun, HP, Dell, Sony, Toshiba, etc. Computer system 1300 includes one or more processors (also called central processing units, or CPUs), such as a processor 1304. Processor 1304 is connected to a communication infrastructure or bus 1306.
  • Computer system 1300 also includes user input/output device(s) 1303, such as monitors, keyboards, pointing devices, etc., which communicate with communication infrastructure 1306 through user input/output interface(s) 1302.
  • Computer system 1300 also includes a main or primary memory 1308, such as random access memory (RAM). Main memory 1308 may include one or more levels of cache. Main memory 1308 has stored therein control logic (i.e., computer software) and/or data.
  • Computer system 1300 may also include one or more secondary storage devices or memory 1310. Secondary memory 1310 may include, for example, a hard disk drive 1312 and/or a removable storage device or drive 1314. Removable storage drive 1314 may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, a tape backup device, and/or any other storage device/drive.
  • Removable storage drive 1314 may interact with a removable storage unit 1318. Removable storage unit 1318 includes a computer-usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit 1318 may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive 1314 reads from and/or writes to removable storage unit 1318 in a well-known manner.
  • Secondary memory 1310 may include other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 1300. Such means, instrumentalities or other approaches may include, for example, a removable storage unit 1322 and an interface 1320. Examples include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface.
  • Computer system 1300 may further include a communication or network interface 1324. Communication interface 1324 enables computer system 1300 to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number 1328). For example, communication interface 1324 may allow computer system 1300 to communicate with remote devices 1328 over communications path 1326, which may be wired and/or wireless, and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system 1300 via communication path 1326.
  • A tangible apparatus or article of manufacture comprising a tangible computer-useable or readable medium having control logic (software) stored thereon is also referred to herein as a computer program product or program storage device. The control logic, when executed by one or more data processing devices (such as computer system 1300), causes such data processing devices to operate as described herein.
  • References herein to "one embodiment," "an embodiment," "an example embodiment," or similar phrases indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein.
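
A minimal sketch of the per-timestamp averaging described above. The (timestamp, volume delta) report format is an assumption for illustration; the patent does not specify how volume statistics 1018 are encoded:

```python
from collections import defaultdict

def aggregate_volume_changes(reports):
    """Average reported volume changes per playback timestamp.
    `reports` is an iterable of (timestamp_seconds, volume_delta) pairs
    uploaded by media content players 112."""
    buckets = defaultdict(list)
    for timestamp, delta in reports:
        buckets[timestamp].append(delta)
    return {t: sum(deltas) / len(deltas) for t, deltas in sorted(buckets.items())}

# Three players raised the volume at the same point in the content:
stats = aggregate_volume_changes([(305, 4), (305, 6), (305, 5)])
# stats == {305: 5.0}
```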

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)

Abstract

Disclosed herein are a system, a method, and a tangible computer-readable medium for creating a desired audio effect for a user. The method includes operations including: causing a plurality of speakers to play test signals, each test signal being specific to one of the speakers; receiving from a remote device recorded frequency responses of the speakers resulting from the playing of the test signals; creating one or more filters to match an audio profile selected by a user; applying the filters to the recorded frequency responses to obtain filtered transformations of the speakers; and transmitting the filtered transformations to the speakers; wherein the filtered transformations are applied at the speakers to thereby achieve the user audio profile.

Description

FIELD OF THE INVENTION
Embodiments included herein generally relate to creating a desired listening experience for users in home entertainment systems. More particularly, embodiments relate to creating the desired listening experience for the users by use of a media remote device in conjunction with a media content player, a television and a plurality of speakers.
BACKGROUND OF THE INVENTION
When experiencing media/content having an audio component (e.g., movies, video, music, games, Internet content, etc.), different users may desire different listening experiences. For example, users may differ in their preferences for volume and other audio features (bass, treble, balance, etc.), sound mode (movie, music, surround decoder, direct playback, unprocessed, etc.), movie mode (standard, sci-fi, adventure, drama, sports, etc.), music mode (concert hall, chamber, cellar club, music video, 2 channel stereo, etc.), as well as any other audio characteristics.
Media/content such as movies, however, have a default audio track typically established by the content provider. Thus, what is needed is a way to customize the audio component of media/content to suit the listening preferences of different users.
SUMMARY OF THE INVENTION
An embodiment includes a method for creating a desired audio effect. The method operates by causing a plurality of speakers to play test signals, where each test signal is specific to one of the speakers. Frequency responses of the speakers resulting from the playing of the test signals are recorded. One or more filters matching an audio profile selected by a user are created. Then, the filters are applied to the recorded frequency responses to obtain filtered transformations of the speakers. The filtered transformations are applied at the speakers to thereby achieve the user audio profile.
Another embodiment includes a system having a media content player that is operable to cause a plurality of speakers to play test signals, where each test signal is specific to one of the speakers. Frequency responses of the speakers resulting from the playing of the test signals are recorded. The media content player creates one or more filters matching an audio profile selected by a user. The media content player applies the filters to the recorded frequency responses to obtain filtered transformations of the speakers. The filtered transformations are applied at the speakers to thereby achieve the user audio profile.
Another embodiment includes a tangible computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the computing device to perform operations comprising: causing a plurality of speakers to play test signals, each test signal being specific to one of the speakers; receiving from a remote device recorded frequency responses of the speakers resulting from the playing of the test signals; creating one or more filters to match an audio profile selected by a user; applying the filters to the recorded frequency responses to obtain filtered transformations of the speakers; and transmitting the filtered transformations to the speakers; wherein the filtered transformations are applied at the speakers to thereby achieve the user audio profile.
Another embodiment includes a method of using aggregated volume information to enhance audio playback of content. The method includes the steps of receiving requested content from a server; determining if the received content includes aggregate volume statistics; playing the received content; continuously adjusting the volume of the received content based on the aggregate volume statistics, if it was determined that the received content included aggregate volume statistics; monitoring changes in volume made by a user while viewing the received content; and providing information reflecting the monitored volume changes to the server.
Further features and advantages of the embodiments disclosed herein, as well as the structure and operation of various embodiments, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to a person skilled in the relevant art based on the teachings contained herein.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated herein, form a part of the specification.
FIG. 1 illustrates a home entertainment system 100 for creating a desired audio effect for a user, according to an example embodiment.
FIG. 2 illustrates a home entertainment system 100 for creating a desired audio effect for a user, according to another example embodiment.
FIG. 3 illustrates a home entertainment system 100 for creating a desired audio effect for a user, according to still another example embodiment.
FIG. 4 illustrates a media content player, according to an example embodiment.
FIG. 5 illustrates a wireless and/or wired speaker, according to an example embodiment.
FIG. 6 illustrates a media remote device, according to an example embodiment.
FIG. 7 illustrates an example chirp waveform.
FIG. 8 illustrates an example frequency response of a chirp waveform.
FIG. 9 is a flowchart for creating a desired audio effect for a user, according to an example embodiment.
FIG. 10 illustrates a network environment 1000 for analyzing volume statistics for users, according to an example embodiment.
FIG. 11 is a flowchart for monitoring, recording and transmitting volume statistics for users, according to an example embodiment.
FIG. 12 is a flowchart for analyzing volume statistics for users, according to an example embodiment.
FIG. 13 is an example computer system useful for implementing various embodiments.
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
DETAILED DESCRIPTION OF THE INVENTION
Provided herein are system, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for creating an improved audio experience for a user.
Customizing Audio Components to Match User's Audio Profile
FIG. 1 illustrates a home entertainment system 100 for creating an improved audio experience for a user. In an embodiment, the home entertainment system 100 is a multi-speaker home theater environment where a user 102 sits in an appropriate viewing location in chair 104 in view of media content television 108 with a media remote device 106.
Media remote device 106 may be, for example, a media player/television remote, a wireless device, a smartphone, a tablet computer, a laptop/mobile computer, a handheld computer, a server computer, an in-appliance device, a Personal Digital Assistant (PDA), or a videogame controller.
The media content television 108 may include one or more internal speakers 110, according to an embodiment. Further, the media content television 108 may include a media content player 112, according to an embodiment. The media content player 112 may be, without limitation, a streaming media player, a game console, and/or an audio/video receiver, according to example embodiments.
The home entertainment system 100 may include any number of wireless and/or wired speakers 122. In an embodiment, the wireless and/or wired speakers 122 may include front speakers, rear speakers, and a center channel speaker. In an example embodiment, the user 102 may place the speakers 122 in any location and/or configuration.
FIG. 2 illustrates another embodiment of the home entertainment system 100. FIG. 2 is similar to FIG. 1, but shows media content player 112 as external to the television 204.
FIG. 3 is another embodiment of the home entertainment system 100. FIG. 3 is similar to FIG. 1, but includes additional components in media entertainment system 302.
In particular, in the example of FIG. 3, media content player 112 is connected to a stereo 304. The audio and video output of stereo 304 may be connected to television 204. A line out of stereo 304 may be a non-amplified signal output port connected to a cascaded device 306 for sound enhancements, according to an example embodiment. The cascaded device 306 may transmit its output to wireless and/or wired speakers 122, according to an embodiment. The cascaded device 306 may include but not be limited to a preamplifier, an equalizer device, a microphone, a speaker, a tablet computer, a personal desktop, a laptop/mobile computer, a handheld computer, a server computer, or an in-appliance device, according to example embodiments.
FIG. 4 illustrates media content player 112, according to an example embodiment. The media content player 112 may include a TV System on a Chip (TV SOC) 402, a transmitter 404, a receiver 406, and a Network Interface Circuit (NIC) 408. According to an embodiment, the TV SOC 402 communicates with the transmitter 404 and the receiver 406. The TV SOC 402 may be configured to receive streaming video from NIC 408 and apply a high-efficiency video codec to the received video stream, according to an embodiment. For example, the TV SOC 402 may use the High Efficiency Video Coding (HEVC)/H.265 standard. The TV SOC 402 may transmit the resulting video to television 204 or to the media content television 108.
FIG. 5 illustrates wireless and/or wired speakers 122, according to an example embodiment. The wireless and/or wired speakers 122 may include a receiver 502, amplifier circuitry 504, a speaker 506, and a transmitter 508.
FIG. 6 illustrates a media remote device 106, according to an example embodiment. The media remote device 106 may include interactive buttons 602 (e.g., volume, channel, up-down-left-right arrows, select, menu, etc.), a transmitter 604, a receiver 606, a microphone A 608, a microphone B 610, and a Central Processing Unit (CPU) 612.
According to an embodiment, microphone A 608 is configured to capture human voice and has a frequency response range of 300 Hertz (Hz) to 3000 Hz. Microphone B 610 is configured to capture background noise and has a frequency response range of 20 Hz to 20 kilohertz (kHz). CPU 612 operates to distinguish between voice audio and background noise received by microphone A 608 and microphone B 610.
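The patent does not spell out how CPU 612 performs this discrimination. Purely as an illustration, a frame of samples could be classified by comparing the energy inside the 300-3000 Hz voice band against the frame's total energy; the frame-based approach, the helper name, and the 0.6 threshold below are all assumptions, not anything defined by the patent:

```python
import numpy as np

def classify_frame(samples, rate, voice_band=(300.0, 3000.0)):
    """Label a frame of audio as voice-dominated or background-dominated
    by the share of its energy that falls inside the voice band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    in_band = (freqs >= voice_band[0]) & (freqs <= voice_band[1])
    voice_ratio = spectrum[in_band].sum() / max(spectrum.sum(), 1e-12)
    return "voice" if voice_ratio > 0.6 else "background"  # 0.6: assumed threshold
```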
FIG. 9 is a method 900 for creating a desired audio experience for a user, according to an example embodiment. Method 900 can be performed using, for example, system 100 of FIGS. 1-3.
Generally, method 900 operates to configure the components of system 100 so as to customize the audio experience for user 102. First, system 100 determines the user 102's audio preferences (this is generally covered by steps 902-908). Second, system 100 determines the current audio response of the components of system 100 (this is generally covered by steps 910-934). Third and finally, system 100 modifies the audio response of the components of system 100 so as to align such audio response with the user 102's audio preferences (this is generally covered by steps 936-946). Method 900 shall now be described in detail.
In steps 902 and 904, user 102 turns on the television 108/204 and media content player 112 (in an embodiment, media content player 112 will automatically turn on when the television 108/204 turns on, if these components are integrated into a single unit as shown in FIG. 1).
In step 906, the user 102 uses remote 106 to communicate with media content player 112 and set his audio preferences. In an embodiment, the media content player 112 displays a series of menus on television 108/204 to enable the user 102 to select and define a plurality of audio effects. Such audio effects include but are not limited to: delay between front and back speakers; delay between right and left speakers; volume and other audio features (bass, treble, balance, midrange, fading, etc.); sound mode (movie, music, surround decoder, direct playback, unprocessed, etc.); movie mode (standard, sci-fi, adventure, drama, sports, etc.); music mode (concert hall, chamber, cellar club, music video, 2 channel stereo, etc.); as well as any other audio characteristics. The media content player 112 saves the user 102's audio preferences (which are also called the user 102's audio profile).
In an embodiment, the user 102 may define different audio profiles for different types of content, such as different types of movies (e.g., action, drama, comedies, etc.), different types of music (e.g., pop, country, alternative, etc.), different types of venues (e.g., stadium, concert hall, intimate night club, etc.), different types of technical features (subwoofer on/off; rear speakers on/off; 2 channel mono; etc.), as well as any other combination of audio features the user 102 may wish to define.
Also in step 906, user 102 selects one of his audio profiles.
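As a concrete illustration of what a saved audio profile might hold, the sketch below collects the preferences listed above into a simple record. The field names, types, and defaults are assumptions for illustration; the patent does not define a storage format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioProfile:
    """Hypothetical container for a saved audio profile (step 906)."""
    name: str
    bass: int = 0                  # relative boost/cut
    treble: int = 0
    balance: int = 0               # negative = left, positive = right
    sound_mode: str = "direct playback"
    movie_mode: str = "standard"
    music_mode: str = "2 channel stereo"
    max_audible_hz: Optional[float] = None  # filled in by the hearing test below

profiles = {
    "action movies": AudioProfile("action movies", bass=6, movie_mode="sci-fi"),
    "late night": AudioProfile("late night", bass=-3, sound_mode="unprocessed"),
}
```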
The room in which the user 102 is seated may include acoustic anomalies, such as room configuration or furniture that affect acoustics. Also, the acoustic anomalies may include the frequency response of speakers 110, 122 and 206, as well as the frequency response of the interaction between speakers 110, 122 and 206. Another acoustic anomaly may be coupling, reflections, or echoes from interaction between speakers 110, 122 and 206 and the walls of the home entertainment system 100. An additional acoustic anomaly may be audio effects caused by dynamic conditions of temperature, humidity and changing absorption.
These acoustic anomalies may not be detected by media content player 112 if background noise is present. Thus, in step 908, media content player 112 prompts the user 102 to silence any background noise. Also, media content player 112 turns off any background noise reduction algorithms in components of system 100.
In step 910, media content player 112 prompts the user 102 to ensure the speakers 110, 122, 206 and stereo 304 are placed in their desired position. In an embodiment, user 102 can place speakers 110, 122, 206 and stereo 304 in any desired location and configuration. In the following steps, regardless of the location of speakers 110, 122, 206 or stereo 304, media content player 112 will accordingly adjust the operation of components of system 100 to achieve the user 102's selected audio profile.
In step 912, media content player 112 prompts the user 102 to place the media remote device 106 in the desired position. In an embodiment, the desired position of remote 106 is where the user 102 will normally sit (i.e., chair 104).
In step 914, media content player 112 may instruct the user 102 to remain stationary. Media content player 112 also may instruct user 102 to keep the remote 106 stationary. Having both the user 102 and remote 106 stationary during the following steps may enhance the ability of media content player 112 to achieve the user 102's selected audio profile.
As people age, gradual hearing loss may occur. In an embodiment, media content player 112 may compensate for such hearing degradation. Accordingly, in step 916, media content player 112 transmits a tone to test the audible hearing frequency range of user 102. Media content player 112 may transmit tones stepping in increments of frequency until the tones are no longer audible. The process may begin by transmitting the tone at the lowest frequency, according to an embodiment. In an alternative embodiment, the process may begin by transmitting the tone at the highest frequency.
In step 918, media content player 112 asks the user 102 if the tone was audible. If user 102 answers “Yes” via the remote 106, then in step 920 the media content player 112 determines if the last transmitted tone was at the maximum allowable frequency. If not, then in step 922 the frequency is increased by some increment (such as 10 Hz in a non-limiting example), and in step 916 the tone at the higher frequency is transmitted.
If at step 918 the user 102 answered “No,” then in step 924 the media content player 112 stores the maximum audible frequency in the user 102's audio profile. In an embodiment, media content player 112 uses the maximum audible frequency as a threshold for user 102. Specifically, in an embodiment, media content player 112 may not play sounds above the maximum audible frequency when the user 102's selected audio profile is being used.
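In code, the tone-sweep loop of steps 916 through 924 might look like the following sketch, where play_tone and ask_audible are hypothetical callbacks standing in for the speaker output and the remote's Yes/No prompt:

```python
def find_max_audible_hz(play_tone, ask_audible,
                        start_hz=20.0, max_hz=20000.0, step_hz=10.0):
    """Raise the tone frequency in step_hz increments (step 922) until the
    user reports it inaudible (step 918) or max_hz is reached (step 920).
    Returns the last audible frequency, stored in the profile in step 924."""
    freq = start_hz
    last_audible = None
    while freq <= max_hz:
        play_tone(freq)
        if not ask_audible():
            break
        last_audible = freq
        freq += step_hz
    return last_audible
```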
In step 926, media content player 112 transmits a test signal from transmitter 404 to speakers 110, 122 and 206. In an embodiment, media content player 112 may send a different test signal to different speakers 110, 122 and 206. In step 928, speakers 110, 122 and 206 receive the test signal(s) via their respective receiver 502. According to an embodiment, the test signal may include a chirp signal, an example of which is shown in FIG. 7. A chirp signal is a sinusoid that sweeps rapidly from a starting frequency to an end frequency. The chirp waveform may range in amplitude from +1 volt to −1 volt. A desirable feature of the chirp waveform is its small crest factor and flat frequency response. The flat frequency response of the chirp signal, as shown in FIG. 8, allows all frequency components of a system to be tested equally.
In an alternative embodiment, the test signal may include a step signal. A step signal may be useful to evaluate the transient response of a system under test.
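A minimal sketch of generating both test signals with NumPy/SciPy follows; the 48 kHz sample rate, one-second duration, and 20 Hz to 20 kHz sweep range are assumed values, not parameters from the patent:

```python
import numpy as np
from scipy.signal import chirp

RATE = 48000  # assumed sample rate in Hz

def make_chirp(duration=1.0, f0=20.0, f1=20000.0):
    """Linear sine sweep from f0 to f1; scipy's chirp already spans the
    +/-1 amplitude range described for the waveform of FIG. 7."""
    t = np.linspace(0.0, duration, int(RATE * duration), endpoint=False)
    return chirp(t, f0=f0, t1=duration, f1=f1, method="linear")

def make_step(duration=1.0, amplitude=1.0, onset=0.5):
    """Step signal: silence, then a constant level starting at `onset`
    seconds, useful for probing a system's transient response."""
    signal = np.zeros(int(RATE * duration))
    signal[int(RATE * onset):] = amplitude
    return signal
```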
In step 930, speakers 110, 122 and 206 play their respective test signal. In an embodiment, for each speaker 110, 122 and 206, amplifier circuitry 504 plays the test signal via speaker 506. In an embodiment, speakers 110, 122, and 206 sequentially play the received test signal. Alternatively, the speakers 110, 122 and 206 play the test signal at the same time.
Also in step 930, the remote 106 receives the test signal played by speakers 110, 122 and 206 via microphone A 608 and microphone B 610. In doing so, remote 106 processes and/or records the frequency response of speakers 110, 122 and 206. Where the test signals differ by speaker 110, 122 and 206, remote 106 also timestamps when it received the test signal from each speaker 110, 122 and 206.
In step 932, media content player 112 receives the recorded frequency responses of speakers 110, 122 and 206 from media remote device 106. Media content player 112 also receives from remote 106 the timestamps (when available).
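One simple way the recorded frequency responses could be derived is frequency-domain deconvolution of the known test signal, as in the sketch below. This is an assumed method for illustration; it ignores the windowing, averaging, and noise handling a production measurement would need:

```python
import numpy as np

def magnitude_response(recorded, reference, eps=1e-12):
    """Estimate a speaker's magnitude response as
    |FFT(recorded)| / |FFT(reference test signal)|."""
    n = max(len(recorded), len(reference))
    num = np.abs(np.fft.rfft(recorded, n))
    den = np.abs(np.fft.rfft(reference, n))
    return num / np.maximum(den, eps)
```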
In step 934, media content player 112 calculates the respective distances of the remote 106 to each of the speakers 110, 122, and 206. In an embodiment, media content player 112 calculates these distances based on the delay of the test signal between when the test signals were issued by media content player 112 (in step 926) and heard by remote 106 (in step 930). In an embodiment, because speakers 110, 122 and 206 were assigned different test signals, media content player 112 is able to calculate the distance for each.
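Given a shared time base between the player and the remote (the patent does not detail how the two clocks are aligned), the distance computation reduces to multiplying the one-way delay by the speed of sound:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def speaker_distance_m(t_issued_s, t_heard_s):
    """Distance from remote 106 to one speaker, from the delay between
    step 926 (signal issued) and step 930 (signal heard at the remote)."""
    return SPEED_OF_SOUND * (t_heard_s - t_issued_s)

# Example: a 23.3 ms delay implies 343 * 0.0233, i.e. roughly 8 meters.
```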
In step 936, media content player 112 creates a filter(s) for the selected audio profile. The filter(s) may take into consideration the respective frequency responses (from step 930) and distances (from step 934) of the speakers 110, 122 and 206. As discussed below, the filter(s) will operate to transform the frequency response of each speaker 110, 122 and 206 to the user 102's selected audio profile. According to an embodiment, media content player 112 generates a linear time invariant (LTI) filter for the selected audio profile. The LTI filter is generated based on a frequency response of the selected audio profile (or each component thereof) and will be respectively convolved with the recorded frequency response of speakers 110, 122 and 206. In doing so, media content player 112 strives to modify the frequency response of speakers 110, 122 and 206 to match the user 102's selected audio profile. In an embodiment, media content player 112 may also strive to eliminate unwanted frequency components in the recorded frequency responses of speakers 110, 122 and 206. Such unwanted frequency components may be the result of acoustic anomalies discussed above.
Embodiments may use two types of LTI filters: Finite Impulse Response (FIR) filters and/or Infinite Impulse Response (IIR) filters. An advantage of using a FIR filter is its ability to reduce or eliminate phase distortion. As such, a FIR filter may be generated by media content player 112 to reduce or eliminate phase adjustments. Alternatively, a FIR filter may be designed to create particular desired phase adjustments. For example, the FIR filter may be designed for the theater environment audio effect, where a phase adjustment of around 70 degrees may be desirable. An advantage of an IIR filter is that it may be more computationally efficient.
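The trade-off can be seen in a minimal design sketch; the filter orders, cutoff frequency, and sample rate below are illustrative assumptions.

```python
# Illustrative FIR/IIR trade-off (orders, cutoff, and sample rate assumed).
from scipy.signal import firwin, butter

fs = 48_000
cutoff = 8_000  # Hz

# Linear-phase FIR low-pass: no phase distortion, but 255 taps per sample.
fir_taps = firwin(numtaps=255, cutoff=cutoff, fs=fs)

# IIR (Butterworth) low-pass: comparable magnitude roll-off from a
# 4th-order design with only 10 coefficients, but nonlinear phase.
b, a = butter(N=4, Wn=cutoff, fs=fs)
print(len(fir_taps), len(b) + len(a))  # 255 vs 10
```

Here the linear-phase FIR costs 255 multiply-accumulates per output sample, while the IIR achieves a similar magnitude response with 10 coefficients at the cost of phase distortion.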
In another embodiment, media content player 112 may generate a combination of IIR and FIR filters for each speaker 110, 122 and 206 in step 936.
Also, in step 936, media content player 112 may generate filters to compensate for delays associated with the distances of speakers 110, 122 and 206 determined in step 934.
In step 938, media content player 112 applies the filters to the frequency responses of speakers 110, 122 and 206 to create a filter transformation for each of the speakers 110, 122 and 206.
In step 940, media content player 112 transmits the filter transformations to their respective speakers 110, 122 and 206, as well as to stereo 304.
In step 942, speakers 110, 122 and 206 and stereo 304 respectively receive the filter transformations.
In step 944, the filter transformations are applied to each of the speakers 110, 122 and 206 and the stereo 304. In an embodiment, each speaker 110, 122 and 206 processes and applies the received filter transformation with amplifier circuitry 504. In an alternative embodiment, the filter transformations may be applied by stereo 304. In an embodiment, the stereo 304 may apply the filter transformations to the line out, which is connected to the cascaded device 306.
In step 946, user 102 selects content from the media content player 112. In response, media content player 112 plays the content on television 108/204 using the selected audio profile in the manner discussed above.
Crowd-Sourcing Volume Information for Enhanced Playback of Content
In another aspect of embodiments of the invention, FIG. 10 illustrates a network environment for analyzing and applying volume statistics for users 102. The system 1000 includes a set of environments 1002-1 through 1002-N, a network 1004, and an aggregate server 1006, according to an embodiment. Each environment 1002-1 through 1002-N includes a user 102 with a television 108/204 and media content player 112, similar to that shown in FIGS. 1-3. Media content players 112 are configured to access aggregate server 1006 as described below. In an embodiment, aggregate server 1006 includes a network interface circuit (NIC) 1008, an input/output (I/O) device 1010, a central processing unit (CPU) 1012, a bus 1014, and a memory 1016.
As further described below, users 102 experience media/content using media content players 112. In doing so, users 102 use remotes 106 to change the volume level. In an embodiment, media content players 112 monitor and record the volume levels of users 102 and correlate these volume levels with the content being watched. Media content players 112 upload these volume statistics 1018 to aggregate server 1006. Later, when a user 102 wishes to view content, the associated media content player 112 may access volume statistics 1018 for that content from aggregate server 1006. If any volume statistics 1018 exist, media content player 112 may download and apply the volume statistics 1018 while presenting the content to the user 102.
In an embodiment, volume statistics 1018 for a given media/content may vary by time of day. For example, the volume may be lower late at night relative to early evening. Accordingly, aggregate server 1006 may store volume statistics 1018 for a given content based on time of day. Aggregate server 1006 may also select and provide volume statistics 1018 to media content player 112 based on time of day.
Memory 1016 of aggregate server 1006 may include a cluster algorithm 1020, according to an embodiment. The cluster algorithm 1020 causes CPU 1012 to process volume statistics 1018. Such volume statistics 1018 may include sections of volume statistics 1018 for a particular media from each of environments 1002-1 through 1002-N. These volume statistics 1018 may be aggregated to form a crowd-sourced repository of volume level changes associated with different points in time in the playback of media.
FIG. 11 illustrates a method 1100 for monitoring, recording, and applying volume statistics for user 102, according to an example embodiment.
In step 1102, media content player 112 may receive media from aggregate server 1006 or some other source, according to an example embodiment.
In step 1104, media content player 112 may check whether the received media includes volume statistics 1018. If yes, then in step 1106 media content player 112 plays the received media on television 108/204 and, while doing so, continually adjusts the volume according to the received volume statistics 1018. If not, then in step 1108 media content player 112 plays the received media using the audio information included therein. Alternatively, media content player 112 may request volume statistics 1018 from aggregate server 1006 and then perform step 1106 upon receipt.
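A minimal sketch of the lookup performed in step 1106 follows; the statistics layout and names are assumptions for illustration.

```python
# Illustrative lookup for step 1106: apply the crowd-sourced volume for the
# current playback position, else keep the current level (names assumed).
def volume_for_position(stats: dict, content_id: str,
                        position_sec: float, current_volume: int) -> int:
    """stats maps (content_id, whole-second position) -> average volume."""
    return stats.get((content_id, round(position_sec)), current_volume)

# During playback, the player would poll once per second, e.g.:
# player.set_volume(volume_for_position(stats, "movie-123", pos, vol))
```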
In step 1110, while playing the received media, media content player 112 monitors and records volume level changes made by the user 102. Media content player 112 correlates any volume changes to the portions of the media being viewed at the time the changes were made.
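The record kept in step 1110 might resemble the following sketch, where the structure and names are assumptions for illustration.

```python
# Illustrative record for step 1110: log each volume change against the
# playback position at which it occurred (structure and names assumed).
from dataclasses import dataclass, field

@dataclass
class VolumeLog:
    content_id: str
    changes: list = field(default_factory=list)  # (position_sec, new_volume)

    def on_volume_change(self, position_sec: float, new_volume: int) -> None:
        self.changes.append((position_sec, new_volume))

log = VolumeLog(content_id="movie-123")
log.on_volume_change(305.0, 12)   # turned down during a loud scene
log.on_volume_change(1810.6, 18)  # back up for quiet dialogue
# log.changes is what step 1114 would upload to aggregate server 1006.
```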
If volume changes were made (as determined in step 1112), then in step 1114 media content player 112 uploads that volume information to aggregate server 1006.
FIG. 12 illustrates a method 1200 for analyzing volume statistics, according to an example embodiment.
In step 1202, aggregate server 1006 receives volume statistics from media content players 112. The volume statistics are mapped to portions of media that were being viewed when volume changes took place.
In step 1204, aggregate server 1006 aggregates and stores the aggregated volume statistics 1018. In an embodiment, aggregate server 1006 aggregates by organizing and averaging volume information by media timeline. For example, assume aggregate server 1006 receives information reflecting multiple volume changes at time t5 of a given content. In an embodiment, aggregate server 1006 averages those volume changes to generate the aggregate volume information 1018 for time t5 of that given content.
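This averaging can be sketched as follows; the bucketing of positions to whole seconds and the dictionary layout are illustrative assumptions, and the key could additionally include a time-of-day bucket per the embodiment above.

```python
# Illustrative aggregation for step 1204: average uploaded volume changes
# per content and timeline point. Positions are bucketed to whole seconds.
from collections import defaultdict
from statistics import mean

def aggregate(uploads):
    """uploads: iterable of (content_id, position_sec, new_volume)."""
    by_point = defaultdict(list)
    for content_id, position_sec, volume in uploads:
        by_point[(content_id, round(position_sec))].append(volume)
    return {key: mean(vols) for key, vols in by_point.items()}

stats = aggregate([("movie-123", 305.2, 12),
                   ("movie-123", 304.9, 10),
                   ("movie-123", 1810.6, 18)])
# {('movie-123', 305): 11, ('movie-123', 1811): 18}
```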
In step 1206, media content player 112 requests content.
In step 1208, aggregate server 1006 determines if volume statistics 1018 exist for the requested content. If yes, then in step 1210 aggregate server 1006 provides such volume statistics 1018 to media content player 112. Aggregate server 1006 may also provide the requested content to media content player 112.
Otherwise, in step 1212 aggregate server 1006 may provide only the requested content to the media content player 112.
Example Computer System
Various embodiments can be implemented, for example, using one or more well-known computer systems, such as computer system 1300 shown in FIG. 13. Computer system 1300 can be any well-known computer capable of performing the functions described herein, such as computers available from International Business Machines, Apple, Sun, HP, Dell, Sony, Toshiba, etc.
Computer system 1300 includes one or more processors (also called central processing units, or CPUs), such as a processor 1304. Processor 1304 is connected to a communication infrastructure or bus 1306.
Computer system 1300 also includes user input/output device(s) 1303, such as monitors, keyboards, pointing devices, etc., which communicate with communication infrastructure 1306 through user input/output interface(s) 1302.
Computer system 1300 also includes a main or primary memory 1308, such as random access memory (RAM). Main memory 1308 may include one or more levels of cache. Main memory 1308 has stored therein control logic (i.e., computer software) and/or data.
Computer system 1300 may also include one or more secondary storage devices or memory 1310. Secondary memory 1310 may include, for example, a hard disk drive 1312 and/or a removable storage device or drive 1314. Removable storage drive 1314 may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, a tape backup device, and/or any other storage device/drive.
Removable storage drive 1314 may interact with a removable storage unit 1318. Removable storage unit 1318 includes a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit 1318 may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive 1314 reads from and/or writes to removable storage unit 1318 in a well-known manner.
According to an exemplary embodiment, secondary memory 1310 may include other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 1300. Such means, instrumentalities or other approaches may include, for example, a removable storage unit 1322 and an interface 1320. Examples of the removable storage unit 1322 and the interface 1320 may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface.
Computer system 1300 may further include a communication or network interface 1324. Communication interface 1324 enables computer system 1300 to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number 1328). For example, communication interface 1324 may allow computer system 1300 to communicate with remote devices 1328 over communications path 1326, which may be wired and/or wireless, and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system 1300 via communication path 1326.
In an embodiment, a tangible apparatus or article of manufacture comprising a tangible computer useable or readable medium having control logic (software) stored thereon is also referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 1300, main memory 1308, secondary memory 1310, and removable storage units 1318 and 1322, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system 1300), causes such data processing devices to operate as described herein.
Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use the invention using data processing devices, computer systems and/or computer architectures other than that shown in FIG. 13. In particular, embodiments may operate with software, hardware, and/or operating system implementations other than those described herein.
Conclusion
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections (if any), is intended to be used to interpret the claims. The Summary and Abstract sections (if any) may set forth one or more but not all exemplary embodiments of the invention as contemplated by the inventor(s), and thus, are not intended to limit the invention or the appended claims in any way.
While the invention has been described herein with reference to exemplary embodiments for exemplary fields and applications, it should be understood that the invention is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of the invention. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein.
Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments may perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein.
References herein to “one embodiment,” “an embodiment,” “an example embodiment,” or similar phrases, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein.
The breadth and scope of the invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (19)

What is claimed is:
1. A method for creating a desired audio effect, comprising:
causing a plurality of speakers to simultaneously play test signals configured to be received by a remote control device, wherein at least one of the test signals differs from at least one other of the test signals;
receiving recorded frequency responses of the speakers from the remote control device resulting from the playing of the test signals;
creating one or more filters to match an audio profile selected by a user, wherein at least some of the filters are based on at least distances from the speakers to the remote control device, wherein the remote control device is configured to enable wireless control of a media content device that provides content for playback using the speakers;
applying the filters to the recorded frequency responses to obtain filtered transformations of the speakers; and
applying the filtered transformations at the speakers to thereby achieve the selected audio profile.
2. The method of claim 1, wherein the filters are Linear Time Invariant (LTI) filters.
3. The method of claim 2, wherein the LTI filters comprise at least one of a Finite Impulse Response (FIR) filter and an Infinite Impulse Response (IIR) filter.
4. The method of claim 2, wherein the filters compensate for delays caused by distances between the speakers and the remote control device.
5. The method of claim 2, wherein the filters compensate for acoustic anomalies.
6. The method of claim 1, wherein applying the filters comprises convolving the filters with the recorded frequency responses of the speakers.
7. A media content device, comprising:
a processor operable to:
cause a plurality of speakers to simultaneously play test signals configured to be received by a remote control device, wherein at least one of the test signals differs from at least one other of the test signals;
receive from the remote control device recorded frequency responses of the speakers resulting from the playing of the test signals, wherein the remote control device is operable to enable a user to wirelessly control the media content device;
create one or more filters to match an audio profile selected by the user, wherein at least some of the filters are based on at least distances from the speakers to the remote control device, wherein the remote control device is configured to enable wireless control of the media content device that provides content for playback using the speakers; and
apply the filters to the recorded frequency responses to obtain filtered transformations of the speakers, wherein the filtered transformations are applied at the speakers to thereby achieve the selected audio profile.
8. The media content device of claim 7, wherein the filters are Linear Time Invariant (LTI) filters.
9. The media content device of claim 8, wherein the LTI filters comprise at least one of a Finite Impulse Response (FIR) filter and an Infinite Impulse Response (IIR) filter.
10. The media content device of claim 7, wherein the filters compensate for delays caused by distances between the speakers and the remote control device.
11. The media content device of claim 7, wherein the filters compensate for acoustic anomalies.
12. The media content device of claim 7, wherein to apply the filters the processor is operable to convolve the filters with the recorded frequency responses of the speakers.
13. A non-transitory, tangible computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising:
causing a plurality of speakers to play test signals configured to be received by a remote control device, wherein the test signals are generated having a sine wave, a crest factor, and a flat frequency response that sweeps from a starting frequency to an ending frequency;
receiving from the remote control device recorded frequency responses of the speakers resulting from the playing of the test signals;
creating one or more filters to match an audio profile selected by a user;
applying the filters to the recorded frequency responses to obtain filtered transformations of the speakers; and
determining a maximum audible frequency that the user can hear, wherein the filtered transformations are applied at the speakers to thereby achieve the selected audio profile, and wherein the filters comprise Linear Time Invariant (LTI) filters created based on at least distances between the speakers and the remote control device, the recorded frequency responses of the plurality of speakers, and a frequency response of the selected audio profile.
14. The method of claim 2, wherein the LTI filters are created based on distances between each of the plurality of speakers and the remote control device, the recorded frequency responses of the plurality of speakers, and a frequency response of the selected audio profile.
15. The method of claim 14, wherein the selected audio profile comprises at least one of: movie audio profile identifying one or more types of movies, music audio profile identifying one or more types of music, venue audio profile identifying one or more types of venues, and technical features audio profile identifying one or more types of technical features.
16. The method of claim 14, wherein causing the plurality of speakers to simultaneously play test signals comprises:
generating the test signals having a sine wave, a crest factor, and a flat frequency response that sweeps rapidly from a starting frequency to an ending frequency; and
transmitting the generated test signals to the plurality of speakers for output by the speakers.
17. The method of claim 16, further comprising:
determining a distance between each of the plurality of speakers and the remote control device based on at least a delay between when a generated test signal is transmitted to a speaker for output and when the transmitted test signal for output by the speaker is detected by the remote control device.
18. The method of claim 16, further comprising determining a maximum audible frequency of the user by:
transmitting a sequence of tones to the speakers for playback to the user, each tone of the sequence of tones having a frequency greater than a previously transmitted tone;
determining a frequency of the last transmitted tone of the sequence of tones, when receiving a response indicating that a current transmitted tone is not audible by the user; and
storing the frequency of the last transmitted tone of the sequence as the maximum audible frequency of the user.
19. The method of claim 18, wherein the maximum audible frequency is representative of a frequency threshold for the user, such that output of audio does not exceed the maximum audible frequency for the user, when the selected audio profile is being used.
US14/813,628 2015-07-30 2015-07-30 Audio preferences for media content players Active 2036-04-05 US10091581B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/813,628 US10091581B2 (en) 2015-07-30 2015-07-30 Audio preferences for media content players
PCT/US2016/044053 WO2017019690A1 (en) 2015-07-30 2016-07-26 Audio preferences for media content players
EP16831241.1A EP3329693A4 (en) 2015-07-30 2016-07-26 Audio preferences for media content players
US16/148,366 US10827264B2 (en) 2015-07-30 2018-10-01 Audio preferences for media content players

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/813,628 US10091581B2 (en) 2015-07-30 2015-07-30 Audio preferences for media content players

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/148,366 Continuation US10827264B2 (en) 2015-07-30 2018-10-01 Audio preferences for media content players

Publications (2)

Publication Number Publication Date
US20170034621A1 US20170034621A1 (en) 2017-02-02
US10091581B2 true US10091581B2 (en) 2018-10-02

Family

ID=57883535

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/813,628 Active 2036-04-05 US10091581B2 (en) 2015-07-30 2015-07-30 Audio preferences for media content players
US16/148,366 Active 2035-10-08 US10827264B2 (en) 2015-07-30 2018-10-01 Audio preferences for media content players

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/148,366 Active 2035-10-08 US10827264B2 (en) 2015-07-30 2018-10-01 Audio preferences for media content players

Country Status (3)

Country Link
US (2) US10091581B2 (en)
EP (1) EP3329693A4 (en)
WO (1) WO2017019690A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10310803B2 (en) * 2016-09-02 2019-06-04 Bose Corporation Systems and methods for controlling a modular speaker system
CN108377443B (en) * 2018-03-16 2020-09-11 中影数字巨幕(北京)有限公司 Sound playing control method and system
CN108322856B (en) * 2018-03-16 2020-05-05 中影数字巨幕(北京)有限公司 Sound playing control method and system
WO2021112646A1 (en) * 2019-12-06 2021-06-10 엘지전자 주식회사 Method for transmitting audio data by using short-range wireless communication in wireless communication system, and apparatus for same

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6243476B1 (en) * 1997-06-18 2001-06-05 Massachusetts Institute Of Technology Method and apparatus for producing binaural audio for a moving listener
US20020151995A1 (en) 2001-04-12 2002-10-17 Jorgenson Joel A. Distributed audio system for the capture, conditioning and delivery of sound
US6606280B1 (en) * 1999-02-22 2003-08-12 Hewlett-Packard Development Company Voice-operated remote control
US20060251265A1 (en) * 2005-05-09 2006-11-09 Sony Corporation Apparatus and method for checking loudspeaker
US20070147636A1 (en) * 2005-11-18 2007-06-28 Sony Corporation Acoustics correcting apparatus
US20080189355A1 (en) 2007-02-07 2008-08-07 Microsoft Corporation Per-Application Remote Volume Control
US20090110218A1 (en) * 2007-10-31 2009-04-30 Swain Allan L Dynamic equalizer
US20090225996A1 (en) * 2008-03-07 2009-09-10 Ksc Industries, Inc. Speakers with a digital signal processor
US20090232318A1 (en) * 2006-07-03 2009-09-17 Pioneer Corporation Output correcting device and method, and loudspeaker output correcting device and method
US20110002471A1 (en) 2009-07-02 2011-01-06 Conexant Systems, Inc. Systems and methods for transducer calibration and tuning
US20110299706A1 (en) * 2010-06-07 2011-12-08 Kazuki Sakai Audio signal processing apparatus and audio signal processing method
US8081776B2 (en) * 2004-04-29 2011-12-20 Harman Becker Automotive Systems Gmbh Indoor communication system for a vehicular cabin
US20120224701A1 (en) * 2011-03-04 2012-09-06 Kazuki Sakai Acoustic apparatus, acoustic adjustment method and program
US20130142366A1 (en) 2010-05-12 2013-06-06 Sound Id Personalized hearing profile generation with real-time feedback
US20140161281A1 (en) * 2012-12-11 2014-06-12 Amx, Llc Audio signal correction and calibration for a room environment
WO2014137740A1 (en) 2013-03-08 2014-09-12 Sound Innovations Inc. System and method for personalization of an audio equalizer
US20150036833A1 (en) * 2011-12-02 2015-02-05 Thomas Lukasczyk Method and device for testing a loudspeaker arrangement
US20150078596A1 (en) * 2012-04-04 2015-03-19 Sonicworks, Slr. Optimizing audio systems
US9031268B2 (en) * 2011-05-09 2015-05-12 Dts, Inc. Room characterization and correction for multi-channel audio
US20150382121A1 (en) * 2013-03-06 2015-12-31 Tiskerling Dynamics Llc System and method for robust simultaneous driver measurement for a speaker system
US20160198275A1 (en) * 2002-03-25 2016-07-07 Bose Corporation Automatic audio system equalizing
US20170330571A1 (en) * 2014-10-30 2017-11-16 D&M Holdings Inc. Audio device and computer-readable program

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7095455B2 (en) * 2001-03-21 2006-08-22 Harman International Industries, Inc. Method for automatically adjusting the sound and visual parameters of a home theatre system
US6804565B2 (en) * 2001-05-07 2004-10-12 Harman International Industries, Incorporated Data-driven software architecture for digital sound processing and equalization
US7848531B1 (en) * 2002-01-09 2010-12-07 Creative Technology Ltd. Method and apparatus for audio loudness and dynamics matching
US7502480B2 (en) * 2003-08-19 2009-03-10 Microsoft Corporation System and method for implementing a flat audio volume control model
WO2005099252A1 (en) * 2004-04-08 2005-10-20 Koninklijke Philips Electronics N.V. Audio level control
CN101853683B (en) * 2005-02-18 2011-12-21 松下电器产业株式会社 Stream supply device
US8406435B2 (en) * 2005-03-18 2013-03-26 Microsoft Corporation Audio submix management
US20060285701A1 (en) * 2005-06-16 2006-12-21 Chumbley Robert B System and method for OS control of application access to audio hardware
JP4257862B2 (en) * 2006-10-06 2009-04-22 パナソニック株式会社 Speech decoder
JP2010514235A (en) * 2006-12-18 2010-04-30 ソフト・ディービー・インコーポレーテッド Volume automatic adjustment method and system
US8103008B2 (en) * 2007-04-26 2012-01-24 Microsoft Corporation Loudness-based compensation for background noise
US20090076825A1 (en) * 2007-09-13 2009-03-19 Bionica Corporation Method of enhancing sound for hearing impaired individuals
US8611559B2 (en) * 2010-08-31 2013-12-17 Apple Inc. Dynamic adjustment of master and individual volume controls
JP5609445B2 (en) * 2010-09-03 2014-10-22 ソニー株式会社 Control terminal device and control method
US8595624B2 (en) * 2010-10-29 2013-11-26 Nokia Corporation Software application output volume control
US9952576B2 (en) * 2012-10-16 2018-04-24 Sonos, Inc. Methods and apparatus to learn and share remote commands
US9385678B2 (en) * 2013-05-03 2016-07-05 Honda Motor Co., Ltd. Methods and systems for controlling volume
US10063207B2 (en) * 2014-02-27 2018-08-28 Dts, Inc. Object-based audio loudness management

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6243476B1 (en) * 1997-06-18 2001-06-05 Massachusetts Institute Of Technology Method and apparatus for producing binaural audio for a moving listener
US6606280B1 (en) * 1999-02-22 2003-08-12 Hewlett-Packard Development Company Voice-operated remote control
US20020151995A1 (en) 2001-04-12 2002-10-17 Jorgenson Joel A. Distributed audio system for the capture, conditioning and delivery of sound
US20160198275A1 (en) * 2002-03-25 2016-07-07 Bose Corporation Automatic audio system equalizing
US8081776B2 (en) * 2004-04-29 2011-12-20 Harman Becker Automotive Systems Gmbh Indoor communication system for a vehicular cabin
US20060251265A1 (en) * 2005-05-09 2006-11-09 Sony Corporation Apparatus and method for checking loudspeaker
US20070147636A1 (en) * 2005-11-18 2007-06-28 Sony Corporation Acoustics correcting apparatus
US20090232318A1 (en) * 2006-07-03 2009-09-17 Pioneer Corporation Output correcting device and method, and loudspeaker output correcting device and method
US20080189355A1 (en) 2007-02-07 2008-08-07 Microsoft Corporation Per-Application Remote Volume Control
US20090110218A1 (en) * 2007-10-31 2009-04-30 Swain Allan L Dynamic equalizer
US20090225996A1 (en) * 2008-03-07 2009-09-10 Ksc Industries, Inc. Speakers with a digital signal processor
US20110002471A1 (en) 2009-07-02 2011-01-06 Conexant Systems, Inc. Systems and methods for transducer calibration and tuning
US20130142366A1 (en) 2010-05-12 2013-06-06 Sound Id Personalized hearing profile generation with real-time feedback
US20110299706A1 (en) * 2010-06-07 2011-12-08 Kazuki Sakai Audio signal processing apparatus and audio signal processing method
US20120224701A1 (en) * 2011-03-04 2012-09-06 Kazuki Sakai Acoustic apparatus, acoustic adjustment method and program
US9031268B2 (en) * 2011-05-09 2015-05-12 Dts, Inc. Room characterization and correction for multi-channel audio
US9641952B2 (en) * 2011-05-09 2017-05-02 Dts, Inc. Room characterization and correction for multi-channel audio
US20150036833A1 (en) * 2011-12-02 2015-02-05 Thomas Lukasczyk Method and device for testing a loudspeaker arrangement
US20150078596A1 (en) * 2012-04-04 2015-03-19 Sonicworks, Slr. Optimizing audio systems
US9380400B2 (en) * 2012-04-04 2016-06-28 Sonarworks Sia Optimizing audio systems
US20140161281A1 (en) * 2012-12-11 2014-06-12 Amx, Llc Audio signal correction and calibration for a room environment
US20150382121A1 (en) * 2013-03-06 2015-12-31 Tiskerling Dynamics Llc System and method for robust simultaneous driver measurement for a speaker system
WO2014137740A1 (en) 2013-03-08 2014-09-12 Sound Innovations Inc. System and method for personalization of an audio equalizer
US20170330571A1 (en) * 2014-10-30 2017-11-16 D&M Holdings Inc. Audio device and computer-readable program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion of the International Searching Authority for International Application No. PCT/US2016/044053, dated Nov. 3, 2016 (10 pages).

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240168125A1 (en) * 2015-07-30 2024-05-23 Roku, Inc. Mobile Device Based Control Device Locator

Also Published As

Publication number Publication date
US10827264B2 (en) 2020-11-03
WO2017019690A1 (en) 2017-02-02
US20190037311A1 (en) 2019-01-31
EP3329693A4 (en) 2019-06-05
US20170034621A1 (en) 2017-02-02
EP3329693A1 (en) 2018-06-06

Similar Documents

Publication Publication Date Title
US10827264B2 (en) Audio preferences for media content players
CN104871566B (en) Collaborative sound system
US20160119730A1 (en) Method for improving audio quality of online multimedia content
US10638245B2 (en) Dynamic multi-speaker optimization
JP7566897B2 (en) SYSTEM AND METHOD FOR PROVIDING SPATIAL AUDIO ASSOCIATED WITH A SIMULATED ENVIRONMENT - Patent application
CN111415673A (en) Customized audio processing based on user-specific and hardware-specific audio information
US9847767B2 (en) Electronic device capable of adjusting an equalizer according to physiological condition of hearing and adjustment method thereof
US20240069853A1 (en) Techniques for Extending the Lifespan of Playback Devices
JP7150033B2 (en) Methods for Dynamic Sound Equalization
Kim et al. Mobile maestro: Enabling immersive multi-speaker audio applications on commodity mobile devices
AU2021259316B2 (en) Priority media content
JP2020537470A (en) How to set parameters for personal application of audio signals
US20240331678A1 (en) Systems and methods for pre-generated inverse audio canceling
US20230195783A1 (en) Speech Enhancement Based on Metadata Associated with Audio Content
WO2022129487A1 (en) Personalised audio output
WO2024206496A1 (en) Adaptive streaming content selection for playback groups

Legal Events

Date Code Title Description
AS Assignment

Owner name: ROKU, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GARNER, GREGORY MACK;BROUILLETTE, PATRICK ALAN;REEL/FRAME:036235/0426

Effective date: 20150729

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: AMENDED AND RESTATED INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:ROKU, INC.;REEL/FRAME:042768/0268

Effective date: 20170609

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:ROKU, INC.;REEL/FRAME:048385/0375

Effective date: 20190219

AS Assignment

Owner name: ROKU, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK, AS BANK;REEL/FRAME:048420/0841

Effective date: 20190222

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: ROKU, INC., CALIFORNIA

Free format text: TERMINATION AND RELEASE OF INTELLECTUAL PROPERTY SECURITY AGREEMENT (REEL/FRAME 048385/0375);ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:062826/0001

Effective date: 20230221

AS Assignment

Owner name: CITIBANK, N.A., TEXAS

Free format text: SECURITY INTEREST;ASSIGNOR:ROKU, INC.;REEL/FRAME:068982/0377

Effective date: 20240916