
EP2432224A1 - Multimedia system - Google Patents

Multimedia system

Info

Publication number
EP2432224A1
Authority
EP
European Patent Office
Prior art keywords
processing unit
unit
audio
data stream
head unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP10177142A
Other languages
German (de)
French (fr)
Inventor
Wolfgang Hess
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harman Becker Automotive Systems GmbH
Original Assignee
Harman Becker Automotive Systems GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harman Becker Automotive Systems GmbH filed Critical Harman Becker Automotive Systems GmbH
Priority to EP10177142A
Publication of EP2432224A1
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2368Multiplexing of audio and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41422Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance located in transportation means, e.g. personal vehicle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4341Demultiplexing of audio and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

A head unit of a multimedia system is disclosed. In accordance with one example of the invention the head unit comprises: an interface providing an input audio data stream; an interface configured to receive an output audio data stream; a central processing unit; a graphic processing unit; and an audio processing unit receiving the input audio data stream and providing the processed output audio data stream, wherein the audio processing unit resides solely on the graphic processing unit or is distributed across the graphic processing unit and the central processing unit.

Description

    TECHNICAL FIELD
  • The present invention relates to a multimedia system, in particular to a head unit of an in-car infotainment system.
  • BACKGROUND
  • A "head unit" is a component of a multimedia system in particular in a vehicle which provides a hardware interface for the various components of an electronic media system. The following description concentrates on automotive applications. However, the use of multimedia systems is not limited to the automotive sector.
  • The head unit may form the centerpiece of the car's sound system and is typically located in the center of the dashboard. Head units give the user control over the vehicle's entertainment media: AM/FM radio, satellite radio, CDs/DVDs, cassette tapes (although these are now uncommon), MP3, GPS navigation, Bluetooth, telephone, media player, etc. Many head units afford the user precise control over detailed audio functions such as volume, band, frequency, speaker balance, speaker fade, bass, treble, equalizer, surround sound settings and so on.
  • Several car manufacturers are integrating more advanced systems into vehicles' head units so that these can control vehicular functions such as navigation and present vehicle data such as trouble warnings and odometer information. A head unit may thus serve as a secondary instrument panel.
  • The main board (also referred to as main circuit board) of a head unit may carry SDRAM (Synchronous Dynamic Random Access Memory) devices and a CPU (Central Processing Unit), similar to the main board of a PC. Further typical components are digital signal processors (DSPs) and FPGAs (Field Programmable Gate Arrays) for decoding MPEG-coded audio and video data (MPEG: short for "Moving Picture Experts Group", the body that defines the MPEG standards for coding audio and video), processing audio data (e.g. applying digital filters) and processing graphic data (e.g. generating 2D and 3D graphic effects). A Global Positioning System (GPS) receiver may also be included in the head unit in order to provide the function of a navigation system. The speedometer and other sensors may be connected to the head unit via a CAN (Controller Area Network) bus interface. Further audio and multimedia components (CD (compact disc) changer, audio amplifier, telephone, rear seat displays, etc.) are typically connected to the head unit using a MOST (Media Oriented Systems Transport) bus.
  • As the head unit hardware becomes more and more complex the hardware design efforts and the manufacturing costs increase. Consequently, there is a need for a head unit requiring a less complicated hardware structure without degrading the overall performance of the head unit.
  • SUMMARY
  • A head unit of a multimedia system is disclosed. In accordance with one example of the invention the head unit comprises: an interface providing an input audio data stream; an interface configured to receive an output audio data stream; a central processing unit; a graphic processing unit; and an audio processing unit receiving the input audio data stream and providing the processed output audio data stream, in which the audio processing unit resides solely on the graphic processing unit or is distributed across the graphic processing unit and the central processing unit.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, instead emphasis being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts. In the drawings:
  • FIG. 1
    illustrates a head unit of a car infotainment system connected with various external devices via a MOST bus;
    FIG. 2
    illustrates by means of a simplified block diagram the basic components of currently used head units including a CPU, a DSP and a FPGA;
    FIG. 3
    illustrates by means of a simplified block diagram the basic components of a head unit in accordance with the present invention including in particular a CPU and a GPU; and
    FIG. 4
    illustrates a head unit similar to FIG. 3 with an additional load-balancing unit.
    DETAILED DESCRIPTION
  • A GPU (Graphic Processing Unit) is a dedicated processor designed to perform the floating-point calculations that are fundamental in 2D and 3D graphics. The GPU has a programmable rendering pipeline which performs the vector and matrix transformations and the lighting calculations needed, usually to display a 2D projection of the data on the screen.
  • General-purpose computing on graphics processing units (referred to as GPGPU) is the technique of using a GPU, which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the CPU. It is made possible by adding programmable stages and higher precision arithmetic to the rendering pipelines, which allows software developers to use stream processing on non-graphics data. GPU functionality has, traditionally, been very limited. In fact, for many years the GPU was only used to accelerate certain parts of the graphics pipeline. Some improvements were needed before GPGPU became feasible.
  • Programmable vertex and fragment shaders were added to the graphics pipeline to enable game programmers to generate even more realistic effects. Vertex shaders allow the programmer to alter per-vertex attributes, such as position, colour, texture coordinates, and normal vector. Fragment shaders are used to calculate the colour of a fragment, i.e. per pixel. Programmable fragment shaders allow the programmer to substitute, for example, a lighting model other than those provided by default by the graphics card, typically simple Gouraud shading. Shaders have enabled graphics programmers to create lens effects, displacement mapping, and depth of field. Summarizing the above, graphics processing units available today are generally well suited for stream processing (see, for example, J.D. Owens, M. Houston, D. Luebke, S. Green, J.E. Stone, J.C. Phillips: GPU Computing, in: Proceedings of the IEEE, Vol. 96, No. 5, pp. 879-899, May 2008).
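  • As an illustrative aside, the stream-processing style described above maps naturally onto audio: each sample can be handled by its own GPU thread. A minimal CUDA sketch, assuming a 48 kHz mono block and an arbitrary gain (the kernel name, block size and values are illustrative assumptions, not part of the disclosure):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One thread per audio sample: the simplest form of stream processing.
__global__ void applyGain(float* samples, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) samples[i] *= gain;
}

int main() {
    const int n = 48000;                       // 1 s of audio at 48 kHz
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 0.1f;

    float* dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    applyGain<<<(n + 255) / 256, 256>>>(dev, n, 0.5f);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("first sample after gain: %f\n", host[0]);

    cudaFree(dev);
    delete[] host;
    return 0;
}
```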
  • FIG. 1 illustrates the basic structure of a modern automotive multimedia system (infotainment system) as it is currently employed by many manufacturers. Several separate components composing the multimedia system are connected via a bus that may be, for example, a MOST bus (Media Oriented Systems Transport bus), which is very common in automotive applications. The above mentioned components are, for example, the head unit 10 (which may serve as a bus master device), an audio amplifier 20, a television tuner 30, a CD changer 40, a telephone interface 50 (for establishing a wireless link, e.g. a Bluetooth link, to a mobile phone 51), etc. It should be noted that this list is not complete and may include various other components. Further, the mentioned components are not inevitably required in a multimedia system and have been included in the system of FIG. 1 as illustrative examples.
  • FIG. 2 illustrates a typical example of the hardware structure which may be used to implement the head unit 10 of FIG. 1. The illustrated components, namely a digital signal processor (DSP) 110, a field programmable gate array (FPGA) 120, a microcontroller (µC) 150, and a central processing unit (CPU) 130, are typically arranged on a main board of the head unit 10. However, the functionality provided by the FPGA 120 and by the microcontroller may also be included in the CPU 130 using appropriate software.
  • The CPU 130 is configured to handle the user-machine communication. This function is represented by the human-machine interface (HMI) 132 in the example of FIG. 2. The HMI 132 is coupled to an audio management unit 131 which may also be implemented as software in the CPU 130. The audio management unit 131 controls the audio settings of the audio subsystem of the overall multimedia system such as, for example, user-definable equalizer settings (representing, for example, different acoustic situations), surround sound settings, bass, treble, etc. The CPU 130 may further include a (software-implemented) media drive controller 133 which forms an interface to a non-volatile data storage medium such as, for example, a DVD (digital versatile disc) drive 140, a CD drive, or a flash memory device, to name just a few examples. The media drive controller 133 provides data streams representing audio and/or video data, in particular audio and video streams coded in accordance with the MPEG standards (e.g. AC-3, AAC and MP3 coded audio streams). The data streams may be encrypted, for example in accordance with the Content Scrambling System (CSS) commonly used on DVDs. However, other digital rights management (DRM) systems may be used in practice (e.g. FairPlay for encrypting AAC-coded audio files; AAC: short for Advanced Audio Coding).
  • The audio processing itself usually cannot be performed by the CPU, which is not designed to handle the large volume of numerical calculations necessary to process digital audio data. Thus, the audio processing is typically "outsourced" to a dedicated digital signal processor (DSP) optimized for audio data processing. The DSP 110 includes a (software-implemented) audio decoder unit 111 receiving a digital audio stream from the media drive controller 133 managed by the CPU 130. If the audio stream is encrypted, a stream decryption unit 121 (implemented, for example, in the FPGA 120) coupled between the audio decoder unit 111 and the media drive controller 133 may provide for data decryption. The decoded (and, when necessary, sample-rate converted) audio data may be made available to an audio post-processing unit 113 which provides a number of audio processing algorithms ranging, for example, from digital equalizing filters to complex sound field processing algorithms for creating a virtual concert hall impression or the like. The audio post-processing unit 113 provides the final digital audio signal which is forwarded to an audio power amplifier, for example via the MOST bus 60 (cf. FIG. 1).
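  • The decode-then-post-process chain just described can be pictured as a sequence of block-based stages driven by a forwarding loop. The following host-side C++ sketch shows only the shape of such a pipeline; the stage names mirror the reference numerals of FIG. 2, but the decoder, filter and sink bodies are placeholder assumptions:

```cpp
#include <vector>

using Block = std::vector<float>;

// Stands in for the audio decoder unit 111: yields dummy PCM blocks.
struct Decoder {
    Block decode() { return Block(256, 0.1f); }
};

// Stands in for the post-processing unit 113; a single gain replaces
// the equalizer / sound-field algorithms for brevity.
struct PostProcessor {
    float gain = 0.8f;
    void process(Block& b) { for (float& s : b) s *= gain; }
};

// Stands in for the MOST bus interface 122 feeding the amplifier 20.
struct Sink {
    void push(const Block&) { /* hand the block to the bus driver */ }
};

int main() {
    Decoder dec;
    PostProcessor post;
    Sink sink;
    for (int i = 0; i < 100; ++i) {  // the routing engine's forwarding loop
        Block b = dec.decode();
        post.process(b);
        sink.push(b);
    }
    return 0;
}
```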
  • The signal flow within the DSP 110 is managed by a (software-implemented) unit denoted as routing engine 112 in FIG. 2. The routing engine 112 is configured to forward audio stream data from the audio decoder unit 111 to the audio post-processing unit 113 and further to forward the post-processed audio data from the post-processing unit 113 to the MOST bus interface 122 (implemented in the FPGA 120 in the present example) so as to be forwarded via the bus 60 to an audio data sink such as, for example, the audio amplifier 20 (see FIG. 1). In order to process audio data originating from a source other than the storage device (e.g. DVD drive 140) coupled to the media drive controller 133, the routing engine 112 may be further configured to receive audio data from the MOST bus via the interface 122 and forward the received audio data to the post-processing unit 113. The processed audio data may then be sent back (again using the routing engine 112) to the MOST bus 60 via the interface 122 and further to the audio amplifier 20 as already mentioned above.
  • The operation of the audio decoder unit 111, the routing engine 112, and the post processing unit 113 is controlled and managed by a command and control logic 114 implemented in the DSP 110. The control logic 114 may be implemented as software executed by the DSP, coupled to the audio decoder unit 111, the routing engine 112 and the post processing unit 113, and further receive high-level control commands from the audio management unit 131 included in the CPU 130.
  • A network management unit 151 for managing the data transfer over the MOST bus 60 may be implemented in a separate microcontroller 150 and coupled to the MOST interface 122. However, the network management unit 151 could also be included in the CPU 130.
  • Present automotive multimedia systems (also called "infotainment systems") are configured to display information on large-area flat screens, to some extent including computationally complex 3D graphic effects. High-definition (HD) video systems are also finding their way into automotive applications, thereby increasing the required graphic computation power. Thus, head units may include high-performance graphic processing units (GPUs) to manage the graphic processing tasks and thus to reduce the CPU load.
  • The stream processing capability of modern GPUs may be advantageously used for the processing of audio streams.
  • FIG. 3 illustrates a head unit 10 for use in an automotive multimedia system (cf. FIG. 1). The system comprises a CPU 130, a GPU 160, and optionally a microcontroller 150 and an FPGA 120. The functionality of the latter two may alternatively be taken over by the GPU 160 and/or by the CPU 130.
  • In the example of FIG. 3, the MOST interface 122 is implemented using the FPGA 120 and the network management unit 151 is implemented using the microcontroller 150, analogously to the example of FIG. 2. The CPU 130 implements similar functions as in the example of FIG. 2, in particular the media drive controller 133, the audio management unit 131, and the HMI 132. As in the example of FIG. 2, the audio management unit 131 is coupled to the HMI 132 as well as to the network management unit 151 (in order to allow sending and receiving audio control commands, e.g. user volume settings, via the network).
  • Naturally, the graphic processing (unit 161) resides in (i.e. is implemented on) the GPU 160, which is coupled to the CPU 130 via a data bus. According to one example of the invention the system comprises an audio processing unit 162 which is implemented mainly in software executed solely by the GPU 160 or, alternatively, the audio processing is distributed over the CPU 130 and the GPU 160. However, the main calculation power is provided by the GPU 160, and thus a separate DSP 110 is no longer needed, resulting in a less complex hardware structure of the head unit 10. As illustrated in the previous example (FIG. 2), some signal processing may optionally be outsourced to the FPGA (i.e. the stream decryption in the example of FIG. 2) or directly included in the (optionally distributed) signal processing unit 162.
  • In accordance with the present invention a programmable GPU 160 is required. For this purpose some manufacturers have developed standard architectures such as CUDA (short for "Compute Unified Device Architecture") by NVIDIA Corp. When using the CPU 130 and the GPU 160 for distributed audio processing, synchronization is necessary. Such synchronization ensures that only valid data is exchanged between CPU and GPU. For this purpose, different threads may be synchronized using semaphores (i.e. protected variables or abstract data types that constitute a classic method of controlling access by several threads to a common resource in a parallel programming environment), mutex algorithms or the like. For example, infinite impulse response (IIR) filters may (due to their feedback loop) be implemented on the CPU, whereas finite impulse response (FIR) filters may be distributed to parallel threads and thus advantageously be implemented on the GPU. In this case a synchronization mechanism may be necessary to define which processor (CPU or GPU) gets valid input data at a certain time. IIR and FIR filters are used to realize various functions of the head unit (e.g. equalizing, surround sound processing, etc.).
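  • A sketch of this division of labour, under stated assumptions (filter lengths and coefficients are arbitrary): the FIR convolution computes each output sample independently, so one GPU thread per sample is natural, while the biquad IIR's feedback on y[i-1] and y[i-2] forces sequential evaluation on the CPU. Here cudaDeviceSynchronize() stands in for the valid-data handshake mentioned above:

```cuda
#include <cuda_runtime.h>

// FIR: every output sample is an independent dot product -> parallel.
__global__ void fir(const float* x, float* y,
                    const float* h, int taps, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = 0.0f;
    for (int k = 0; k < taps; ++k)
        if (i - k >= 0) acc += h[k] * x[i - k];
    y[i] = acc;
}

// IIR biquad: the feedback terms make it inherently sequential -> CPU.
void biquad(const float* x, float* y, int n,
            float b0, float b1, float b2, float a1, float a2) {
    float x1 = 0, x2 = 0, y1 = 0, y2 = 0;
    for (int i = 0; i < n; ++i) {
        y[i] = b0 * x[i] + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x[i];
        y2 = y1; y1 = y[i];
    }
}

int main() {
    const int n = 1024, taps = 8;
    float *x, *y, *h;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    cudaMallocManaged(&h, taps * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = (i == 0) ? 1.0f : 0.0f; // impulse
    for (int k = 0; k < taps; ++k) h[k] = 1.0f / taps;  // moving average

    fir<<<(n + 255) / 256, 256>>>(x, y, h, taps, n);
    cudaDeviceSynchronize();          // handshake: FIR output is now valid
    biquad(y, x, n, 0.2f, 0.4f, 0.2f, -0.5f, 0.1f);  // CPU stage reuses x

    cudaFree(x); cudaFree(y); cudaFree(h);
    return 0;
}
```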
  • Using a head unit as illustrated in FIG. 3, load balancing can be implemented by using an appropriate thread scheduling. In particular, audio algorithms that have to be executed step by step in a sequential manner are advantageously implemented on the CPU 130, whereas algorithms which can be parallelized (e.g. FIR filters, delay lines, decoder algorithms, etc.) are more efficiently processed by the GPU 160. In order to achieve the above-mentioned load balancing, some audio processing tasks (e.g. delay lines or decoder algorithms such as MP3 or AC-3) may be implemented on the GPU 160 and on the CPU 130 as well. In case the GPU 160 is loaded with extensive graphic processing (graphic processing unit 161), the audio processing may be redistributed such that some audio processing algorithms initially executed by the GPU 160 are "moved" to the CPU 130. During a transition time (when redistributing tasks for load balancing) the respective algorithm is executed by both processors (GPU 160 and CPU 130), both performing the same operations. Subsequently, a cross-fade is performed toward the processor with the lower load (e.g. the CPU), and the respective task on the processor with the higher load (e.g. the GPU) is stopped in order to free resources. During the transition time the parallel tasks (performing the same function) have to be synchronized, too.
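  • The cross-fade amounts to a linear mix of the two identically processed outputs while the losing processor's task winds down. A minimal host-side sketch (ramp length and buffer contents are assumptions):

```cpp
#include <vector>

// During migration both processors produce the same processed block;
// the output ramps from the old processor's result to the new one's.
std::vector<float> crossFade(const std::vector<float>& fromOld,
                             const std::vector<float>& toNew,
                             float& t, float step) {
    std::vector<float> out(fromOld.size());
    for (size_t i = 0; i < out.size(); ++i) {
        out[i] = (1.0f - t) * fromOld[i] + t * toNew[i];
        t = (t + step > 1.0f) ? 1.0f : t + step;  // clamp the ramp at 1
    }
    return out;  // once t == 1, the old task can be stopped
}

int main() {
    std::vector<float> gpuOut(64, 1.0f), cpuOut(64, 0.5f);
    float t = 0.0f;                   // ramp over one 64-sample block
    auto mixed = crossFade(gpuOut, cpuOut, t, 1.0f / 64.0f);
    return mixed.empty() ? 1 : 0;
}
```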
  • The communication between the GPU 160 and the CPU 130 as well as the load balancing may be managed and controlled by a load-balancing controller 134 which may be executed by the CPU 130. The load-balancing controller 134 residing on the CPU 130 is illustrated in FIG. 4 which includes all components of the example of FIG. 3 and additionally the load-balancing controller 134.
  • The load-balancing controller "knows" the tasks executed by the CPU 130 and the GPU 160 and is configured to estimate the respective processor loads (as a fraction of the respective maximum processor load) and to move tasks from one processor to the other in order to free resources on one processor when necessary (e.g. if the processor load exceeds a given threshold). A proven bus system may be used for communication and data exchange between the GPU 160 and the CPU 130 such as, for example, the Peripheral Component Interconnect Express (PCIe) bus.
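  • The threshold rule can be stated in a few lines of host code. The load estimates and the 80 % threshold below are assumptions for illustration; the disclosure does not fix particular values:

```cpp
#include <cstdio>
#include <string>
#include <vector>

enum class Proc { CPU, GPU };

struct Task { std::string name; float load; Proc placement; bool movable; };

// Sketch of the load-balancing controller 134: sum per-processor load
// (as a fraction of maximum) and migrate a task when a threshold is hit.
void rebalance(std::vector<Task>& tasks, float threshold = 0.8f) {
    float cpuLoad = 0.0f, gpuLoad = 0.0f;
    for (const Task& t : tasks)
        (t.placement == Proc::CPU ? cpuLoad : gpuLoad) += t.load;
    if (gpuLoad > threshold) {
        for (Task& t : tasks)
            if (t.placement == Proc::GPU && t.movable) {
                t.placement = Proc::CPU;  // cross-fade handover happens here
                std::printf("moved '%s' to the CPU\n", t.name.c_str());
                return;                   // move one task at a time
            }
    }
}

int main() {
    std::vector<Task> tasks = {
        {"graphics",         0.7f, Proc::GPU, false},
        {"FIR equalizer",    0.2f, Proc::GPU, true},
        {"IIR tone control", 0.1f, Proc::CPU, true},
    };
    rebalance(tasks);  // GPU load 0.9 > 0.8 -> the FIR equalizer migrates
    return 0;
}
```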
  • Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. For example, it will be readily understood by those skilled in the art that some functional units may alternatively be implemented as hardware or as software residing on different processors while remaining within the scope of the present invention.
  • Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims (9)

  1. A head unit of a multimedia system comprising:
    an interface providing an input audio data stream;
    an interface configured to receive an output audio data stream;
    a central processing unit;
    a graphic processing unit; and
    an audio processing unit receiving the input audio data stream and providing the processed output audio data stream,
    the audio processing unit residing solely on the graphic processing unit or being distributed across the graphic processing unit and the central processing unit.
  2. The head unit of claim 1 further comprising a load balancing controller residing on the central processing unit, the load balancing controller being configured to provide a load balancing between the central processing unit and the graphic processing unit.
  3. The head unit of claim 1 or 2 further comprising a field programmable gate array and a stream decryption unit residing on the field programmable gate array for decrypting the input audio data stream.
  4. The head unit of claim 1 or 2 further comprising a stream decryption unit residing on the graphic processing unit for decrypting the input audio data stream.
  5. The head unit of one of the claims 1 to 4 further comprising a network interface and a network management unit, the network interface and the network management unit being configured to send data to and receive data from a data bus.
  6. The head unit of claim 5 wherein the network interface resides on a field programmable gate array and the network management unit resides on a microcontroller.
  7. The head unit of claim 2, in which
    the audio processing unit is distributed across the graphic processing unit and the central processing unit and
    the load balancing controller is further configured to move the execution of at least one audio processing algorithm performed by the audio processing unit from the graphic processing unit to the central processing unit or vice versa.
  8. The head unit of claim 7, in which the load balancing controller is configured to estimate the processor load of the graphic processing unit and the central processing unit and to move the execution of at least one audio processing algorithm to the other processing unit if the processor load of one processing unit exceeds a given threshold.
  9. The head unit of claim 7 or 8, in which during a transition time, when moving the execution of an audio algorithm between processing units, the audio algorithm is executed by both processing units in parallel and synchronously.
EP10177142A 2010-09-16 2010-09-16 Multimedia system Withdrawn EP2432224A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP10177142A EP2432224A1 (en) 2010-09-16 2010-09-16 Multimedia system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP10177142A EP2432224A1 (en) 2010-09-16 2010-09-16 Multimedia system

Publications (1)

Publication Number Publication Date
EP2432224A1 (en)

Family

ID=43501410

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10177142A Withdrawn EP2432224A1 (en) 2010-09-16 2010-09-16 Multimedia system

Country Status (1)

Country Link
EP (1) EP2432224A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102012219917A1 (en) * 2012-10-31 2014-06-12 Continental Automotive Gmbh Method for managing a control unit network in a vehicle and ECU network
WO2016198112A1 (en) * 2015-06-11 2016-12-15 Telefonaktiebolaget Lm Ericsson (Publ) Nodes and methods for handling packet flows
CN117278761A (en) * 2023-11-16 2023-12-22 北京傲星科技有限公司 Vehicle-mounted video transmission system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006024957A2 (en) * 2004-07-01 2006-03-09 Harman Becker Automotive Systems Gmbh Computer architecture for a multimedia system used in a vehicle
US20060059494A1 (en) * 2004-09-16 2006-03-16 Nvidia Corporation Load balancing
EP2184869A1 (en) * 2008-11-06 2010-05-12 Studer Professional Audio GmbH Method and device for processing audio signals

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006024957A2 (en) * 2004-07-01 2006-03-09 Harman Becker Automotive Systems Gmbh Computer architecture for a multimedia system used in a vehicle
US20060059494A1 (en) * 2004-09-16 2006-03-16 Nvidia Corporation Load balancing
EP2184869A1 (en) * 2008-11-06 2010-05-12 Studer Professional Audio GmbH Method and device for processing audio signals

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
J.D. OWENS; M. HOUSTON; D. LUEBKE; S. GREEN; J.E. STONE; J.C. PHILLIPS: "GPU Computing", PROCEEDINGS OF THE IEEE, vol. 96, no. 5, May 2008 (2008-05-01), pages 879 - 899
JOHN NICKOLLS ET AL: "The GPU Computing Era", IEEE MICRO, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 30, no. 2, 1 March 2010 (2010-03-01), pages 56 - 69, XP011307192, ISSN: 0272-1732 *
OWENS J D ET AL: "GPU Computing", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 96, no. 5, 1 May 2008 (2008-05-01), pages 879 - 899, XP011207684, ISSN: 0018-9219 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102012219917A1 (en) * 2012-10-31 2014-06-12 Continental Automotive Gmbh Method for managing a control unit network in a vehicle and ECU network
WO2016198112A1 (en) * 2015-06-11 2016-12-15 Telefonaktiebolaget Lm Ericsson (Publ) Nodes and methods for handling packet flows
CN117278761A (en) * 2023-11-16 2023-12-22 北京傲星科技有限公司 Vehicle-mounted video transmission system and method
CN117278761B (en) * 2023-11-16 2024-02-13 北京傲星科技有限公司 Vehicle-mounted video transmission system and method

Similar Documents

Publication Publication Date Title
KR101697910B1 (en) Fault-tolerant preemption mechanism at arbitrary control points for graphics processing
TWI797576B (en) Apparatus and method for rendering a sound scene using pipeline stages
JP6333180B2 (en) online game
US11263064B2 (en) Methods and apparatus to facilitate improving processing of machine learning primitives
KR101666416B1 (en) Priority based context preemption
KR20160001710A (en) Method and apparatus for computing precision
US10176644B2 (en) Automatic rendering of 3D sound
US20200134906A1 (en) Techniques for generating visualizations of ray tracing images
TW202215376A (en) Apparatus and method for graphics processing unit hybrid rendering
WO2009131007A1 (en) Simd parallel computer system, simd parallel computing method, and control program
CN118285117A (en) Audio rendering method, audio rendering device and electronic device
CN113785279A (en) Stateless parallel processing method and device for tasks and workflows
EP2432224A1 (en) Multimedia system
JP7121019B2 (en) Exporting out-of-order pixel shaders
KR102223446B1 (en) Graphics workload submissions by unprivileged applications
US20220286798A1 (en) Methods and apparatus to generate binaural sounds for hearing devices
US20210111976A1 (en) Methods and apparatus for augmented reality viewer configuration
US20240331083A1 (en) Methods, systems, apparatus, and articles of manufacture to deliver immersive videos
US20240244216A1 (en) Predictive video decoding and rendering based on artificial intelligence
US11893654B2 (en) Optimization of depth and shadow pass rendering in tile based architectures
US20240331705A1 (en) Methods and apparatus to model speaker audio
US11862117B2 (en) Method and apparatus for matched buffer decompression
US20240256839A1 (en) Methods, systems, articles of manufacture and apparatus to generate flow and audio multi-modal output
WO2023025143A1 (en) Audio signal processing method and apparatus
US9014530B2 (en) System having movie clip object controlling an external native application

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME RS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20120922