
WO2007043841A1 - Method and apparatus for signal processing - Google Patents


Info

Publication number
WO2007043841A1
WO2007043841A1 (PCT/KR2006/004149)
Authority
WO
WIPO (PCT)
Prior art keywords
data
coding
coding scheme
pilot
scheme
Prior art date
Application number
PCT/KR2006/004149
Other languages
French (fr)
Inventor
Hyen O Oh
Hee Suk Pang
Dong Soo Kim
Jae Hyun Lim
Yang Won Jung
Chul Soo Lee
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020060079836A external-priority patent/KR20070108312A/en
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Priority to AU2006300102A priority Critical patent/AU2006300102B2/en
Priority to US12/083,459 priority patent/US8199827B2/en
Priority to EP06799227A priority patent/EP1946555A4/en
Publication of WO2007043841A1 publication Critical patent/WO2007043841A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving encoded video stream packets from an IP network
    • H04N21/4382Demodulation or channel decoding, e.g. QPSK demodulation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/167Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/18Vocoders using multiple modes
    • G10L19/20Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • the present invention relates to a method and apparatus for signal processing. More particularly, the present invention relates to a coding method for signal compression and signal restoration in an alternative coding scheme, an apparatus therefor, a method of transmitting the resultant digital broadcast signal, a data structure of the digital broadcast signal, and a broadcast receiver for the digital broadcast signal.
  • Background Art
  • the present invention relates to digital broadcasting. Recently, research into appliances capable of transmitting audio broadcasts, video broadcasts, data broadcasts, etc. in accordance with a digital scheme rather than an analog scheme, and into appliances capable of receiving and displaying the transmitted broadcasts, has been actively conducted. Currently, several such appliances are commercially available.
  • Examples of digital broadcasting are digital audio broadcasting and digital multimedia broadcasting.
  • Such digital broadcasting has various advantages.
  • the digital broadcasting can inexpensively provide diverse multimedia information services, and can be used for mobile broadcasting in accordance with an appropriate frequency band allocation.
  • it is also possible to create new revenue sources and to revitalize the receiver market, and thus to obtain broad industrial effects.
  • It is possible to transmit a plurality of audio services, for example, seven audio services, in a frequency band of about 1.5 MHz. All the seven audio services are transmitted in a state of being compressed in accordance with a "masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM)" audio coding scheme.
  • For example, one digital multimedia broadcasting (DMB) service and three audio services may be transmitted together.
  • the three audio services are transmitted in a state of being compressed in accordance with the MUSICAM audio coding scheme.
  • the present invention is directed to a signal processing method and apparatus that substantially obviate one or more of the problems due to limitations and disadvantages of the related art.
  • An object of the present invention devised to solve the above-mentioned problems is to provide a method for transmitting a digital broadcast signal and a data structure which enable transmission of an increased number of broadcast signals in a limited frequency band, and a broadcast receiver therefor.
  • Another object of the present invention is to provide a digital broadcast signal transmitting method and a data structure which enable decoding of services coded in accordance with at least one alternative coding scheme and outputting of the decoded services, and a broadcast receiver therefor.
  • Another object of the present invention devised to solve the above-mentioned problems is to provide a signal processing method and apparatus capable of achieving optimal signal transmission efficiency.
  • Another object of the present invention is to provide an efficient data coding method and an apparatus therefor.
  • Another object of the present invention is to provide encoding and decoding methods capable of optimizing the transmission efficiency of control data used for restoration of audio, and an apparatus therefor.
  • Another object of the present invention is to provide a medium including data encoded in accordance with the above-described encoding method.
  • Another object of the present invention is to provide a data structure for efficiently transmitting the encoded data.
  • Still another object of the present invention is to provide a system including the decoding apparatus.
  • a method for signal processing includes obtaining data coding identification information from a signal and data-decoding data in accordance with a data coding scheme indicated by the data coding identification information.
  • the data coding scheme includes at least a pilot coding scheme; the pilot coding scheme comprises decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
  • the data coding scheme further includes a differential coding scheme
  • the differential coding scheme is one of a frequency differential coding scheme and a time differential coding scheme
  • the time differential coding scheme is one of a forward time differential coding scheme and a backward time differential coding scheme.
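As an illustration of the differential coding variants listed above, the following Python sketch reconstructs parameter values from difference values for the frequency differential case and for the forward and backward time differential cases. The function names, the per-band/per-slot indexing, and the choice of which neighbouring time slot "forward" and "backward" refer to are assumptions made for this sketch, not definitions taken from the text.

    # Hedged sketch: decoding the three DIFF variants named above.
    # Indexing convention and direction of "forward"/"backward" are assumptions.

    def diff_freq_decode(first_band_value, freq_diffs):
        # Frequency differential: each band is the previously decoded band plus a difference.
        values = [first_band_value]
        for d in freq_diffs:
            values.append(values[-1] + d)
        return values

    def diff_time_forward_decode(prev_slot_values, time_diffs):
        # Forward time differential: reference is the same band in the earlier time slot.
        return [p + d for p, d in zip(prev_slot_values, time_diffs)]

    def diff_time_backward_decode(next_slot_values, time_diffs):
        # Backward time differential: reference is the same band in the later time slot.
        return [n + d for n, d in zip(next_slot_values, time_diffs)]

    print(diff_freq_decode(3, [1, 0, -2]))                        # [3, 4, 4, 2]
    print(diff_time_forward_decode([4, 5, 5, 6], [0, -1, 2, 0]))  # [4, 4, 7, 6]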
  • the method further includes obtaining entropy coding identification information and entropy-decoding the data using an entropy coding scheme indicated by the entropy coding identification information.
  • the data decoding step comprises executing the data-decoding for the entropy-decoded data using the data coding scheme.
  • the entropy decoding scheme is either a one-dimensional coding scheme or a multi-dimensional coding scheme, and the multi-dimensional coding scheme is one of a frequency pair coding scheme and a time pair coding scheme.
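To make the one-dimensional versus multi-dimensional distinction concrete, the toy Python sketch below decodes a prefix code with either a 1D table (one value per codeword) or a 2D pair table (two values per codeword, e.g. a frequency pair or a time pair). The codebooks shown are hypothetical and are not the entropy tables defined by the invention.

    # Toy sketch of 1D vs. 2D (pair) entropy decoding; codebooks are hypothetical.
    ONE_D_TABLE = {"0": 0, "10": 1, "110": 2, "111": 3}                    # codeword -> one value
    TWO_D_PAIR_TABLE = {"0": (0, 0), "10": (0, 1), "110": (1, 0), "111": (1, 1)}  # codeword -> pair

    def entropy_decode(bits, table):
        """Greedy prefix-code decoding: consume bits, emit table entries."""
        out, code = [], ""
        for b in bits:
            code += b
            if code in table:
                entry = table[code]
                # A 2D table yields two data (a frequency pair or time pair) per codeword.
                out.extend(entry if isinstance(entry, tuple) else (entry,))
                code = ""
        return out

    print(entropy_decode("100110", ONE_D_TABLE))       # [1, 0, 2]
    print(entropy_decode("100110", TWO_D_PAIR_TABLE))  # [0, 1, 0, 0, 1, 0]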
  • the method further includes decoding an audio signal, using the data as a parameter.
  • an apparatus for signal processing includes an identification information obtaining part for obtaining data coding identification information from a signal and a decoding part for data-decoding data in accordance with a data coding scheme indicated by the data coding identification information, wherein the data coding scheme comprises at least a pilot coding scheme; the pilot coding scheme comprises decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
  • a method for signal processing includes data-encoding data in accordance with a data coding scheme and generating and transferring data coding identification information indicating the data coding scheme.
  • the data coding scheme includes at least a pilot coding scheme, the pilot coding scheme comprises encoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value and the pilot difference value is generated using the data and the pilot reference value.
  • an apparatus for signal processing includes an encoding part for data-encoding data in accordance with a data coding scheme and an outputting part for generating and transferring data coding identification information indicating the data coding scheme.
  • the data coding scheme includes at least a pilot coding scheme; the pilot coding scheme comprises encoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
  • the present invention provides an effect of an enhancement in data transmission efficiency in that it is possible to transmit an increased number of audio services in a limited frequency band.
  • the present invention provides an effect capable of securing a desired compatibility in that it is possible to decode an audio service coded in accordance with one or more coding schemes, and to receive and output audio services coded in a conventional masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM) scheme.
  • MUSICAM universal sub-band integrated coding and multiplexing
  • the present invention enables efficient data coding and entropy coding, thereby enabling data compression and recovery with high transmission efficiency.
  • FIG. 1 is a diagram schematically illustrating fast information channel (FIC) and main service channel (MSC) structures for digital broadcasting according to the present invention
  • FIG. 2 is a diagram illustrating a structure of a fast information block (FIB) in digital broadcasting
  • FIG. 3 is a diagram illustrating a structure of a fast information group (FIG) in digital broadcasting
  • FIG. 4 is a diagram illustrating a service organization in the case in which the type of the FIG is 0, and an "Extension" field is 2
  • FIG. 5 is a table illustrating examples of the value of an added "audio service component type (ASCTy)" field;
  • FIG. 6 is a table illustrating other examples of the value of the added "ASCTy" field;
  • FIG. 7 is a table illustrating still other examples of the value of the added "ASCTy" field;
  • FIG. 8 is a table illustrating a procedure in which service components are decoded in the case that an "ASCTy" field as shown in FIG. 6 is added;
  • FIG. 9 is a flowchart illustrating a digital broadcast transmitting method according to the present invention.
  • FIG. 10 is a flowchart illustrating a digital broadcast receiving method according to the present invention.
  • FIG. 11 is a block diagram illustrating a configuration of the broadcast receiver adapted to receive a digital broadcast according to the present invention;
  • FIG. 12 and FIG. 13 are block diagrams of a system according to the present invention;
  • FIG. 14 and FIG. 15 are diagrams to explain PBC coding according to the present invention;
  • FIG. 16 is a diagram to explain types of DIFF coding according to the present invention;
  • FIGs. 17 to 19 are diagrams of examples to which DIFF coding scheme is applied.
  • FIG. 20 is a block diagram to explain a relation in selecting one of at least three coding schemes according to the present invention
  • FIG. 21 is a block diagram to explain a relation in selecting one of at least three coding schemes according to a related art
  • FIG. 22 and FIG. 23 are flowcharts for the data coding selecting scheme according to the present invention, respectively
  • FIG. 24 is a diagram to explain internal grouping according to the present invention;
  • FIG. 25 is a diagram to explain external grouping according to the present invention;
  • FIG. 26 is a diagram to explain multiple grouping according to the present invention
  • FIG. 27 and FIG. 28 are diagrams to explain mixed grouping according to other embodiments of the present invention, respectively;
  • FIG. 29 is an exemplary diagram of 1D and 2D entropy tables according to the present invention.
  • FIG. 30 is an exemplary diagram of two methods for 2D entropy coding according to the present invention.
  • FIG. 31 is a diagram of entropy coding scheme for PBC coding result according to the present invention.
  • FIG. 32 is a diagram of entropy coding scheme for DIFF coding result according to the present invention.
  • FIG. 33 is a diagram to explain a method of selecting an entropy table according to the present invention.
  • FIG. 34 is a hierarchical diagram of a data structure according to the present invention.
  • FIG. 35 is a block diagram of an apparatus for audio compression and recovery according to one embodiment of the present invention.
  • FIG. 36 is a detailed block diagram of a spatial information encoding part according to one embodiment of the present invention.
  • FIG. 37 is a detailed block diagram of a spatial information decoding part according to one embodiment of the present invention.

Best Mode for Carrying Out the Invention

  • The term "coding" used herein should be construed as including both an encoding procedure and a decoding procedure. Of course, it can be appreciated that a specific coding procedure may be applicable only to one of the encoding and decoding procedures; in such a case, a description thereof will be given separately.
  • "Coding" will also be referred to as "codec".
  • the present invention provides a method for transmitting a digital broadcast signal in a digital broadcasting control method, including inserting, into a broadcast stream, at least one service component compressed in accordance with an alternative coding scheme; inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme and transmitting, to a broadcast receiver, the broadcast stream including the at least one service component and the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
  • the alternative coding scheme comprises an alternative audio coding scheme.
  • the alternative audio coding scheme comprises at least one of an advanced audio coding (AAC) scheme and a bit sliced arithmetic coding (BSAC) scheme.
  • the alternative extension audio coding scheme additionally comprises at least one of a spectral band replication (SBR) scheme, a parametric stereo (PS) scheme, and a moving picture experts group (MPEG) surround scheme.
  • the alternative coding scheme may use the alternative audio coding scheme alone, or may use a combination of the alternative audio coding scheme with at least one alternative extension audio coding scheme.
  • the alternative audio coding scheme comprises an audio coding scheme having a higher compression rate than a masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM) scheme.
  • the step of inserting, into a broadcast stream, at least one service component compressed in accordance with an alternative coding scheme preferably comprises including the at least one service component in a main service channel (MSC) of the broadcast stream.
  • the step of inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme preferably comprises including the information in a fast information channel (FIC) of the broadcast stream.
  • the step of inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme preferably comprises including, in the FIC of the broadcast stream, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, and including, in an audio superframe of the MSC, information indicating that the at least one service component has been compressed in accordance with a specific alternative extension audio coding scheme.
  • the audio superframe may include a header and one or more frames.
  • the audio superframe may also include a syncword for detection, and a cyclic redundancy check (CRC) for the header.
  • the audio superframe may further include identifiers respectively informing of whether or not associated alternative extension audio coding schemes, for example, SBR, PS, and MPEG surround schemes were used.
  • the identifiers respectively informing of whether or not the associated alternative extension audio coding schemes were used may be selectively included.
  • the PS scheme may be used only when the number of AAC-coded channels is mono.
  • the identifier associated with the PS scheme, namely 'ps_flag', may be included in the header only when the number of AAC-coded channels corresponds to mono.
  • the PS scheme may also be used only when the SBR scheme is used. In this case, accordingly, the identifier 'ps_flag' associated with the PS scheme may be included in the header only when the SBR scheme is used.
  • the PS and MPEG surround schemes may be used simultaneously. Also, there may be a case wherein one of the PS and MPEG surround schemes is used and the other is not. Accordingly, whether or not the MPEG surround scheme was used may be signaled only when the PS scheme was not used. Information about whether or not the MPEG surround scheme was used need not be expressed as a simple ON/OFF indication, but may be expressed in the form of bits expressing one of diverse modes associated with the MPEG surround scheme.
  • the detailed mode of the MPEG surround scheme can be identified through configuration information present in an MPEG surround payload.
  • the MPEG surround mode may be partially determined based on the number of AAC-coded channels.
  • For example, when the number of AAC-coded channels is mono, the MPEG surround mode must be 515 or the like, whereas when it is stereo, the MPEG surround mode may be 525 or the like.
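The conditional header signalling described in the preceding paragraphs can be sketched in Python as follows. The field names (sbr_flag, ps_flag, mps_mode) and widths are illustrative assumptions rather than the normative superframe syntax; the sketch only reflects the stated conditions that ps_flag is present when the AAC configuration is mono and SBR is used, and that an MPEG surround mode is signalled as a multi-bit value only when PS is not used.

    # Hedged sketch of the conditional superframe-header signalling described above.
    # Field names and widths are illustrative assumptions.

    def parse_superframe_header(read_bits, num_aac_channels):
        """read_bits(n) is assumed to return the next n header bits as an int."""
        hdr = {}
        hdr["sbr_flag"] = read_bits(1)                  # was SBR used?
        # ps_flag is only present when AAC is mono and SBR is in use (see above).
        if num_aac_channels == 1 and hdr["sbr_flag"]:
            hdr["ps_flag"] = read_bits(1)
        else:
            hdr["ps_flag"] = 0
        # MPEG Surround is signalled only when PS is not used, and not as a simple
        # on/off bit but as a mode (e.g. a 515- or 525-like tree configuration).
        if not hdr["ps_flag"]:
            hdr["mps_mode"] = read_bits(2)              # 0: none, other values: MPS modes
        return hdr

    # Minimal usage with a canned bit source: sbr=1, then ps=1 (mono + SBR case).
    bits = iter([1, 1])
    print(parse_superframe_header(lambda n: next(bits), num_aac_channels=1))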
  • the inclusion of the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme in the FIC of the broadcast stream comprises defining, in an audio service component type (ASCTy) field, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
  • the alternative coding scheme comprises an alternative audio coding scheme, as described above.
  • the present invention also provides a data structure of a digital broadcast signal including at least one service component compressed in accordance with an alternative coding scheme and information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme.
  • the information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme is defined for a decoding operation in a broadcast receiver.
  • the information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme is defined in a fast information channel (FIC).
  • the information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme is defined in an audio service component type (ASCTy) field.
  • such an audio service component type (ASCTy) field defines information indicating that at least one service component, which is transmitted, has been compressed in accordance with a specific alternative coding scheme selected from at least one alternative coding scheme.
  • the alternative coding scheme includes an alternative audio coding scheme.
  • the present invention further provides a digital broadcast receiver for receiving a digital broadcast including a tuner for receiving a broadcast stream containing at least one service component compressed in accordance with an alternative coding scheme, and information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme; a determinator for determining, based on the information, the alternative coding scheme used to compress the at least one service component included in the received broadcast stream and a controller for decoding the at least one service component compressed in accordance with the alternative coding scheme, using a corresponding decoding scheme selected based on the result of the determination by the determinator.
  • the determinator executes the determination, using an FIC decoder and an MSC decoder.
  • the tuner preferably receives a broadcast stream including an MSC containing the at least one service component, and an FIC containing the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
  • the broadcast stream which contains, in the FIC thereof, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, preferably defines the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, in an ASCTy field of the FIC.
  • the alternative coding scheme comprises an alternative audio coding scheme.
  • the technique of the present invention for alternative coding of broadcast signals may be applied to the case in which a video signal or data signal is transmitted as a broadcast signal, in order to achieve alternative coding of the video signal or data signal. Accordingly, transmission of video signals or data signals after alternative coding thereof using the following embodiments also falls under the scope of the present invention.
  • FIG. 1 is a diagram schematically illustrating fast information channel (FIC) and main service channel (MSC) structures for digital broadcasting according to the present invention.
  • In accordance with the present invention, a main service channel (MSC) may include not only an audio service compressed in accordance with a MUSICAM audio coding scheme, but also several audio services compressed in accordance with an alternative audio coding scheme.
  • Hereinafter, the audio service compressed in accordance with the MUSICAM audio coding scheme is referred to as MUSICAM audio (MA), and the audio service compressed in accordance with the alternative audio coding scheme is referred to as alternative audio (AA).
  • MSC means a channel used to transmit an audio service component, a data service component, or the like.
  • the MSC is a data channel divided into a number of coded sub-channels, namely, a time-interleaved data channel.
  • Each sub-channel transmits one or more service components.
  • the organizations of the sub-channels and service components are referred to as a multiplex configuration.
  • Information is added to the FIC in order to enable the broadcast receiver to decode the newly-added audio services compressed in accordance with the alternative audio coding scheme, and to output the decoded audio services.
  • the FIC which is changed in accordance with the present invention, will be described in detail in conjunction with the third and fourth embodiments.
  • the FIC means a channel for enabling the broadcast receiver to more rapidly access various information associated with digital audio broadcasting.
  • the FIC is used to transmit multiplex configuration information (MCI) and service information (SI).
  • FIG. 2 is a diagram illustrating a structure of a fast information block (FIB) in digital broadcasting.
  • FIG. 3 is a diagram illustrating a structure of a fast information group (FIG) in digital broadcasting.
  • FIG. 4 is a diagram illustrating a service organization in the case in which the type of the FIG is 0, and an "Extension" field is 2.
  • the FIC shown in FIG. 1 includes FIBs.
  • each FIB consists of 256 bits.
  • the FIB includes an FIB data field and a cyclic redundancy check (CRC).
  • the FIB data field includes one or more FIGs, an end marker, and padding.
  • Each FIG has an FIG header consisting of information about FIG type and information about length, and an FIG data field.
  • the application of the FIG may mean information about the MCI and SI.
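A minimal walker over the FIB data field, based on the structure described above (each FIG carries a header with its type and length information, followed by its data field, with an end marker and padding closing the FIB). The one-byte header layout assumed here (upper 3 bits type, lower 5 bits length, 0xFF treated as end marker/padding) is an assumption for this sketch, not a layout quoted from the text.

    # Hedged sketch: iterate over FIGs inside one FIB data field.
    def iter_figs(fib_data_field: bytes):
        """Yield (fig_type, fig_data) pairs until the end marker / padding is reached."""
        pos = 0
        while pos < len(fib_data_field):
            header = fib_data_field[pos]
            if header == 0xFF:          # treated as end marker / padding in this sketch
                break
            fig_type = header >> 5      # assumed 3-bit FIG type
            length = header & 0x1F      # assumed 5-bit length of the FIG data field
            yield fig_type, fib_data_field[pos + 1: pos + 1 + length]
            pos += 1 + length

    # Example: one FIG of type 0 with 2 data bytes, followed by padding.
    fib = bytes([0x02, 0xAB, 0xCD, 0xFF, 0xFF])
    print(list(iter_figs(fib)))   # [(0, b'\xab\xcd')]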
  • the FIG data field includes a "Current/next (C/N)" field and an "Extension" field, among other fields.
  • the "Extension" field may be re-defined for 32 different applications.
  • a basic service organization is defined.
  • one field, which is carried in one FIG, includes all service descriptions applied to one service (for example, a service k).
  • When the "P/D (Program/Data)" field has a value of 1, the "SId (Service Identification)" field includes an "ECC" field, a "Country Id" field, and a "Service reference" field.
  • The "Country Id" field and the "Service reference" field indicate whether the service that is transmitted is usable in a specific area served by a specific ensemble or in all areas.
  • "CAId(Conditional Access Identifier)” field is a field for identifying an access control system (ACS) used for the associated service that is transmitted.
  • "Number of service components” field is a field for identifying the number of service components associated with the associated service that is transmitted.
  • “Service component description” field includes a TMId field of 2 bits, etc.
  • The "TMId" field, which is a 2-bit field, indicates one of the following four cases.
  • When the "TMId" field has a value of 00, it may indicate an audio mode of an MSC stream mode. When the "TMId" field has a value of 01, it may indicate a data mode of an MSC stream mode. When the "TMId" field has a value of 10, it may indicate an FIC data channel (FIDC). When the "TMId" field has a value of 11, it may indicate an MSC packet data mode.
  • Even when video service components or data service components are transmitted after being coded or compressed using other schemes, they can be decoded by a broadcast receiver, as long as the broadcast receiver is configured in accordance with the present invention.
  • When the "TMId" field has a value of 00, the remaining 14 bits of the "Service component description" field consist of an "ASCTy" field, a "SubChId" field, a "P/S" field, and a "CA flag" field.
  • the "ASCTy(Audio Service Component Type)" field is a field for indicating the type of the associated audio service component.
  • the "SubChId(Sub-channel Identifier)" field is a field for identifying a sub-channel in which the associated service component is transmitted.
  • the "P/S(Primary/Secondary)” field is a field for indicating whether the associated service component that is transmitted is a primary component or a secondary component.
  • the "CA flag” field is a field for indicating whether or not an access control is applied to the associated service component that is transmitted.
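The following sketch unpacks a 16-bit "Service component description" for the TMId = 00 case described above. The bit widths assumed here (TMId 2 bits, ASCTy 6, SubChId 6, P/S 1, CA flag 1) are consistent with the 2-bit TMId, the remaining 14 bits listed above, and the 6-bit ASCTy values (such as 63 = 111111) discussed later, but they are stated as assumptions rather than quoted from the text.

    # Hedged sketch: field widths are assumptions consistent with the description above.
    def parse_service_component_description(word16: int) -> dict:
        tmid = (word16 >> 14) & 0x3
        if tmid != 0b00:
            raise ValueError("this sketch only handles the MSC stream audio mode (TMId=00)")
        return {
            "TMId":    tmid,
            "ASCTy":   (word16 >> 8) & 0x3F,   # audio service component type
            "SubChId": (word16 >> 2) & 0x3F,   # sub-channel carrying the component
            "P/S":     (word16 >> 1) & 0x1,    # primary (1) or secondary (0) -- assumption
            "CA":      word16 & 0x1,           # access control applied?
        }

    print(parse_service_component_description(0b00_000100_000011_1_0))
    # {'TMId': 0, 'ASCTy': 4, 'SubChId': 3, 'P/S': 1, 'CA': 0}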
  • FIG. 5 is a table illustrating examples of the value of the added "ASCTy" (Audio Service Component Type) field.
  • FIG. 6 and FIG. 7 are tables illustrating other examples of the value of the added "ASCTy" field.
  • The reason why the value of the "ASCTy" field is re-defined is to enable the broadcast receiver to decode an audio service component coded in accordance with a scheme other than the MUSICAM audio coding scheme, based on the re-defined "ASCTy" field value.
  • That is, the broadcast receiver can decode various audio service components, based on the re-defined "ASCTy" field value.
  • The "ASCTy" field values are defined such that a value of 3, 4, 5, or the like means that the associated audio service component has been compressed in accordance with an alternative audio coding scheme other than the MUSICAM scheme, differently from conventional cases.
  • the above values are construed only for illustrative purposes.
  • the third embodiment may be implemented through, for example, one of the following three methods as shown in FIGs. 5, 6, and 7.
  • the first method is to transmit, in one sub-channel, service components compressed in accordance with a plurality of alternative audio coding schemes.
  • the alternative audio coding schemes may include, for example, AAC, SBR, and MPEG surround schemes. Of course, other audio coding schemes may be taken into consideration.
  • the case in which the "ASCTy" field has a value of '63' (111111) will be omitted from the following description. This case will be separately described later.
  • the AAC audio coding scheme and SBR audio coding scheme are often collectively called a high efficiency-advanced audio coding (HE-AAC) scheme.
  • When the "ASCTy" field has a value of '4' (000100), as shown in FIG. 5, it means transmission of service components compressed in accordance with the AAC audio coding scheme, SBR audio coding scheme, and MPEG surround scheme, called background sound.
  • When the "ASCTy" field has the corresponding value shown in FIG. 5, it means transmission of service components compressed in accordance with the AAC audio coding scheme, SBR audio coding scheme, and MPEG surround scheme, called multi-channel audio extension.
  • the service components called multi-channel audio extension may mean additional information or the like for providing further-upgraded audio effects.
  • the service components of the multi-channel audio extension may include information associated with implementation of an additional service such as a 5.1-channel audio service.
  • the second method is to transmit, in respective sub-channels, service components compressed in accordance with a plurality of alternative audio coding schemes.
  • the alternative audio coding schemes may include, for example, AAC, SBR, and MPEG surround schemes. Of course, other audio coding schemes may be taken into consideration.
  • When the "ASCTy" field has a value of '4' (000100), as shown in FIG. 6, it means transmission of a service component compressed in accordance with the SBR audio coding scheme, called background sound.
  • When the "ASCTy" field has the corresponding value shown in FIG. 6, it means transmission of a service component compressed in accordance with the MPEG surround scheme, called 'multi-channel audio extension'.
  • the third method is to transmit, in one sub-channel, a part of service components compressed in accordance with a plurality of alternative audio coding schemes, and to transmit, in another sub-channel, the remaining part of the service components. For example, it may be possible to take into consideration a method for transmitting, in one sub-channel, service components compressed in accordance with the AAC and SBR schemes, and transmitting, in another sub-channel, a service component compressed in accordance with the MPEG surround scheme.
  • the AAC audio coding scheme and SBR audio coding scheme are often collectively called a high efficiency-advanced audio coding (HE-AAC) scheme.
  • When the "ASCTy" field has a value of '4' (000100), as shown in FIG. 7, it means transmission of service components compressed in accordance with the AAC audio coding scheme and SBR audio coding scheme, called background sound.
  • When the "ASCTy" field has the corresponding value shown in FIG. 7, it means transmission of a service component compressed in accordance with the MPEG surround scheme, called multi-channel audio extension.
  • When the "ASCTy" field has a value of '63' (111111), as shown in FIGs. 5, 6, and 7 in common, it means transmission of at least one service component in an MPEG-2 transport stream.
  • the value of 63 (111111) is construed only for illustrative purposes.
  • the "ASCTy" field may have other values for this definition.
  • the at least one service component may include at least one of an audio service component, an A/V service component, and a data service component.
  • the present invention has a feature in that it is possible to define an A/V service component or a data service component in the "ASCTy" field, and thus, to transmit the service component as a digital broadcast.
  • FIG. 8 is a table illustrating a procedure in which service components are decoded in the case that an addition of an "ASCTy" field as shown in FIG. 6 is made.
  • When the "SId" field has a value of '0x1234' as shown in FIG. 8, it may indicate that the associated service is a KBS1 broadcasting service.
  • the KBS1 broadcasting service may include a service component compressed in accordance with the AAC scheme, a service component compressed in accordance with the SBR scheme, and a service component compressed in accordance with the MPEG surround scheme.
  • FIG. 8 also shows the values of the "SubChId" field respectively indicating the sub-channels (paths) used to transmit the service component compressed using the AAC scheme, the service component compressed using the SBR scheme, and the service component compressed using the MPEG surround scheme, together with the values of the "ASCTy" field respectively indicating the types of the service components, and the values of the "P/S" field indicating whether each service component is a primary component or a secondary component.
  • the broadcast receiver includes only an AAC decoder, it decodes only the service component compressed using the AAC scheme, and outputs the decoded service component.
  • the broadcast receiver includes an AAC-SBR decoder, it decodes the service components respectively compressed using the AAC scheme and SBR scheme, and outputs the decoded service components.
  • the broadcast receiver includes an AAC-SBR (with MPEG surround) decoder, it decodes the service components compressed using the AAC scheme, SBR scheme, and MPEG surround scheme, and outputs the decoded service components.
  • When the "SId" field has a value of '0x1235' as shown in FIG. 8, it may indicate that the associated service is a KBS2 broadcasting service.
  • the KBS2 broadcasting service may include a service component compressed in accordance with the AAC scheme and a service component compressed in accordance with the SBR scheme.
  • FIG. 8 also shows the values of the "SubChId" field respectively indicating the sub-channels (paths) used to transmit the service component compressed using the AAC scheme and the service component compressed using the SBR scheme, together with the values of the "ASCTy" field respectively indicating the types of the service components, and the values of the "P/S" field indicating whether each service component is a primary component or a secondary component.
  • the broadcast receiver includes only an AAC decoder, it decodes only the service component compressed using the AAC scheme, and outputs the decoded service component.
  • the broadcast receiver includes an AAC- SBR decoder, it decodes the service components respectively compressed using the AAC scheme and SBR scheme, and outputs the decoded service components.
  • When the "SId" field has a value of '0x5678' as shown in FIG. 8, it may indicate that the associated service is an SBS1 broadcasting service.
  • the SBS1 broadcasting service may include a service component compressed in accordance with the AAC scheme and a service component compressed in accordance with the MPEG surround scheme.
  • FIG. 8 also shows the values of the "SubChId" field respectively indicating the sub-channels (paths) used to transmit the service component compressed using the AAC scheme and the service component compressed using the MPEG surround scheme, together with the values of the "ASCTy" field respectively indicating the types of the service components, and the values of the "P/S" field indicating whether each service component is a primary component or a secondary component.
  • the broadcast receiver includes only an AAC decoder, it decodes only the service component compressed using the AAC scheme, and outputs the decoded service component.
  • the broadcast receiver includes an AAC- MPEG surround decoder, it decodes the service components compressed using the AAC scheme and MPEG surround scheme, and outputs the decoded service components.
  • When the "SId" field has a value of '0x5777' as shown in FIG. 8, it may indicate that the associated service is an SBS2 broadcasting service.
  • the SBS2 broadcasting service may include a service component compressed in accordance with the MUSICAM scheme.
  • the service component may be decoded by the existing MUSICAM decoder, and may then be outputted.
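The capability-driven behaviour described for FIG. 8 can be summarised in the following sketch: a receiver decodes only those service components for which it has decoders, so an AAC-only receiver outputs just the AAC component, while an AAC-SBR receiver also uses the SBR component. The service-to-component mapping mirrors the examples above ('0x1234', '0x1235', '0x5777'), but the dictionary itself is an illustrative stand-in for the FIC signalling.

    # Hedged sketch of decoder-capability-based component selection (cf. FIG. 8).
    SERVICE_COMPONENTS = {
        0x1234: ["AAC", "SBR", "MPEG_SURROUND"],   # KBS1-like service in the example above
        0x1235: ["AAC", "SBR"],                    # KBS2-like service
        0x5777: ["MUSICAM"],                       # SBS2-like service
    }

    def decodable_components(sid: int, receiver_decoders: set) -> list:
        """Return the service components this receiver can actually decode and output."""
        return [c for c in SERVICE_COMPONENTS.get(sid, []) if c in receiver_decoders]

    aac_only_receiver = {"AAC", "MUSICAM"}
    he_aac_receiver = {"AAC", "SBR", "MUSICAM"}

    print(decodable_components(0x1234, aac_only_receiver))  # ['AAC']
    print(decodable_components(0x1234, he_aac_receiver))    # ['AAC', 'SBR']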
  • As described above, an "ASCTy" field value is added in order to enable transmission of a service component compressed using an alternative audio coding scheme, and to enable the broadcast receiver to decode the transmitted service component. At the same time, it remains possible to transmit and decode service components compressed using the existing MUSICAM scheme.
  • the present invention has an advantage in that it is compatible with transmission and decoding schemes for conventional digital broadcastings.
  • FIG. 9 is a flowchart illustrating a digital broadcast transmitting method according to the present invention.
  • FIG. 10 is a flowchart illustrating a digital broadcast receiving method according to the present invention.
  • a broadcasting station or the like may insert, into a digital broadcast stream to be transmitted, information indicating that the digital broadcast includes a service component compressed in accordance with an alternative coding scheme, and information indicating that the service component has been compressed in accordance with a specific alternative coding scheme, prior to the transmission of the digital broadcast stream (S701).
  • the alternative coding scheme may be an alternative audio coding scheme, an alternative video coding scheme, an alternative data coding scheme, or the like.
  • the alternative audio coding scheme may be, for example, an advanced audio coding (AAC) scheme, a bit sliced arithmetic coding (BSAC) scheme, or the like.
  • the alternative audio coding scheme may additionally include a spectral band replication (SBR) scheme, a moving picture experts group (MPEG) surround scheme, or the like.
  • the information about the service components compressed using the above-described alternative audio coding schemes may be transmitted in a fast information channel (FIC).
  • the service components to be transmitted may be defined in the "ASCTy" field of the FIC.
  • the service components may be transmitted in a main service channel (MSC).
  • the broadcasting station or the like may transmit the resultant broadcast stream to a broadcast receiver or the like (S702).
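A minimal sketch of steps S701 and S702: the compressed service component payloads are placed into MSC sub-channels, the FIC carries the re-defined ASCTy value announcing the coding scheme, and the assembled stream is then transmitted. The container classes and the particular ASCTy number used below are illustrative assumptions.

    # Hedged sketch of assembling the broadcast stream (S701) before transmission (S702).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ServiceComponent:
        subch_id: int
        ascty: int          # re-defined value identifying the alternative coding scheme
        payload: bytes

    @dataclass
    class BroadcastStream:
        fic_entries: List[dict] = field(default_factory=list)   # signalling (FIC)
        msc_subchannels: dict = field(default_factory=dict)     # payloads (MSC)

    def insert_component(stream: BroadcastStream, comp: ServiceComponent) -> None:
        stream.msc_subchannels[comp.subch_id] = comp.payload                      # component into MSC
        stream.fic_entries.append({"SubChId": comp.subch_id, "ASCTy": comp.ascty})  # info into FIC

    stream = BroadcastStream()
    insert_component(stream, ServiceComponent(subch_id=3, ascty=4, payload=b"..."))  # '4' is illustrative
    # S702: the assembled stream (FIC + MSC) is then transmitted to the broadcast receiver.
    print(stream.fic_entries, list(stream.msc_subchannels))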
  • the broadcast receiver receives an associated digital broadcast transmitted from the broadcast station or the like (S703).
  • the broadcast receiver may be an appliance capable of receiving a digital broadcast.
  • the broadcast receiver may be a television, a mobile phone, a DMB appliance, etc.
  • the broadcast receiver determines whether or not the audio service component of the received digital broadcast has been compressed in accordance with an alternative audio coding scheme (S704).
  • the determination (S704) may be achieved by decoding the "ASCTy" field of the FIC.
  • When it is determined that the audio service component has been compressed in accordance with an alternative audio coding scheme, the audio service component is decoded by a decoder newly added in accordance with the present invention, and is then output (S705).
  • the newly-added decoder may be, for example, an AAC decoder, an AAC-SBR decoder, an AAC-SBR (with MPEG surround) decoder, etc.
  • Otherwise, when the audio service component has been compressed in accordance with the MUSICAM scheme, it is decoded by a MUSICAM decoder, and is then output (S706).
  • FIG. 11 is a block diagram illustrating a configuration of the broadcast receiver adapted to receive a digital broadcast according to the present invention.
  • the broadcast receiver which can receive a digital broadcast according to the present invention, and can decode the received digital broadcast, will be described with reference to FIG. 11 (FIGs. 1 to 7 are also referred to).
  • the broadcast receiver 801 includes a user interface 802, a fast information channel (FIC) decoder 803, a controller 804, a tuner 805, a main service channel (MSC) decoder 806, an audio decoder 807, a speaker 808, a data decoder 809, a video decoder 810, and a display device 811.
  • the broadcast receiver 801 may be a television, a mobile phone, or a DMB appliance which can receive a digital broadcast, and can then output the digital broadcast.
  • the user interface 802 functions to transfer, to the controller 804, a command input by the user in association with, for example, channel adjustment, volume adjustment, etc.
  • the tuner 805 designates a desired ensemble, and receives information about the FIC and MSC from the broadcasting station or the like at a frequency corresponding to the designated ensemble, under the control of the controller 804.
  • the FIC decoder 803 receives the FIC information from the tuner 805, and extracts multiplex configuration information (MCI), service information (SI), and an FIC data channel (FIDC) from the FIC information.
  • the FIC decoder 803 also functions to extract configuration information for sorting each service component, and information about the property of the service component.
  • the MSC decoder 806 receives the MSC information from the tuner 805.
  • the MSC decoder 806 decodes data transmitted through the sub-channel, based on the MCI and SI information sent from the controller 804, and transfers the decoded data to the audio decoder 807.
  • the audio decoder 807 functions to re-configure the data sent from the MSC decoder 806 to a format enabling outputting of an audio signal from a coded bitstream.
  • the audio decoder 807 may include at least one of an AAC decoder, an AAC-SBR decoder, an AAC-MPEG surround decoder, and an AAC-SBR (with MPEG surround) decoder.
  • the audio decoder 807 may additionally include a decoder capable of decoding an audio service component coded in a compression scheme other than the above-described schemes.
  • the speaker 808 functions to output the audio service components decoded by the audio decoder 807.
  • the data decoder 809 can function to re-configure service information decoded from the FIC, and desired data from the bitstream received via the MSC decoder 806.
  • the video decoder 810 functions to restore a video, using a compressed video bitstream and information associated therewith.
  • the display device 811 functions to output the image or the like restored by the video decoder 810.
  • the controller 804 functions to systematically control the functions of the user interface 802, FIC decoder 803, tuner 805, MSC decoder 806, audio decoder 807, data decoder 809, video decoder 810, etc.
  • the controller 804 controls the tuner 805 to be tuned to a channel on which the selected digital broadcast is transmitted.
  • the digital broadcast is an audio broadcast including service components compressed in accordance with alternative audio coding schemes, as shown in FIG. 5, 6, or 7.
  • the alternative audio coding schemes may include, for example, an advanced audio coding (AAC) scheme and a bit sliced arithmetic coding (BSAC) scheme.
  • the alternative audio coding schemes may additionally include a spectral band replication (SBR) scheme and a moving picture experts group (MPEG) surround scheme.
  • the FIC information as to the audio broadcast which includes information associated with the service components compressed in accordance with the alternative audio coding schemes, is sent to the FIC decoder 803 under the control of the controller 804.
  • the MSC information as to the service components is sent to the MSC decoder 806 under the control of the controller 804.
  • the FIC decoder 803 reads, from the FIC information, the value of the ASCTy field defining the type of an audio service component sent from the tuner 805, and thus, determines the compression type of the audio service component.
  • the controller 804 receives, from the FIC decoder 803, information as to the compression type of the audio service component sent from the tuner 805, and then controls the audio decoder 807, based on the received information, to determine a desired audio decoder.
  • For example, when the audio service component has been compressed in accordance with the AAC scheme, the controller 804 performs a control operation for selecting an AAC decoder as the audio decoder 807.
  • the audio decoder 807 receives, from the MSC decoder 806, an audio service component compressed in a specific audio coding scheme, and re-configures the received audio service component to a format enabling outputting of the audio service component through the speaker 808.
  • Audio signals are coded by frames. One or more coded frames form a superframe.
  • the superframe has header information.
  • the coding of audio signals can be performed by selectively using the AAC, SBR, parametric stereo (PS), and MPEG surround (MPS) schemes.
  • the identifiers each informing of whether or not an associated one of the above-described codecs was used may be selectively included in the header of the superframe. Alternatively, a part or all of the identifiers may be included in the header of the superframe.
  • FIG. 12 and FIG. 13 are diagrams of a system according to the present invention.
  • FIG. 12 shows an encoding apparatus 1 and FIG. 13 shows a decoding apparatus 2.
  • an encoding apparatus 1 includes at least one of a data grouping part 10, a first data encoding part 20, a second data encoding part 31, a third data encoding part 32, an entropy encoding part 40 and a bitstream multiplexing part 50.
  • the second and third data encoding parts 31 and 32 can be integrated into one data encoding part 30.
  • variable length encoding is performed by the entropy encoding part 40 on the data encoded by the second and third data encoding parts 31 and 32.
  • the data grouping part 10 binds input signals by a prescribed unit to enhance data processing efficiency.
  • the data grouping part 10 discriminates data according to data types.
  • the discriminated data is encoded by one of the data encoding parts 20, 31 and 32.
  • the data grouping part 10 discriminates some of data into at least one group for the data processing efficiency.
  • the grouped data is encoded by one of the data encoding parts 20, 31 and 32.
  • a grouping method according to the present invention in which operations of the data grouping part 10 are included, shall be explained in detail with reference to FIGs. 24 to 28 later.
  • Each of the data encoding parts 20, 31 and 32 encodes input data according to a corresponding encoding scheme.
  • Each of the data encoding parts 20, 31 and 32 adopts at least one of a PCM (pulse code modulation) scheme and a differential coding scheme.
  • the first data encoding part 20 adopts the PCM scheme
  • the second data encoding part 31 adopts a first differential coding scheme using a pilot reference value
  • the third data encoding part 32 adopts a second differential coding scheme using a difference from neighbor data, for example.
  • the first differential coding scheme is named pilot based coding (PBC) and the second differential coding scheme is named differential coding (DIFF).
  • the entropy encoding part 40 performs variable length encoding according to statistical characteristics of data with reference to an entropy table 41. And, operations of the entropy encoding part 40 shall be explained in detail with reference to FIGs. 29 to 33 later.
  • the bitstream multiplexing part 50 arranges and/or converts the coded data to correspond to a transfer specification and then transfers the arranged/converted data in a bitstream form. Yet, if a specific system employing the present invention does not use the bitstream multiplexing part 50, it is apparent to those skilled in the art that the system can be configured without the bitstream multiplexing part 50.
  • the decoding apparatus 2 is configured to correspond to the above- explained encoding apparatus 1.
  • a bitstream demultiplexing part 60 receives an inputted bitstream and interprets and classifies various information included in the received bitstream according to a preset format.
  • An entropy decoding part 70 recovers the data into the original data before entropy encoding using an entropy table 71.
  • the entropy table 71 is identically configured with the former entropy table 41 of the encoding apparatus 1 shown in FIG. 12.
  • a first data decoding part 80, a second data decoding part 91 and a third data decoding part 92 perform decoding to correspond to the aforesaid first to third data encoding parts 20, 31 and 32, respectively.
  • Since the second and third data decoding parts 91 and 92 both perform differential decoding, overlapping decoding processes can be integrated so as to be handled within one decoding process.
  • a data reconstructing part 95 recovers or reconstructs data decoded by the data decoding parts 80, 91 and 92 into original data prior to data encoding. Occasionally, the decoded data can be recovered into data resulting from converting or modifying the original data.
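A condensed sketch of the encoder flow of FIG. 12: grouped data is coded by one of the data encoding parts (PBC or DIFF here; PCM omitted for brevity), and the chosen scheme is signalled alongside the coded result so that entropy coding and multiplexing can follow. The selection rule used below (pick whichever scheme yields the smaller absolute residues) is a placeholder assumption; the invention's actual selection criteria are discussed with reference to FIGs. 20 to 23.

    # Hedged sketch of routing a data group to PBC or DIFF encoding (cf. FIG. 12).
    def pbc_encode(group, pilot):
        return pilot, [v - pilot for v in group]                    # pilot reference + pilot differences

    def diff_encode(group):
        return group[0], [b - a for a, b in zip(group, group[1:])]  # first value + neighbour differences

    def encode_group(group):
        pilot = round(sum(group) / len(group))                      # one possible pilot choice
        candidates = {
            "PBC":  pbc_encode(group, pilot),
            "DIFF": diff_encode(group),
        }
        # Placeholder cost: smaller absolute residues are assumed cheaper to entropy-code.
        scheme = min(candidates, key=lambda s: sum(abs(d) for d in candidates[s][1]))
        return scheme, candidates[scheme]                           # identification info + coded data

    print(encode_group([10, 11, 9, 10, 12]))   # ('PBC', (10, [0, 1, -1, 0, 2]))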
  • the present invention uses at least two coding schemes together for the efficient execution of data coding and intends to provide an efficient coding scheme using correlation between coding schemes.
  • the present invention intends to provide various kinds of data grouping schemes for the efficient execution of data coding.
  • the present invention intends to provide a data structure including the features of the present invention.
  • various additional configurations should be used as well as the elements shown in FIG. 12 and FIG. 13. For example, data quantization needs to be executed or a controller is needed to control the above process.
  • PCM is a coding scheme that converts an analog signal to a digital signal.
  • the PCM samples an analog signal at a preset interval and then quantizes the corresponding result.
  • PCM may be disadvantageous in coding efficiency, but can be effectively utilized for data unsuitable for the PBC or DIFF coding schemes that will be explained later.
  • In the present invention, the PCM is used together with the PBC or DIFF coding scheme in performing data coding, which shall be explained with reference to FIGs. 20 to 33 later.
  • PBC is a coding scheme that determines a specific reference within a discriminated data group and uses the relation between data as a coding target and the determined reference.
  • a value becoming a reference for applying the PBC can be defined as a reference value, a pilot, a pilot reference value, or a pilot value. Hereinafter, for convenience of explanation, it is named a pilot reference value.
  • a difference value between the pilot reference value and data within a group can be defined as difference or pilot difference.
  • a data group as a unit to apply the PBC indicates a final group having a specific grouping scheme applied by the aforesaid data grouping part 10. Data grouping can be executed in various ways, which shall be explained in detail later.
  • the PBC process according to the present invention includes at least two steps as follows. First of all, a pilot reference value corresponding to a plurality of parameters is selected. In this case, the pilot reference value is decided with reference to a parameter becoming a PBC target.
  • a pilot reference value is set to a value selected from: an average value of the parameters becoming PBC targets, an approximate value of that average, an intermediate value corresponding to an intermediate level of the target parameters, and a most frequently used value among the target parameters.
  • a pilot reference value can be set to a preset default value as well.
  • a pilot value can be decided by a selection within a preset table.
  • Alternatively, temporary pilot reference values are set using at least two of the various pilot reference value selecting methods, coding efficiency is calculated for each case, and the temporary pilot reference value corresponding to the case having the best coding efficiency is then selected as the final pilot reference value.
  • Here, Ceil[x] is the smallest integer not less than x, and Floor[x] is the largest integer not exceeding x.
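A sketch of the pilot-selection step described above: several temporary pilot candidates are formed (the Ceil and Floor approximations of the average, an intermediate value, the most frequent value), a coding cost is estimated for each, and the candidate with the best estimated efficiency becomes the final pilot reference value. The cost measure used here (sum of absolute pilot differences) is an assumption standing in for the actual coding-efficiency calculation.

    # Hedged sketch: choose a pilot reference value from several temporary candidates.
    import math
    from collections import Counter

    def choose_pilot(params):
        mean = sum(params) / len(params)
        candidates = {
            math.ceil(mean),                       # Ceil of the average
            math.floor(mean),                      # Floor of the average
            sorted(params)[len(params) // 2],      # an intermediate (median-like) value
            Counter(params).most_common(1)[0][0],  # most frequently used value
        }
        # Pick the temporary pilot whose differences look cheapest to code (assumed cost).
        return min(candidates, key=lambda p: sum(abs(v - p) for v in params))

    print(choose_pilot([9, 10, 10, 11, 14]))   # e.g. 10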
  • Subsequently, a difference value between the selected pilot and a parameter within the group is found. For instance, a difference value is calculated by subtracting the pilot reference value from a parameter value becoming a PBC target. This is explained with reference to FIG. 14 and FIG. 15 as follows.
  • FIG. 14 and FIG. 15 are diagrams to explain PBC coding according to the present invention.
  • For example, the pilot reference value is set to 10 in FIG. 15.
  • the result of PBC coding includes the selected pilot reference value and the calculated d[n]. These values become targets of the entropy coding that will be explained later. Besides, PBC is more effective when the overall deviation of the target parameter values is small.
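On the decoding side, the transmitted PBC result (the pilot reference value and the pilot difference values d[n]) is turned back into parameters by simple addition, as in this sketch; the pilot value 10 follows the FIG. 15 example, while the d[n] values are hypothetical.

    # Hedged sketch: PBC decoding recovers each parameter as pilot + d[n].
    def pbc_decode(pilot_reference, pilot_differences):
        return [pilot_reference + d for d in pilot_differences]

    print(pbc_decode(10, [1, -1, 0, 2, 0]))   # [11, 9, 10, 12, 10]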
  • a target of PBC coding is not limited to one kind of data. It is possible to code digital data of various signals by PBC. For instance, it is applicable to audio coding that will be explained later. In the present invention, additional control data processed together with audio data is explained in detail as a target of PBC coding.
  • control data is transferred in addition to a downmixed signal of audio and is then used to reconstruct the audio.
  • control data is defined as spatial information or spatial parameter.
  • the spatial information includes various kinds of spatial parameters such as a channel level difference (hereinafter abbreviated CLD), an inter-channel coherence (hereinafter abbreviated ICC), a channel prediction coefficient (hereinafter abbreviated CPC) and the like.
  • the CLD is a parameter that indicates an energy difference between two different channels.
  • the CLD has a value ranging between -15 and +15.
  • the ICC is a parameter that indicates a correlation between two different channels.
  • the ICC has a value ranging between 0 and 7.
  • the CPC is a parameter that indicates a prediction coefficient used to generate three channels from two channels. For instance, the CPC has a value ranging between -20 and 30.
  • the spatial information further includes a gain value used to adjust a gain of a signal, e.g., ADG, and ATD (arbitrary tree data).
  • the ADG is a parameter that is discriminated from the CLD, ICC or CPC.
  • the ADG corresponds to a parameter to adjust a gain of audio, differing from the spatial information such as CLD, ICC, CPC and the like extracted from a channel of an audio signal. Yet, in use, it is able to process the ADG or ATD in the same manner as the aforesaid CLD to raise efficiency of audio coding.
  • a partial parameter means a portion of a parameter.
  • in case that a parameter is constructed with n bits, the n bits are divided into at least two parts. And, it is able to define the two parts as first and second partial parameters, respectively.
  • the second partial parameter excluded in the difference calculation should be transferred as a separate value.
  • a least significant bit (LSB) is defined as the second partial parameter, and a parameter value constructed with the remaining (n-1) upper bits can be defined as the first partial parameter.
  • the second partial parameter excluded in the difference calculation is separately transferred, and is then taken into consideration in reconstructing a final parameter by a decoding part.
  • the CPC parameter of the aforesaid spatial information is suitable for the application of the PBC scheme. Yet, it is not preferable to apply this scheme to the CPC parameter under a coarse quantization scheme, because in case that the quantization scheme is coarse, the deviation between first partial parameters increases, which lowers coding efficiency.
  • the data coding using partial parameters is applicable to DIFF scheme as well as PBC scheme.
  • a method of processing a signal using partial parameters includes the steps of obtaining a first partial parameter using a reference value corresponding to the first partial parameter and a difference value corresponding to the reference value and deciding a parameter using the first partial parameter and a second partial parameter.
  • the reference value is either a pilot reference value or a difference reference value.
  • the first partial parameter includes partial bits of the parameter and the second partial parameter includes the rest bits of the parameter.
  • the second partial parameter includes a least significant bit of the parameter.
  • the signal processing method further includes the step of reconstructing an audio signal using the decided parameter.
  • the parameter is spatial information including at least one of CLD, ICC, CPC and
  • the parameter is the CPC and if a quantization scale of the parameter is not coarse, it is able to obtain the second partial parameter.
  • a final parameter is decided by multiplying the first partial parameter by two and adding the second partial parameter to the multiplication result.
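  • The partial-parameter handling described above can be sketched as follows, assuming the LSB is carried separately as the second partial parameter and the parameter is rebuilt as 2 × first + second; the function names are illustrative, not taken from any standard.

```python
def split_parameter(value):
    """Split a non-negative integer parameter into (first, second) partial parameters:
    'second' is the least significant bit, 'first' is the remaining upper bits."""
    second = value & 1          # LSB, transferred separately (not difference-coded)
    first = value >> 1          # upper (n-1) bits, the target of PBC/DIFF coding
    return first, second

def merge_parameter(first, second):
    """Rebuild the final parameter: multiply the first partial parameter by two and
    add the separately transferred second partial parameter (the LSB)."""
    return (first << 1) + second   # equivalent to 2 * first + second

for v in range(16):
    f, s = split_parameter(v)
    assert merge_parameter(f, s) == v
```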
  • An apparatus for processing a signal using partial parameters includes a first parameter obtaining part obtaining a first partial parameter using a reference value corresponding to the first partial parameter and a difference value corresponding to the reference value and a parameter deciding part deciding a parameter using the first partial parameter and a second partial parameter.
  • the signal processing apparatus further includes a second parameter obtaining part obtaining the second partial parameter by receiving the second partial parameter.
  • the first parameter obtaining part, the parameter deciding part and the second partial parameter obtaining part are included within the aforesaid data decoding part 91 or 92.
  • a method of processing a signal using partial parameters includes the steps of dividing a parameter into a first partial parameter and a second partial parameter and generating a difference value using a reference value corresponding to the first partial parameter and the first partial parameter.
  • the signal processing method further includes the step of transferring the difference value and the second partial parameter.
  • An apparatus for processing a signal using partial parameters includes a parameter dividing part dividing a parameter into a first partial parameter and a second partial parameter and a difference value generating part generating a difference value using a reference value corresponding to the first partial parameter and the first partial parameter.
  • the signal processing apparatus further includes a parameter outputting part transferring the difference value and the second partial parameter.
  • the parameter dividing part and the difference value generating part are included within the aforesaid data encoding part 31 or 32.
  • since PBC coding of the present invention selects a separate pilot reference value and then has the selected pilot reference value included in a bitstream, it is probable that transmission efficiency of the PBC coding becomes lower than that of a DIFF coding scheme that will be explained later. So, the present invention intends to provide an optimal condition to perform PBC coding.
  • first of all, PBC coding is applicable to at least three or more data. This corresponds to a result of considering efficiency of data coding: DIFF or PCM coding is more efficient than PBC coding if only two data exist within a group.
  • more preferably, PBC coding is applied to a case that at least five data exist within a group.
  • a case that PBC coding is most efficiently applicable is a case that there are at least five data becoming targets of data coding and that deviations between the at least five data are small.
  • a minimum number of data suitable for the execution of PBC coding will be decided according to a system and coding environment.
  • in a signal processing method according to the present invention, the number of data corresponding to a pilot reference value is obtained, and if the number of the data meets a preset condition, the pilot reference value and a pilot difference value corresponding to the pilot reference value are obtained. Subsequently, the data are obtained using the pilot reference value and the pilot difference value. In particular, the number of the data is obtained using the number of the data bands in which the data are included.
  • one of a plurality of data coding schemes is decided using the number of data and the data are decoded according to the decided data coding scheme.
  • a plurality of the data coding schemes include a pilot coding scheme at least. If the number of the data meets a preset condition, the data coding scheme is decided as the pilot coding scheme.
  • the data decoding process includes the steps of obtaining a pilot reference value corresponding to a plurality of the data and a pilot difference value corresponding to the pilot reference value and obtaining the data using the pilot reference value and the pilot difference value.
  • the data are parameters. And, an audio signal is recovered using the parameters.
  • identification information corresponding to the number of the parameters is received and the number of the parameters is generated using the received identification information.
  • identification information indicating a plurality of the data coding schemes is hierarchically extracted.
  • a first identification information indicating a first data coding scheme is extracted and a second identification information indicating a second data coding scheme is then extracted using the first identification information and the number of the data.
  • the first identification information indicates whether it is a DIFF coding scheme.
  • the second identification information indicates whether it is a pilot coding scheme or a PCM grouping scheme.
  • a pilot difference value is generated using a pilot reference value corresponding to a plurality of the data and the data.
  • the generated pilot difference value is then transferred.
  • the pilot reference value is transferred.
  • data coding schemes are decided according to the number of a plurality of data.
  • the data are then encoded according to the decided data coding schemes.
  • a plurality of the data coding schemes include a pilot coding scheme at least. If the number of the data meets a preset condition, the data coding scheme is decided as the pilot coding scheme.
  • An apparatus for processing a signal includes a number obtaining part obtaining a number of data corresponding to a pilot reference value, a value obtaining part obtaining the pilot reference value and a pilot difference value corresponding to the pilot reference value if the number of the data meets a preset condition, and a data obtaining part obtaining the data using the pilot reference value and the pilot difference value.
  • the number obtaining part, the value obtaining part and the data obtaining part are included in the aforesaid data decoding part 91 or 92.
  • An apparatus for processing a signal according to another embodiment of the present invention includes a scheme deciding part deciding one of a plurality of data coding schemes according to a number of a plurality of data and a decoding part decoding the data according to the decided data coding scheme.
  • a plurality of the data coding schemes include a pilot coding scheme at least.
  • An apparatus for processing a signal includes a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data if a number of a plurality of the data meets a preset condition and an output part transferring the generated pilot difference value.
  • the value generating part is included in the aforesaid data encoding part 31 or 32.
  • An apparatus for processing a signal according to another further embodiment of the present invention includes a scheme deciding part deciding a data coding scheme according to a number of a plurality of data and an encoding part encoding the data according to the decided data coding scheme.
  • a plurality of the data coding schemes include a pilot coding scheme at least.
  • a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value are obtained.
  • the data are obtained using the pilot reference value and the pilot difference value.
  • the method may further include a step of decoding at least one of the pilot difference value and the pilot reference value.
  • the PBC applied data are parameters.
  • the method may further include the step of reconstructing an audio signal using the obtained parameters.
  • An apparatus for processing a signal includes a value obtaining part obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value and a data obtaining part obtaining the data using the pilot reference value and the pilot difference value.
  • the value obtaining part and the data obtaining part are included in the aforesaid data decoding part 91 or 92.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data and outputting the generated pilot difference value.
  • An apparatus for processing a signal includes a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data and an output part outputting the generated pilot difference value.
  • a method of processing a signal includes the steps of obtaining a pilot reference value corresponding to a plurality of gains and a pilot difference value corresponding to the pilot reference value and obtaining the gain using the pilot reference value and the pilot difference value. And, the method may further include the step of decoding at least one of the pilot difference value and the pilot reference value. Moreover, the method may further include the step of reconstructing an audio signal using the obtained gain.
  • the pilot reference value may be an average of a plurality of the gains, an averaged intermediate value of a plurality of the gains, a most frequently used value of a plurality of the gains, a value set to a default or one value extracted from a table.
  • the method may further include the step of selecting the gain having highest encoding efficiency as a final pilot reference value after the pilot reference value has been set to each of a plurality of the gains.
  • An apparatus for processing a signal includes a value obtaining part obtaining a pilot reference value corresponding to a plurality of gains and a pilot difference value corresponding to the pilot reference value and a gain obtaining part obtaining the gain using the pilot reference value and the pilot difference value.
  • a method of processing a signal according to another further embodiment of the present invention includes the steps of generating a pilot difference value using a pilot reference value corresponding to a plurality of gains and the gains and outputting the generated pilot difference value.
  • an apparatus for processing a signal includes a value calculating part generating a pilot difference value using a pilot reference value corresponding to a plurality of gains and the gains and an outputting part outputting the generated pilot difference value.
  • DIFF coding is a coding scheme that uses relations between a plurality of data existing within a discriminated data group, which may be called differential coding.
  • a data group which is a unit in applying the DIFF, means a final group to which a specific grouping scheme is applied by the aforesaid data grouping part 10.
  • data having a specific meaning as grouped in the above manner is defined as parameter to be explained. And, this is the same as explained for the PBC.
  • the DIFF coding scheme is a coding scheme that uses difference values between parameters existing within a same group, and more particularly, difference values between neighbor parameters.
  • FIG. 16 is a diagram to explain types of DIFF coding according to the present invention. DIFF coding is discriminated according to a direction in finding a difference value from a neighbor parameter.
  • DIFF coding types can be classified into DIFF in a frequency direction (hereinafter abbreviated DIFF_FREQ or DF) and DIFF in a time direction (hereinafter abbreviated DIFF_TIME or DT).
  • referring to FIG. 16, Group-1 indicates DIFF(DF), which calculates a difference value on the frequency axis, whereas Group-2 or Group-3 calculates a difference value on the time axis.
  • the DIFF(DT), which calculates a difference value on the time axis, is further discriminated according to the direction on the time axis in which the difference value is found.
  • the DIFF(DT) applied to the Group-2 corresponds to a scheme that finds a difference value between a parameter value at a current time and a parameter value at a previous time (e.g., Group-1). This is called backward time DIFF(DT) (hereinafter abbreviated DT-BACKWARD).
  • the DIFF(DT) applied to the Group-3 corresponds to a scheme that finds a difference value between a parameter value at a current time and a parameter value at a next time (e.g., Group-4). This is called forward time DIFF(DT) (hereinafter abbreviated DT-FORWARD).
  • for instance, the Group-1 is a DIFF(DF) coding scheme, the Group-2 is a DIFF(DT-BACKWARD) coding scheme, and the Group-3 is a DIFF(DT-FORWARD) coding scheme. Yet, a coding scheme of the Group-4 is not decided.
  • although DIFF in the frequency axis is defined as one coding scheme (e.g., DIFF(DF)) only, definitions can likewise be made by discriminating it into two schemes according to a direction on the frequency axis.
  • FIGs. 17 to 19 are diagrams of examples to which DIFF coding scheme is applied.
  • the Group-1 and the Group-2 shown in FIG. 16 are taken as examples for the convenience of explanation, where the Group-2 follows the Group-1 on the time axis.
  • FIG. 18 shows results from calculating difference values of the Group-1. Since the Group-1 is coded by the DIFF(DF) coding scheme, difference values are calculated by Formula 2. Formula 2 means that a difference value from a previous parameter is found on the frequency axis.
  • FIG. 19 shows results from calculating difference values of the Group-2. Since the Group-2 is coded by the DIFF(DT-BACKWARD) coding scheme, difference values are calculated by Formula 3. Formula 3 means that a difference value from a parameter at the previous time is found on the time axis.
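  • The two DIFF directions used for the Group-1 and the Group-2 above can be sketched as follows; the band-by-timeslot values are invented for illustration, and the first element of the frequency run is kept as-is so a decoder has a starting point.

```python
def diff_df(params):
    """DIFF(DF): difference from the previous parameter on the frequency axis.
    'params' is a list ordered by frequency band; the first value is kept as-is."""
    return [params[0]] + [params[i] - params[i - 1] for i in range(1, len(params))]

def diff_dt_backward(current, previous):
    """DIFF(DT-BACKWARD): difference between the parameter at the current time and
    the parameter of the same band at the previous time."""
    return [c - p for c, p in zip(current, previous)]

# Illustrative parameter sets for two consecutive timeslots (one value per data band).
group1 = [3, 5, 6, 6, 4]        # earlier timeslot, coded with DIFF(DF)
group2 = [4, 5, 7, 6, 5]        # later timeslot, coded with DIFF(DT-BACKWARD)

print(diff_df(group1))                   # [3, 2, 1, 0, -2]
print(diff_dt_backward(group2, group1))  # [1, 0, 1, 0, 1]
```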
  • the present invention is characterized in compressing or reconstructing data by mixing various data coding schemes. So, in coding a specific group, it is necessary to select one coding scheme from at least three or more data coding schemes. And, identification information for the selected coding scheme should be delivered to a decoding part via bitstream.
  • a method of processing a signal according to one embodiment of the present invention includes the steps of obtaining data coding identification information and data-decoding data according to a data coding scheme indicated by the data coding identification information.
  • the data coding scheme includes a PBC coding scheme at least. And, the PBC coding scheme decodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
  • the data coding scheme further includes a DIFF coding scheme.
  • the DIFF coding scheme corresponds to one of DIFF-DF scheme and DIFF-DT scheme.
  • the DIFF-DT scheme corresponds to one of a forward time DIFF-DT(FORWARD) scheme and a backward time DIFF-DT(BACKWARD) scheme.
  • the signal processing method further includes the steps of obtaining entropy coding identification information and entropy-decoding the data using an entropy coding scheme indicated by the entropy coding identification information.
  • the entropy-decoded data is data-decoded by the data coding scheme.
  • the signal processing method further includes the step of decoding an audio signal using the data as parameters.
  • An apparatus for processing a signal according to one embodiment of the present invention includes an identification information obtaining part obtaining data coding identification information and a decoding part data-decoding data according to a data coding scheme indicated by the data coding identification information.
  • the data coding scheme includes a PBC coding scheme at least. And, the PBC coding scheme decodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of data-encoding data according to a data coding scheme and generating and transferring data coding identification information indicating the data coding scheme.
  • the data coding scheme includes a PBC coding scheme at least.
  • PBC coding scheme encodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
  • An apparatus for processing a signal according to another embodiment of the present invention includes an encoding part data-encoding data according to a data coding scheme and an outputting part generating and transferring data coding identification information indicating the data coding scheme.
  • the data coding scheme includes a PBC coding scheme at least.
  • PBC coding scheme encodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
  • FIG. 20 is a block diagram to explain a relation in selecting one of at least three coding schemes according to the present invention.
  • in case that the frequency of use of DIFF coding is 60 times among total 100 data groups, identification information to select the per-group coding type needs total 140 bits, resulting from first information (100 bits) + second information (40 bits).
  • FIG. 21 is a block diagram to explain a relation in selecting one of at least three coding schemes according to a related art.
  • the first data encoding part 53 having the lowest frequency of use is preferentially identified through the 100 bits of first information. So, the remaining 90 groups each need 1 bit of second information, i.e., total 90 bits more, to discriminate the second data encoding part 52 from the third data encoding part 51.
  • identification information to select the per-group coding type for total 100 data groups needs total 190 bits resulting from first information (100 bits) + second information (90 bits).
  • the present invention is characterized in utilizing different identification information instead of discriminating two coding scheme types similar to each other in frequency of use by the same identification information.
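  • The bit counts quoted above can be recomputed with the short sketch below, assuming 1 bit of first information per group and 1 bit of second information per group that still needs disambiguation; the per-scheme usage counts (DIFF 60, PBC 30, PCM 10) are assumptions consistent with the figures in the text.

```python
def hierarchical_id_bits(group_counts, first_level):
    """Total identification bits when the 'first_level' schemes are separated from
    the rest by 1-bit first information, and every remaining group spends one more
    bit of second information to pick between the other two schemes."""
    first_bits = sum(group_counts.values())                      # 1 bit per group
    second_bits = sum(n for scheme, n in group_counts.items()
                      if scheme not in first_level)              # 1 bit per leftover group
    return first_bits + second_bits

# Assumed usage over 100 groups: DIFF 60, PBC 30, PCM 10 (DIFF most frequent).
counts = {"DIFF": 60, "PBC": 30, "PCM": 10}

# Proposed ordering: identify the most frequent scheme (DIFF) first -> 100 + 40 bits.
print(hierarchical_id_bits(counts, first_level={"DIFF"}))   # 140

# Related-art ordering: identify the least frequent scheme (PCM) first -> 100 + 90 bits.
print(hierarchical_id_bits(counts, first_level={"PCM"}))    # 190
```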
  • FIG. 22 and FIG. 23 are flowcharts for the data coding selecting scheme according to the present invention, respectively.
  • FIG. 22 assumes that DIFF coding is the data coding scheme having the highest frequency of use, whereas FIG. 23 assumes that PBC coding has the highest frequency of use.
  • first of all, it is checked whether DIFF coding is applied. The check is performed by first information for identification.
  • as a result of the check, if it is not the DIFF coding, it is checked whether it is PBC coding (S20). This is performed by second information for identification.
  • in case that the frequency of use of DIFF coding is 60 times among total 100 times, identification information for a per-group coding type selection for the same 100 data groups needs total 140 bits of first information (100 bits) + second information (40 bits).
  • a presence or non-presence of PCM coding having lowest frequency of use is checked (S30). As mentioned in the foregoing description, the check is performed by first information for identification.
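  • A decoder-side reading of this hierarchy can be sketched as follows: a first identification bit tells whether the group uses DIFF, and only if it does not is a second bit read to tell PBC from PCM; the bit convention (1 = "yes") and the function name are assumptions for illustration.

```python
def select_coding_scheme(read_bit):
    """Hierarchically parse data coding identification information.
    'read_bit' is a callable returning the next identification bit (0 or 1)."""
    if read_bit():          # first information: is it DIFF coding?
        return "DIFF"
    if read_bit():          # second information: is it PBC coding?
        return "PBC"
    return "PCM"            # otherwise PCM coding

# Example: bitstream fragment 1, 0 1, 0 0 -> DIFF, PBC, PCM for three groups.
bits = iter([1, 0, 1, 0, 0])
read = lambda: next(bits)
print([select_coding_scheme(read) for _ in range(3)])   # ['DIFF', 'PBC', 'PCM']
```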
  • a method of processing a signal according to one embodiment of the present invention includes the steps of extracting identification information indicating a plurality of data coding schemes hierarchically and decoding data according to the data coding scheme corresponding to the identification information.
  • the identification information indicating a PBC coding scheme and a
  • DIFF coding scheme included in a plurality of the data coding schemes is extracted from different layers.
  • the data are obtained according to the data coding scheme using a reference value corresponding to a plurality of data and a difference value generated using the data.
  • the reference value is a pilot reference value or a difference reference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of extracting identification information indicating at least three or more data coding schemes hierarchically.
  • the identification information indicating two coding schemes having high frequency of use of the identification information is extracted from different layers.
  • a method of processing a signal according to a further embodiment of the present invention includes the steps of extracting identification information hierarchically according to frequency of use of the identification information indicating a data coding scheme and decoding data according to the data decoding scheme corresponding to the identification information.
  • the identification information is extracted in a manner of extracting first identification information and second identification information hierarchically.
  • the first identification information indicates whether it is a first data coding scheme and the second identification information indicates whether it is a second data coding scheme.
  • the first identification information indicates whether it is a DIFF coding scheme.
  • the second identification information indicates whether it is a pilot coding scheme or a PCM grouping scheme.
  • the first data coding scheme can be a PCM coding scheme.
  • the second data coding scheme can be a PBC coding scheme or a DIFF coding scheme.
  • the data are parameters
  • the signal processing method further includes the step of reconstructing an audio signal using the parameters.
  • An apparatus for processing a signal according to one embodiment of the present invention includes an identifier extracting part (e.g., 710 in FIG. 24) hierarchically extracting identification information discriminating a plurality of data coding schemes and a decoding part decoding data according to the data coding scheme corresponding to the identification information.
  • a method of processing a signal according to another further embodiment of the present invention includes the steps of encoding data according to a data coding scheme and generating identification information discriminating data coding schemes differing from each other in frequency of use used in encoding the data.
  • the identification information discriminates a PCM coding scheme and a PBC coding scheme from each other.
  • the identification information discriminates a PCM coding scheme and a DIFF coding scheme.
  • an apparatus for processing a signal according to another further embodiment of the present invention includes an encoding part encoding data according to a data coding scheme and an identification information generating part (e.g., 400 in FIG. 22) generating identification information discriminating data coding schemes differing from each other in frequency of use used in encoding the data.
  • regarding the PCM, PBC and DIFF of the present invention, it is able to freely select one of the three coding types for each group becoming a target of data coding. So, overall data coding brings a result of using the three coding scheme types in combination with each other. Yet, by considering the frequency of use of the three coding scheme types, either the DIFF coding scheme having the highest frequency of use or the set of the remaining two coding schemes (e.g., PCM and PBC) is primarily selected. Subsequently, one of the PCM and the PBC is secondarily selected. Yet, as mentioned in the foregoing description, this is to consider transmission efficiency of identification information but is not attributed to similarity of substantial coding schemes.
  • the PBC and DIFF are similar to each other in calculating a difference value. So, coding processes of the PBC and the DIFF are considerably overlapped with each other.
  • a step of reconstructing an original parameter from a difference value in decoding is defined as delta decoding and can be designed to be handled in the same step.
  • the present invention proposes grouping that handles data by binding prescribed data together for efficiency in coding.
  • since a pilot reference value is selected by a group unit, a grouping process needs to be completed as a step prior to executing the PBC coding.
  • the grouping is applied to DIFF coding in the same manner.
  • some schemes of the grouping according to the present invention are applicable to entropy coding as well, which will be explained in a corresponding description part later.
  • Grouping types of the present invention can be classified into external grouping and internal grouping with reference to an executing method of grouping.
  • grouping types of the present invention can be classified into domain grouping, data grouping and channel grouping with reference to a grouping target.
  • grouping types of the present invention can be classified into first grouping, second grouping and third grouping with reference to a grouping execution sequence.
  • grouping types of the present invention can be classified into single grouping and multiple grouping with reference to a grouping execution count.
  • the grouping according to the present invention is completed in a manner that various grouping schemes are overlapped with each other in use or used in combination with each other.
  • Internal grouping means that execution of grouping is internally carried out. If internal grouping is carried out in general, a previous group is internally re-grouped to generate a new group or divided groups.
  • FIG. 24 is a diagram to explain internal grouping according to the present invention.
  • referring to FIG. 24, internal grouping is carried out by frequency domain unit (hereinafter named band), for example.
  • in case that sampling data passes through a specific filter, e.g., a QMF (quadrature mirror filter), a plurality of sub-bands are generated.
  • first frequency grouping is performed to generate first group bands that can be called parameter bands.
  • the first frequency grouping is able to generate parameter bands by binding sub-bands together irregularly. So, it is able to configure sizes of the parameter bands non-equivalently. Yet, according to a coding purpose, it is able to configure the parameter bands equivalently.
  • the step of generating the sub-bands can be classified as a sort of grouping.
  • second frequency grouping is performed on the generated parameter bands to generate second group bands that may be called data bands.
  • the second frequency grouping is able to generate data bands by binding a uniform number of parameter bands together.
  • a group reference value is decided by taking grouped parameter bands as one group and a difference value is then calculated.
  • detailed operations of the DIFF are the same as explained in the foregoing description.
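  • The two internal frequency groupings described above can be sketched as follows: sub-bands are first bound, possibly non-equivalently, into parameter bands, and the parameter bands are then bound by a uniform number into data bands; the band counts and bundling sizes are illustrative assumptions, not values mandated by the text.

```python
def group_bands(items, sizes):
    """Bind consecutive items into groups whose sizes are given by 'sizes'."""
    groups, start = [], 0
    for size in sizes:
        groups.append(items[start:start + size])
        start += size
    assert start == len(items), "sizes must cover all items"
    return groups

# First frequency grouping: 20 sub-bands bound irregularly into 10 parameter bands.
subbands = list(range(20))
parameter_bands = group_bands(subbands, sizes=[1, 1, 1, 1, 2, 2, 2, 3, 3, 4])

# Second frequency grouping: parameter bands bound by a uniform number (here 2)
# into data bands; one group reference value would then be chosen per data band.
data_bands = group_bands(parameter_bands, sizes=[2] * 5)

print(len(parameter_bands), len(data_bands))   # 10 5
```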
  • External grouping means a case that execution of grouping is externally carried out.
  • FIG. 25 is a diagram to explain external grouping according to the present invention.
  • external grouping is carried out by time domain unit (hereinafter named timeslot), for example. So, an external grouping scheme may correspond to a sort of domain grouping occasionally.
  • First time grouping is performed on a frame including sampling data to generate first group timeslots.
  • FIG. 25 exemplarily shows that eight timeslots are generated.
  • the first time grouping has a meaning of dividing a frame into timeslots in equal size as well.
  • At least one of the timeslots generated by the first time grouping is selected.
  • FIG. 25 shows a case that timeslots 1, 4, 5, and 8 are selected. According to a coding scheme, it is able to select all of the timeslots in the selecting step.
  • since a timeslot excluded from the selection and rearrangement is excluded from the final group formation, it is also excluded from the PBC or DIFF coding targets.
  • Second time grouping is performed on the selected timeslots to configure a group handled together on a final time axis.
  • timeslots 1 and 2 or timeslots 3 and 4 can configure one group, which is called a timeslot pair.
  • timeslots 1, 2 and 3 can configure one group, which is called a timeslot triple.
  • a single timeslot may exist without configuring a group with other timeslots.
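  • The external (time) grouping steps above can be sketched as follows: a frame is split into timeslots, a subset is selected, and the selected timeslots are bound into timeslot pairs; the selected slot indexes follow the FIG. 25 example quoted above, while the handling of a leftover slot is an assumption.

```python
def external_time_grouping(num_timeslots, selected, pair_size=2):
    """First time grouping: split a frame into 'num_timeslots' slots.
    Then keep only the 'selected' slots and bind them into groups of 'pair_size'
    (timeslot pairs); a trailing slot that cannot form a pair stays single."""
    timeslots = list(range(1, num_timeslots + 1))
    kept = [t for t in timeslots if t in selected]      # excluded slots are also
                                                        # excluded from PBC/DIFF targets
    groups = [kept[i:i + pair_size] for i in range(0, len(kept), pair_size)]
    return groups

# FIG. 25 style example: 8 timeslots, slots 1, 4, 5 and 8 selected, paired in twos.
print(external_time_grouping(8, selected={1, 4, 5, 8}))   # [[1, 4], [5, 8]]
```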
  • Multiple grouping means a grouping scheme that generates a final group by mixing the internal grouping, the external grouping and various kinds of other groupings together.
  • the individual grouping schemes according to the present invention can be applied by being overlapped with each other or in combination with each other. And, the multiple grouping is utilized as a scheme to raise efficiency of various coding schemes.
  • FIG. 26 is a diagram to explain multiple grouping according to the present invention, in which internal grouping and external grouping are mixed.
  • final grouped bands 64 are generated after internal grouping has been completed in the frequency domain. And, final timeslots 61, 62 and 63 are generated after external grouping has been completed in the time domain.
  • reference numbers 61a, 61b, 62a, 62b and 63 indicate data sets, respectively.
  • two data sets 61a and 61b or another two data sets 62a and 62b are able to configure a pair by external grouping.
  • the pair of the data sets is called data pair.
  • a pilot reference value (e.g., P1, P2 or P3) is selected for each finally completed data pair 61 or 62 and for each data set 63 not configuring a data pair.
  • the PBC coding is then executed using the selected pilot reference values.
  • a DIFF coding type is decided for each of the data sets 61a, 61b, 62a, 62b and 63.
  • a DIFF direction should be decided for each of the data sets and is decided as one of DIFF-DF and DIFF-DT.
  • a process for executing the DIFF coding according to the decided DIFF coding scheme is the same as mentioned in the foregoing description.
  • each of the data sets 61a and 61b configuring a data pair has the same data band number.
  • each of the data sets 62a and 62b configuring a data pair has the same data band number.
  • the data sets belonging to different data pairs (e.g., 61a and 62a, respectively) may differ from each other in the data band number. This means that different internal grouping can be applied to each data pair.
  • a data band number after the second grouping corresponds to a prescribed multiple of a data band number after the first grouping. This is because each data set configuring a data pair has the same data band number.
  • FIG. 27 and FIG. 28 are diagrams to explain mixed grouping according to another embodiments of the present invention, respectively.
  • FIG. 27 and FIG. 28 intensively show mixing of internal groupings. So, it is apparent that external grouping is performed or can be performed in FIG. 27 or FIG. 28.
  • FIG. 27 shows a case that internal grouping is performed again after data bands have been generated by the second frequency grouping.
  • the data bands generated by the second frequency grouping are divided into low frequency band and high frequency band.
  • a case of separating the low frequency band and the high frequency band to utilize is called dual mode.
  • data coding is performed by taking the finally generated low or high frequency band as one group. For instance, pilot reference values P1 and P2 are generated for the low and high frequency bands, respectively, and PBC coding is then performed within the corresponding frequency band.
  • the dual mode is applicable according to characteristics per channel. So, this is called channel grouping. And, the dual mode is differently applicable according to a data type as well.
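  • A minimal sketch of the dual mode just described: the data bands are split into a low-frequency part and a high-frequency part, and a separate pilot reference value (P1, P2) is chosen for each part before PBC coding is applied within it; the split point and the choice of the rounded mean as pilot are illustrative assumptions.

```python
def dual_mode_pbc(band_values, split):
    """Split band values into low/high frequency parts and PBC-code each part with
    its own pilot reference value (here: the rounded mean of the part)."""
    result = {}
    for name, part in (("low", band_values[:split]), ("high", band_values[split:])):
        pilot = round(sum(part) / len(part))            # per-part pilot reference value
        result[name] = (pilot, [v - pilot for v in part])
    return result

# Illustrative values for 8 data bands, split into 5 low and 3 high bands.
print(dual_mode_pbc([10, 11, 9, 10, 12, 3, 2, 4], split=5))
# {'low': (10, [0, 1, -1, 0, 2]), 'high': (3, [0, -1, 1])}
```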
  • FIG. 28 shows a case that internal grouping is performed again after data bands have been generated by the aforesaid second frequency grouping.
  • the data bands generated by the second frequency grouping are divided into low frequency band and high frequency band.
  • in some cases, only the low frequency band is utilized and the high frequency band needs to be discarded.
  • a case of grouping only the low frequency band for use is called a low frequency channel (LFE) mode.
  • for instance, a pilot reference value P1 is generated for the low frequency band, and PBC coding is then performed within the corresponding low frequency band. Yet, it is possible to generate new data bands by performing internal grouping on the selected low frequency band. This is to group the low frequency band more intensively for representation.
  • the low frequency channel (LFE) mode is applied according to a low frequency channel characteristic and can be called channel grouping.
  • Grouping can be classified into domain grouping and data grouping with reference to targets of the grouping.
  • the domain grouping means a scheme of grouping units of domains on a specific domain (e.g., frequency domain or time domain). And, the domain grouping can be executed through the aforesaid internal grouping and/or external grouping.
  • the data grouping means a scheme of grouping data itself.
  • the data grouping can be executed through the aforesaid internal grouping and/or external grouping.
  • grouping can be performed to be usable in entropy coding.
  • the data grouping is used in entropy coding real data in a finally completed grouping state shown in FIG. 26. Namely, data are processed in a manner that two data neighboring to each other in one of frequency direction and time direction are bound together.
  • a method of processing a signal includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group and a difference value corresponding to the group reference value through first grouping and internal grouping for the first grouping and obtaining the data using the group reference value and the difference value.
  • the present invention is characterized in that a number of the data grouped by the first grouping is greater than a number of the data grouped by the internal grouping.
  • the group reference value can be a pilot reference value or a difference reference value.
  • the method according to one embodiment of the present invention further includes the step of decoding at least one of the group reference value and the difference value.
  • the pilot reference value is decided per the group.
  • numbers of the data included in internal groups through the internal grouping are set in advance, respectively. In this case, the numbers of the data included in the internal groups are different from each other.
  • the first grouping and the internal grouping are performed on the data on a frequency domain.
  • the frequency domain may correspond to one of a hybrid domain, a parameter band domain, a data band domain and a channel domain.
  • a first group by the first grouping includes a plurality of internal groups by the internal grouping.
  • the frequency domain of the present invention is discriminated by a frequency band.
  • the frequency band becomes sub-bands by the internal grouping.
  • the sub-bands become parameter bands by the internal grouping.
  • the parameter bands become data bands by the internal grouping. In this case, a number of the parameter bands can be limited to maximum 28. And, the parameter bands are grouped by 2, 5 or 10 into one data band.
  • An apparatus for processing a signal includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group and a difference value corresponding to the group reference value through first grouping and internal grouping for the first grouping and a data obtaining part obtaining the data using the group reference value and the difference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and internal grouping for the first grouping and the data and transferring the generated difference value.
  • an apparatus for processing a signal includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and internal grouping for the first grouping and the data and an outputting part transferring the generated difference value.
  • a method of processing a signal includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through grouping and a difference value corresponding to the group reference value and obtaining the data using the group reference value and the difference value.
  • the group reference value can be one of a pilot reference value and a difference reference value.
  • the grouping may correspond to one of internal grouping and external grouping.
  • the grouping may correspond to one of domain grouping and data grouping.
  • the data grouping is performed on a domain group.
  • a time domain included in the domain grouping includes at least one of a timeslot domain, a parameter set domain and a data set domain.
  • a frequency domain included in the domain grouping may include at least one of a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain and a channel domain.
  • One difference reference value will be set from a plurality of the data included in the group. And, at least one of a grouping count, a grouping range and a presence or non-presence of the grouping is decided.
  • An apparatus for processing a signal includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping and the data and transferring the generated difference value.
  • An apparatus for processing a signal includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping and the data and an outputting part transferring the generated difference value.
  • a method of processing a signal includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and a first difference value corresponding to the group reference value and obtaining the data using the group reference value and the first difference value.
  • the group reference value may include a pilot reference value or a difference reference value.
  • the method further includes the step of decoding at least one of the group reference value and the first difference value. And, the first pilot reference value is decided per the group.
  • the method further includes the steps of obtaining a second pilot reference value corresponding to a plurality of the first pilot reference values and a second difference value corresponding to the second pilot reference value and obtaining the first pilot reference value using the second pilot reference value and the second difference value.
  • the second grouping may include external or internal grouping for the first grouping.
  • the grouping is performed on the data on at least one of a time domain and a frequency domain.
  • the grouping is a domain grouping that groups at least one of the time domain and the frequency domain.
  • the time domain may include a timeslot domain, a parameter set domain or a data set domain.
  • the frequency domain may include a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain or a channel domain. And, the grouped data is an index or parameter.
  • the first difference value is entropy-decoded using an entropy table indicated by the index included in one group through the first grouping. And, the data is obtained using the group reference value and the entropy-decoded first difference value.
  • the first difference value and the group reference value are entropy-decoded using an entropy table indicated by the index included in one group through the first grouping. And, the data is obtained using the entropy-decoded group reference value and the entropy-decoded first difference value.
  • An apparatus for processing a signal includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and the data and transferring the generated difference value.
  • An apparatus for processing a signal includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and the data and an outputting part transferring the generated difference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and a difference value corresponding to the group reference value and obtaining the data using the group reference value and the difference value.
  • a first data number corresponding to a number of the data grouped by the first grouping is smaller than a second data number corresponding to a number of the data grouped by the external grouping.
  • a multiplication relation exists between the first data number and the second data number.
  • the group reference value may include a pilot reference value or a difference reference value.
  • the method further includes the step of decoding at least one of the group reference value and the difference value.
  • the pilot reference value is decoded per the group.
  • the grouping is performed on the data on at least one of a time domain and a frequency domain.
  • the time domain may include a timeslot domain, a parameter set domain or a data set domain.
  • the frequency domain may include a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain or a channel domain.
  • the method further includes the step of reconstructing the audio signal using the obtained data as parameters.
  • the external grouping may include paired parameters.
  • An apparatus for processing a signal includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
  • a method of processing a signal according to a further embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and the data and transferring the generated difference value.
  • an apparatus for processing a signal includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and the data and an outputting part transferring the generated difference value.
  • a method of processing a signal includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and a difference value corresponding to the group reference value and obtaining the data using the group reference value and the difference value.
  • a number of the data included in the internal grouping is smaller than a number of the data included in the data grouping.
  • the data correspond to parameters.
  • the internal grouping is performed on a plurality of the data-grouped data entirely.
  • the internal grouping can be performed per a parameter band.
  • the internal grouping can be performed on a plurality of the data-grouped data partially. In this case, the internal grouping can be performed per a channel of each of a plurality of the data-grouped data.
  • the group reference value can include a pilot reference value or a difference reference value.
  • the method may further include the step of decoding at least one of the group reference value and the difference value.
  • the pilot reference value is decided per the group.
  • the data grouping and the internal grouping are performed on the data on a frequency domain.
  • the frequency domain may include one of a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain and a channel domain.
  • grouping information for at least one of the data grouping and the internal grouping is used.
  • the grouping information includes at least one of a position of each group, a number of each group, a presence or non-presence of applying the group reference value per a group, a number of the group reference values, a codec scheme of the group reference value and a presence or non-presence of obtaining the group reference value.
  • An apparatus for processing a signal includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and the data and transferring the generated difference value.
  • an apparatus for processing a signal includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and the data and an outputting part transferring the generated difference value.
  • Entropy coding means a process for performing variable length coding on a result of the data coding.
  • entropy coding processes occurrence probability of specific data in a statistical way. For instance, transmission efficiency is raised overall in a manner of allocating less bits to data having high frequency of occurrence in probability and more bits to data having low frequency of occurrence in probability.
  • the present invention intends to propose an efficient entropy coding method, which is different from the general entropy coding, interconnected with the PBC coding and the DIFF coding.
  • entropy table is defined as a codebook. And, an encoding part and a decoding part use the same table.
  • the present invention proposes an entropy coding method and a unique entropy table to process various kinds of data coding results efficiently.
  • Entropy coding of the present invention is classified into two types. One is to derive one index (index 1) through one entropy table, and the other is to derive two consecutive indexes (index 1 and index 2) through one entropy table.
  • the former is named 1D (one-dimensional) entropy coding and the latter is named 2D (two-dimensional) entropy coding.
  • FIG. 29 is an exemplary diagram of 1D and 2D entropy tables according to the present invention.
  • an entropy table of the present invention basically includes an index field, a length field and a codeword field.
  • in encoding specific data (e.g., a pilot reference value, a difference value, etc.), the index corresponding to the data is found in the entropy table, and the codeword corresponding to the index turns into a bitstream and is then transferred to a decoding part.
  • An entropy decoding part having received the codeword decides the entropy table used for the corresponding data and then derives an index value using the corresponding codeword and the bit length configuring the codeword within the decided table.
  • the present invention represents a codeword as hexadecimal.
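  • To make the table fields concrete, the sketch below models an entropy table as index → (length, codeword) entries, encodes by lookup, and decodes by matching a prefix of the received bits against the codewords of the chosen table; the entries are invented placeholders (bit strings rather than the hexadecimal representation used in FIG. 29), not codewords from any normative codebook.

```python
# Toy entropy table: index -> (bit length, codeword as a bit string).
# The entries are illustrative only, not taken from any normative codebook.
TABLE = {
    0: (1, "0"),
    1: (2, "10"),
    2: (3, "110"),
    3: (3, "111"),
}

def entropy_encode(indexes, table=TABLE):
    """Concatenate the codewords of the given indexes into one bit string."""
    return "".join(table[i][1] for i in indexes)

def entropy_decode(bits, count, table=TABLE):
    """Derive 'count' indexes by prefix-matching codewords of the decided table."""
    reverse = {code: idx for idx, (_length, code) in table.items()}
    out, pos = [], 0
    while len(out) < count:
        for end in range(pos + 1, len(bits) + 1):
            if bits[pos:end] in reverse:          # prefix-free code, first match wins
                out.append(reverse[bits[pos:end]])
                pos = end
                break
        else:
            raise ValueError("no codeword matches the remaining bits")
    return out

indexes = [2, 0, 1, 3]
bits = entropy_encode(indexes)                    # '110010111'
assert entropy_decode(bits, len(indexes)) == indexes
print(bits)
```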
  • a positive sign (+) or a negative sign (-) of an index value derived by 1D or 2D entropy coding is omitted. So, it is necessary to assign the sign after completion of the 1D or 2D entropy coding.
  • the sign is assigned differently according to 1D or 2D.
  • the 1D entropy table, in which indexes are derived one by one, is usable for all data coding results. Yet, the 2D entropy table, in which two indexes are derived at a time, has a restricted use for specific cases.
  • the 2D entropy table has a restricted use in part; in particular, its use is restricted for a pilot reference value calculated as a result of PBC coding.
  • entropy coding of the present invention is characterized in utilizing a most efficient entropy coding scheme in a manner that entropy coding is interconnected with the result of data coding. This is explained in detail as follows.
  • FIG. 30 is an exemplary diagram of two methods for 2D entropy coding according to the present invention.
  • 2D entropy coding is a process for deriving two indexes neighboring to each other. So, the 2D entropy coding can be discriminated according to a direction of the two consecutive indexes.
  • in particular, 2D entropy coding is classified into 2D-Frequency Pairing (hereinafter abbreviated 2D-FP) and 2D-Time Pairing (hereinafter abbreviated 2D-TP).
  • the 2D-FP and the 2D-TP are able to configure separate index tables, respectively.
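  • The difference between the two pairing directions can be sketched over a small timeslot-by-band grid of already data-coded indexes: 2D-FP binds two indexes adjacent along the frequency axis, while 2D-TP binds two indexes of the same band in neighboring timeslots; the grid values and the handling of odd counts are illustrative assumptions.

```python
def pair_fp(grid):
    """2D-FP: pair indexes that neighbor each other along the frequency axis
    (within one timeslot). 'grid[t][b]' is the index at timeslot t, band b."""
    return [(col[b], col[b + 1]) for col in grid for b in range(0, len(col) - 1, 2)]

def pair_tp(grid):
    """2D-TP: pair indexes of the same band in two neighboring timeslots."""
    return [(grid[t][b], grid[t + 1][b])
            for t in range(0, len(grid) - 1, 2)
            for b in range(len(grid[t]))]

# Two timeslots x four bands of (already data-coded) indexes.
grid = [[1, 0, 2, 1],    # timeslot 0
        [0, 0, 1, 3]]    # timeslot 1

print(pair_fp(grid))   # [(1, 0), (2, 1), (0, 0), (1, 3)]
print(pair_tp(grid))   # [(1, 0), (0, 0), (2, 1), (1, 3)]
```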
  • An encoder has to decide a most efficient entropy coding scheme according to a result of data coding.
  • a reference value corresponding to a plurality of data and a difference value corresponding to the reference value are obtained.
  • the difference value is entropy-decoded.
  • the data is then obtained using the reference value and the entropy-decoded difference value.
  • the method further includes the step of entropy-decoding the reference value. And, the method may further include the step of obtaining the data using the entropy- decoded reference value and the entropy-decoded difference value.
  • the method can further include the step of obtaining entropy coding identification information. And, the entropy coding is performed according to an entropy coding scheme indicated by the entropy coding identification information.
  • the entropy coding scheme is one of a 1D coding scheme and a multi-dimensional coding scheme (e.g., a 2D coding scheme).
  • the multi-dimensional coding scheme is one of a frequency pair (FP) coding scheme and a time pair (TP) coding scheme.
  • the reference value may include one of a pilot reference value and a difference reference value.
  • the signal processing method can further include the step of reconstructing the audio signal using the data as parameters.
  • An apparatus for processing a signal includes a value obtaining part obtaining a reference value corresponding to a plurality of data and a difference value corresponding to the reference value, an entropy decoding part entropy-decoding the difference value, and a data obtaining part obtaining the data using the reference value and the entropy-decoded difference value.
  • the value obtaining part is included in the aforesaid bitstream demultiplexing part 60 and the data obtaining part is included within the aforesaid data decoding part 91 or 92.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a reference value corresponding to a plurality of data and the data, entropy-encoding the generated difference value, and outputting the entropy-encoded difference value.
  • the reference value is entropy-encoded.
  • the entropy-encoded reference value is transferred.
  • the method further includes the step of generating an entropy coding scheme used for the entropy encoding. And, the generated entropy coding scheme is transferred.
  • An apparatus for processing a signal includes a value generating part generating a difference value using a reference value corresponding to a plurality of data and the data, an entropy encoding part entropy-encoding the generated difference value, and an outputting part outputting the entropy-encoded difference value.
  • the value generating part is included within the aforesaid data encoding part 31 or 32.
  • the outputting part is included within the aforesaid bitstream multiplexing part 50.
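A corresponding encoder-side sketch is given below. Choosing the pilot as the rounded mean of the group is merely one illustrative option, not the method mandated by the embodiment; any pilot that keeps the differences small could be used.

```python
# Encoder-side counterpart, as a sketch only: pick a pilot reference value for
# a group of parameters and emit the per-parameter differences.

def pbc_encode(params):
    pilot = round(sum(params) / len(params))     # illustrative pilot choice
    diffs = [p - pilot for p in params]          # pilot difference values
    return pilot, diffs

if __name__ == "__main__":
    params = [4, 5, 7, 2, 6]
    pilot, diffs = pbc_encode(params)
    print(pilot, diffs)                          # 5 [-1, 0, 2, -3, 1]
    assert [pilot + d for d in diffs] == params  # lossless round trip
```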
  • a method of processing a signal includes the steps of obtaining data corresponding to a plurality of data coding schemes, deciding an entropy table for at least one of a pilot reference value and a pilot difference value included in the data using an entropy table identifier unique to the data coding scheme, and entropy-decoding at least one of the pilot reference value and the pilot difference value using the entropy table.
  • the entropy table identifier is unique to one of a pilot coding scheme, a frequency differential coding scheme and a time differential coding scheme.
  • the entropy table identifier is unique to each of the pilot reference value and the pilot difference value.
  • the entropy table is unique to the entropy table identifier and includes one of a pilot table, a frequency differential table and a time differential table.
  • the entropy table is not unique to the entropy table identifier and one of a frequency differential table and a time differential table can be shared.
  • the entropy table corresponding to the pilot reference value is able to use a frequency differential table.
  • the pilot reference value is entropy-decoded by the 1D entropy coding scheme.
  • the entropy coding scheme includes a 1D entropy coding scheme and a 2D entropy coding scheme.
  • the 2D entropy coding scheme includes a frequency pair (2D-FP) coding scheme and a time pair (2D-TP) coding scheme.
  • the present method is able to reconstruct the audio signal using the data as parameters.
  • An apparatus for processing a signal includes a value obtaining part obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value and an entropy decoding part entropy-decoding the pilot difference value. And, the apparatus includes a data obtaining part obtaining the data using the pilot reference value and the entropy-decoded pilot difference value.
  • a method of processing a signal according to a further embodiment of the present invention includes the steps of generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, entropy-encoding the generated pilot difference value, and transferring the entropy-encoded pilot difference value.
  • a table used for the entropy encoding may include a pilot dedicated table.
  • the method further includes the step of entropy-encoding the pilot reference value.
  • the method further includes the step of generating an entropy coding scheme used for the entropy encoding. And, the generated entropy coding scheme is transferred.
  • An apparatus for processing a signal includes a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, an entropy encoding part entropy-encoding the generated pilot difference value, and an outputting part transferring the entropy-encoded pilot difference value.
  • the present invention has proposed three kinds of data coding schemes. Yet, entropy coding is not performed on data coded according to the PCM scheme. Relations between PBC coding and entropy coding and relations between DIFF coding and entropy coding are separately explained in the following description.
  • FIG. 31 is a diagram of an entropy coding scheme for PBC coding result according to the present invention.
  • a group to which PBC coding will be applied is decided.
  • In FIG. 31, for convenience of explanation, a case of a pair on a time axis and a case of a non-pair on a time axis are taken as examples.
  • Entropy coding after completion of PBC coding is explained as follows.
  • In the case of a non-pair, 1D entropy coding is performed on the one pilot reference value becoming an entropy coding target, and 1D entropy coding or 2D-FP entropy coding can be performed on the rest of the difference values.
  • since the present invention relates, for example, to a case in which one pilot reference value is generated for one group, 1D entropy coding should be performed. Yet, in another embodiment of the present invention, if at least two pilot reference values are generated from one group, it may be possible to perform 2D entropy coding on consecutive pilot reference values.
  • In the case of a pair, 1D entropy coding is performed on the one pilot reference value becoming an entropy coding target, and 1D entropy coding, 2D-FP entropy coding or 2D-TP entropy coding can be performed on the rest of the difference values.
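A small sketch of this scheme selection (illustrative only, not normative): the pilot reference value is always a 1D target, while the candidate schemes for the difference values depend on whether the group forms a pair on the time axis.

```python
# Sketch of the scheme selection described above. The labels are shorthand for
# the entropy coding schemes named in the text; an encoder would pick the
# cheapest candidate for the difference values.

def candidate_schemes(is_time_pair):
    schemes = {"pilot_reference": ["1D"]}
    diff_schemes = ["1D", "2D-FP"]
    if is_time_pair:                 # 2D-TP only applies to a pair on the time axis
        diff_schemes.append("2D-TP")
    schemes["pilot_differences"] = diff_schemes
    return schemes

if __name__ == "__main__":
    print(candidate_schemes(is_time_pair=False))
    print(candidate_schemes(is_time_pair=True))
```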
  • FIG. 32 is a diagram of entropy coding scheme for DIFF coding result according to the present invention.
  • In FIG. 32, for convenience of explanation, a case of a pair on a time axis and a case of a non-pair on a time axis are taken as examples. And, FIG. 32 shows a case in which a data set as a unit of data coding is discriminated into DIFF-DT in the time axis direction and DIFF-DF in the frequency axis direction according to the DIFF coding direction.
  • Entropy coding after completion of DIFF coding is explained as follows. First of all, a case in which DIFF coding is performed on non-pairs is explained. In the case of non-pairs, one data set exists on a time axis. And, the data set may become DIFF-DF or DIFF-DT according to the DIFF coding direction.
  • a reference value becomes a parameter value within a first band 82a.
  • 1D entropy coding is performed on the reference value, and 1D entropy coding or 2D-FP entropy coding can be performed on the rest of the difference values.
  • a data set to find a difference value may be a neighbor data set failing to configure a data pair or a data set within another audio frame.
  • In the case of DIFF-DF/DF (87), i.e., if each of the data sets configuring a pair is DIFF-DF, all available entropy coding schemes are executable.
  • each reference value within the corresponding data set becomes a parameter value within a first band 82b or 82c, and 1D entropy coding is performed on the reference value.
  • 1D entropy coding or 2D-FP entropy coding can be performed on the rest of the difference values.
  • if 2D-FP entropy coding is performed, 1D entropy coding should be performed on a parameter value within a last band 83b or 83c failing to configure a pair. And, since the two data sets configure a pair, 2D-TP entropy coding can be performed. In this case, 2D-TP entropy coding is sequentially performed on the bands ranging from the band next to the first band 82b or 82c within the corresponding data set to the last band.
  • In the case of DIFF-DT/DT, i.e., if each of the data sets configuring a pair is DIFF-DT, all available entropy coding schemes are likewise executable.
  • 1D entropy coding or 2D-FP entropy coding can be performed on all the difference values within each of the data sets.
  • 2D-TP entropy coding is executable.
  • 2D-TP entropy coding is sequentially performed on bands ranging from a first band to a last band within the corresponding data set.
  • FIG. 32 shows an example of DIFF-DF/DT.
  • all entropy coding schemes applicable according to the corresponding coding types can basically be performed on each of the data sets.
  • 1D entropy coding is performed on a parameter value within a first band 82d, which is a reference value within the corresponding data set (DIFF-DF). And, 1D entropy coding or 2D-FP entropy coding can be performed on the rest of the difference values.
  • even if 2D-FP entropy coding is performed within the corresponding data set (DIFF-DF), after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last band 83d failing to configure a pair.
  • 2D-TP entropy coding is also executable. In this case, 2D-TP entropy coding is sequentially performed on the bands ranging from the band next to the first band (i.e., excluding the first band 82d) to the last band. If the 2D-TP entropy coding is performed, a last band failing to configure a pair is not generated. Once the entropy coding scheme per data is decided, a codeword is generated using a corresponding entropy table.
  • 2-3. Entropy Coding and Grouping
  • a decoding part receives one codeword, resulting from grouping two indexes, included in the bitstream and then extracts the two index values using the applied entropy table.
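A toy example of such grouping is sketched below; the two-index table and its codewords are invented for illustration and are not an actual entropy table of the embodiment.

```python
# Toy illustration of grouping two indexes into one codeword and back.
TOY_2D_TABLE = {          # (index, index) -> codeword bits (prefix-free)
    (0, 0): "0",
    (0, 1): "10",
    (1, 0): "110",
    (1, 1): "111",
}
TOY_2D_DECODE = {bits: pair for pair, bits in TOY_2D_TABLE.items()}

def encode_pairs(pairs):
    return "".join(TOY_2D_TABLE[p] for p in pairs)

def decode_pairs(bitstream):
    pairs, code = [], ""
    for bit in bitstream:            # prefix-free codes, so greedy matching works
        code += bit
        if code in TOY_2D_DECODE:
            pairs.append(TOY_2D_DECODE[code])
            code = ""
    return pairs

if __name__ == "__main__":
    pairs = [(0, 1), (1, 1), (0, 0)]
    bits = encode_pairs(pairs)
    print(bits, decode_pairs(bits))  # one codeword per grouped index pair
```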
  • a method of processing a signal according to one embodiment of the present invention includes the steps of obtaining difference information, entropy-decoding the difference information according to an entropy coding scheme including time grouping and frequency grouping, and data-decoding the difference information according to a data decoding scheme including a pilot difference, a time difference and a frequency difference. And, detailed relations between data coding and entropy coding are the same as explained in the foregoing description.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of obtaining a digital signal, entropy-decoding the digital signal according to an entropy coding scheme, and data-decoding the entropy-decoded digital signal according to one of a plurality of data coding schemes including a pilot coding scheme at least.
  • the entropy coding scheme can be decided according to the data coding scheme.
  • An apparatus for processing a signal according to another embodiment of the present invention includes a signal obtaining part obtaining a digital signal, an entropy decoding part entropy-decoding the digital signal according to an entropy coding scheme, and a data decoding part data-decoding the entropy-decoded digital signal according to one of a plurality of data coding schemes including a pilot coding scheme at least.
  • a method of processing a signal according to a further embodiment of the present invention includes the steps of data-encoding a digital signal by a data coding scheme, entropy-encoding the data-encoded digital signal by an entropy coding scheme, and transferring the entropy-encoded digital signal.
  • the entropy coding scheme can be decided according to the data coding scheme.
  • an apparatus for processing a signal includes a data encoding part data-encoding a digital signal by a data coding scheme and an entropy encoding part entropy-encoding the data-encoded digital signal by an entropy coding scheme. And, the apparatus may further include an outputting part transferring the entropy-encoded digital signal.
  • An entropy table for entropy coding is automatically decided according to a data coding scheme and a type of data becoming an entropy coding target.
  • For instance, if a data type is a CLD parameter, a data coding scheme is PBC, and an entropy coding target is a pilot reference value, a 1D entropy table to which a table name hcodPilot_CLD is given is used for entropy coding.
  • if a data type is a CPC parameter, a data coding scheme is DIFF-DF, and an entropy coding target is a first band value, a 1D entropy table to which a table name hcodFirstband_CPC is given is used for entropy coding.
  • if a data type is an ICC parameter, a data coding scheme is PBC, and entropy coding is performed by 2D-TP, a 2D-PC/TP entropy table to which a table name hcod2D_ICC_PC_TP_LL is given is used for entropy coding. In this case, LL indicates a largest absolute value (LAV).
  • if a data type is an ICC parameter, a data coding scheme is DIFF-DF, and entropy coding is performed by 2D-FP, a 2D-FP entropy table to which a table name hcod2D_ICC_DF_FP_LL is given is used for entropy coding.
  • entropy tables for data having attributes similar to each other can be shared. As a representative example, if a data type is ADG or ATD, it is able to apply the CLD entropy table. And, a first band entropy table can be applied to a pilot reference value of PBC coding.
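The table-selection rule suggested by the examples above can be sketched as follows. The hcod... naming pattern follows the quoted table names, but the mapping logic, the hcod1D_... fallback name and the sharing rules are simplified assumptions for illustration.

```python
# Sketch of a table-selection rule matching the examples above; not the
# normative table list of the embodiment.

def select_entropy_table(data_type, coding, target, scheme, lav=None):
    if data_type in ("ADG", "ATD"):          # shared: reuse the CLD tables
        data_type = "CLD"
    if coding == "PBC" and target == "pilot_reference":
        # CLD keeps a dedicated pilot table; other types may share the
        # first-band table of the same data type (a simplifying assumption).
        return "hcodPilot_CLD" if data_type == "CLD" \
            else f"hcodFirstband_{data_type}"
    if scheme == "1D":
        return (f"hcodFirstband_{data_type}" if target == "first_band"
                else f"hcod1D_{data_type}")  # hypothetical name for other 1D targets
    direction = {"PBC": "PC", "DIFF-DF": "DF", "DIFF-DT": "DT"}[coding]
    pairing = "FP" if scheme == "2D-FP" else "TP"
    return f"hcod2D_{data_type}_{direction}_{pairing}_{lav}"

if __name__ == "__main__":
    print(select_entropy_table("ICC", "PBC", "difference", "2D-TP", lav=3))
    print(select_entropy_table("CPC", "DIFF-DF", "first_band", "1D"))
```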
  • FIG. 33 is a diagram to explain a method of selecting an entropy table according to the present invention.
  • a plurality of entropy tables are shown in (a) of FIG. 33, and a table to select the entropy tables is shown in (b) of FIG. 33.
  • the entropy tables may include entropy tables (e.g., tables 1 to 4) applicable in case that a data type is xxx, entropy tables (e.g., tables 5 to 8) applicable in case that a data type is yyy, PBC dedicated entropy tables (e.g., tables k to k+1), escape entropy tables (e.g., tables n-2 to n-1), and an LAV index entropy table (e.g., table n).
  • if a table is configured by giving a codeword to every index that can occur in the corresponding data, the size of the table increases considerably. And, it is inconvenient to manage indexes that are unnecessary or barely occur. In the case of a 2D entropy table, these problems cause even more inconvenience because of the far larger number of index combinations. To solve these problems, the largest absolute value (LAV) is used.
  • at least one LAV having a high probability of occurrence is selected within the range and is configured into a separate table.
  • for instance, in the case of a CLD entropy table, it is able to provide a table per selected LAV.
  • likewise, it is able to provide an LAV table for another data type (e.g., ICC, CPC, etc.) in the same manner as the CLD table.
  • the LAV for each data type has a different value because the index range per data type varies.
  • the present invention employs an LAV index to select an entropy table using LAV.
  • LAV value per data type is discriminated by LAV index.
  • the present invention is characterized in using an entropy table for LAV index separately. This means that LAV index itself is handled as a target of entropy coding.
  • the table n in (a) of FIG. 33 is used as an LAV index entropy table.
  • the LAV index entropy table 91e in Table 1 is applied to a case of four kinds of LAV indexes. And, it is apparent that transmission efficiency can be further enhanced if there are more LAV indexes.
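A sketch of LAV-based table selection is given below; the LAV set and the LAV-index codewords are placeholders invented for the example, not the values of any actual data type.

```python
# Sketch of LAV-based table selection; all concrete values are placeholders.
LAVS = [1, 3, 5, 9]                       # hypothetical per-data-type LAV set
LAV_INDEX_CODES = {0: "0", 1: "10", 2: "110", 3: "111"}   # toy LAV-index table

def choose_lav_index(indexes):
    """Pick the smallest LAV sub-table covering the largest absolute value."""
    largest = max(abs(i) for i in indexes)
    for lav_index, lav in enumerate(LAVS):
        if largest <= lav:
            return lav_index
    raise ValueError("value outside the LAV tables (handled by escape tables)")

if __name__ == "__main__":
    diffs = [-2, 0, 3, 1]
    k = choose_lav_index(diffs)           # -> 1 (LAV = 3 covers |value| <= 3)
    print(k, LAVS[k], LAV_INDEX_CODES[k]) # the LAV index itself is entropy coded
```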
  • a method of processing a signal includes the steps of obtaining index information, entropy-decoding the index information, and identifying a content corresponding to the entropy-decoded index information.
  • the index information is information for indexes characterized by their probability (frequency) of use.
  • the index information is entropy-decoded using the index dedicated entropy table 91e.
  • the content is classified according to a data type and is used for data decoding.
  • the content may become grouping information.
  • the grouping information is information for grouping of a plurality of data.
  • an index of the entropy table is a largest absolute value (LAV) among indexes included in the entropy table.
  • the entropy table is used in performing 2D entropy decoding on parameters.
  • An apparatus for processing a signal includes an information obtaining part obtaining index information, a decoding part entropy-decoding the index information, and an identifying part identifying a content corresponding to the entropy-decoded index information.
  • a method of processing a signal according to another embodiment of the present invention includes the steps of generating index information to identify a content, entropy-encoding the index information, and transferring the entropy-encoded index information.
  • An apparatus for processing a signal includes an information generating part generating index information to identify a content, an encoding part entropy-encoding the index information, and an information outputting part transferring the entropy-encoded index information.
  • a method of processing a signal includes the steps of obtaining a difference value and index information, entropy-decoding the index information, identifying an entropy table corresponding to the entropy-decoded index information, and entropy-decoding the difference value using the identified entropy table.
  • the reference value may include a pilot reference value or a difference reference value.
  • the index information is entropy-decoded using an index dedicated entropy table.
  • the entropy table is classified according to a type of each of a plurality of the data.
  • the data are parameters
  • the method further includes the step of reconstructing an audio signal using the parameters.
  • the method further includes the steps of obtaining the reference value and entropy-decoding the reference value using the entropy table dedicated to the reference value.
  • An apparatus for processing a signal includes an inputting part obtaining a difference value and index information, an index decoding part entropy-decoding the index information, a table identifying part identifying an entropy table corresponding to the entropy-decoded index information, and a data decoding part entropy-decoding the difference value using the identified entropy table.
  • the apparatus further includes a data obtaining part obtaining data using a reference value corresponding to a plurality of data and the decoded difference value.
  • a method of processing a signal includes the steps of generating a difference value using a reference value corresponding to a plurality of data and the data, entropy-encoding the difference value using an entropy table, and generating index information to identify the entropy table.
  • the method further includes the steps of entropy-encoding the index information and transferring the entropy-encoded index information and the difference value.
  • an apparatus for processing a signal includes a value generating part generating a difference value using a reference value corresponding to a plurality of data and the data, a value encoding part entropy-encoding the difference value using an entropy table, an information generating part generating index information to identify the entropy table, and an index encoding part entropy-encoding the index information. And, the apparatus further includes an information outputting part transferring the entropy-encoded index information and the difference value.
  • FIG. 34 is a hierarchical diagram of a data structure according to the present invention.
  • a data structure according to the present invention includes a header 100 and a plurality of frames 101 and 102.
  • Configuration information applied to the lower frames 101 and 102 in common is included in the header 100.
  • the configuration information includes grouping information utilized for the aforesaid grouping.
  • the grouping information includes first time grouping information 100a, first frequency grouping information 100b and channel grouping information 100c.
  • the configuration information within the header 100 is called main configuration information and an information portion recorded in the frame is called payload.
  • the first time grouping information 100a within the header 100 becomes bsFrameLength field that designates a number of timeslots within a frame.
  • the first frequency grouping information 100b becomes bsFreqRes field that designates a number of parameter bands within a frame.
  • the channel grouping information 100c means OttmodeLFE-bsOttBands field and bsTttDualmode-bsTttBandsLow field.
  • the OttmodeLFE-bsOttBands field is the information designating a number of parameter bands applied to LFE channel.
  • the bsTttDualmode-bsTttBandsLow field is the information designating a number of parameter bands of a low frequency band within a dual mode having both low and high frequency bands.
  • Yet, the bsTttDualmode-bsTttBandsLow field can be classified not as channel grouping information but as frequency grouping information.
  • Each of the frames 101 and 102 includes a frame information (Frame Info) 101a applied to all groups within a frame in common and a plurality of groups 101b and 101c.
  • the frame information 101a includes a time selection information 103a, a second time grouping information 103b and a second frequency grouping information 103c. Besides, the frame information 101a is called sub-configuration information applied to each frame.
  • the time selection information 103a within the frame information 101a includes a bsNumParamset field, a bsParamslot field and a bsDataMode field.
  • the bsNumParamset field is information indicating a number of parameter sets existing within an entire frame.
  • the bsParamslot field is information designating a position of a timeslot where a parameter set exists.
  • the bsDataMode field is information designating an encoding and decoding processing method of each parameter set.
  • in a default mode, a decoding part replaces the corresponding parameter set by a default value.
  • in a previous mode, a decoding part maintains the decoded value of a previous parameter set.
  • in an interpolation mode, a decoding part calculates the corresponding parameter set by interpolation between parameter sets.
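The per-parameter-set handling can be sketched as follows; the mode names follow the description above, while the numeric mode values, the interpolation detail and the data layout are illustrative assumptions rather than the normative bsDataMode semantics.

```python
# Sketch of parameter-set handling driven by a per-set data mode; the numeric
# mode values and the linear interpolation are assumptions for the example.
DEFAULT, PREVIOUS, INTERPOLATE, READ = range(4)

def resolve_parameter_sets(modes, read_sets, default_set, previous_set):
    out, last = [], previous_set
    for i, mode in enumerate(modes):
        if mode == DEFAULT:
            cur = list(default_set)               # replace by default values
        elif mode == PREVIOUS:
            cur = list(last)                      # keep previous decoded values
        elif mode == INTERPOLATE:
            nxt = read_sets.get(i + 1, last)      # toy linear interpolation
            cur = [(a + b) / 2 for a, b in zip(last, nxt)]
        else:                                     # READ: decoded from bitstream
            cur = read_sets[i]
        out.append(cur)
        last = cur
    return out

if __name__ == "__main__":
    modes = [READ, PREVIOUS, INTERPOLATE, READ]
    read = {0: [1, 2, 3], 3: [3, 2, 1]}
    print(resolve_parameter_sets(modes, read, [0, 0, 0], [0, 0, 0]))
```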
  • a method of processing a signal using the bsDataMode field includes the steps of obtaining mode information, obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value according to data attribute indicated by the mode information, and obtaining the data using the pilot reference value and the pilot difference value.
  • the data are parameters
  • the method further includes the step of reconstructing an audio signal using the parameters.
  • the pilot difference value is obtained.
  • the mode information further includes at least one of a default mode, a previous mode and an interpolation mode.
  • the signal processing method uses a first parameter (e.g., dataset) to identify a number of the read modes and a second parameter (e.g., setidx) to obtain the pilot difference value based on the first parameter.
  • An apparatus for processing a signal using the bsDataMode field includes an information obtaining part obtaining mode information, a value obtaining part obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value according to data attribute indicated by the mode information, and a data obtaining part obtaining the data using the pilot reference value and the pilot difference value.
  • the information obtaining part, the value obtaining part and the data obtaining part are provided within the aforesaid data decoding part 91 or 92.
  • a method of processing a signal using the bsDataMode field includes the steps of generating mode information indicating attribute of data, generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, and transferring the generated difference value. And, the method further includes the step of encoding the generated difference value.
  • An apparatus for processing a signal using the bsDataMode field includes an information generating part generating mode information indicating attribute of data, a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, and an outputting part transferring the generated difference value. And, the value generating part is provided within the aforesaid data encoding part 31 or 32.
  • the second time grouping information 103b within the frame information 101a includes bsDatapair field.
  • the second frequency grouping information within the frame information 101a includes bsFreqResStride field.
  • the bsFreqResStride field is the information to second-group the parameter bands first-grouped by the bsFreqRes field as the first frequency grouping information 100b. Namely, a data band is generated by binding parameter bands amounting to the stride designated by the bsFreqResStride field. So, parameter values are given per data band.
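A sketch of this stride-based second grouping is shown below; the band counts and the stride value are examples only.

```python
# Sketch of the stride-based second grouping: parameter bands defined by the
# first grouping are bound into data bands of `stride` consecutive bands each,
# and one parameter value is then carried per data band.

def group_into_data_bands(num_param_bands, stride):
    """Return, per data band, the list of parameter-band indexes it covers."""
    return [list(range(start, min(start + stride, num_param_bands)))
            for start in range(0, num_param_bands, stride)]

if __name__ == "__main__":
    # e.g. 10 parameter bands with a designated stride of 4
    print(group_into_data_bands(10, 4))
    # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]] -> 3 data bands, 3 parameter values
```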
  • Each of the groups 101b and 101c includes data coding type information 104a, entropy coding type information 104b, codeword 104c and side data 104d.
  • the data coding type information 104a within each of the groups 101b and 101c includes a bsPCMCoding field, a bsPilotCoding field, a bsDiffType field and a bsDiffTimeDirection field.
  • the bsPCMCoding field is information to identify whether data coding of the corresponding group is PCM scheme or DIFF scheme.
  • the bsDiffType field is information to designate a coding direction in case that the DIFF scheme is applied. And, the bsDiffType field designates either DF (DIFF-FREQ) or DT (DIFF-TIME).
  • the bsDiffTimeDirection field is information to designate whether a coding direction on a time axis is FORWARD or BACKWARD in case that the bsDiffType field is DT.
  • the entropy coding type information 104b within each of the groups 101b and 101c includes bsCodingScheme field and bsPairing field.
  • the bsCodingScheme field is the information to designate whether entropy coding is 1D or 2D.
  • the bsPairing field is the information whether a direction for extracting two indexes is a frequency direction (FP: Frequency Pairing) or a time direction (TP: Time Pairing) in case that the bsCodingScheme field designates 2D.
  • the codeword 104c within each of the groups 101b and 101c includes bsCodeW field.
  • the bsCodeW field designates a codeword on a table applied for entropy coding. So, most of the aforesaid data become targets of entropy coding. In this case, they are transferred by the bsCodeW field. For instance, a pilot reference value and LAV Index value of PBC coding, which become targets of entropy coding, are transferred by the bsCodeW field.
  • the side data 104d within each of the groups 101b and 101c includes bsLsb field and bsSign field.
  • the side data 104d includes other data, which are not entropy-coded and thus not transferred by the bsCodeW field, as well as the bsLsb field and the bsSign field.
  • the bsLsb field is a field applied to the aforesaid partial parameter and is the side information transferred only if a data type is CPC and in case of non-coarse quantization.
  • the bsSign field is the information to designate a sign of an index extracted in case of applying 1D entropy coding.
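A sketch of applying the sign after 1D entropy decoding follows; the convention that a set sign bit means negative, and that no sign bit is sent for a zero magnitude, are assumptions made for the example.

```python
# Sketch of applying bsSign after 1D entropy decoding: the codeword yields an
# unsigned magnitude and the sign bit restores the signed index.

def apply_signs(magnitudes, sign_bits):
    signed, pos = [], 0
    for m in magnitudes:
        if m == 0:
            signed.append(0)                  # assumed: no sign sent for zero
        else:
            signed.append(-m if sign_bits[pos] else m)
            pos += 1
    return signed

if __name__ == "__main__":
    mags = [2, 0, 1, 3]        # magnitudes from 1D entropy decoding
    signs = [1, 0, 1]          # sign bits: 1 -> negative (assumed convention)
    print(apply_signs(mags, signs))   # [-2, 0, 1, -3]
```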
  • a signal processing data structure includes a payload part having at least one of data coding information including pilot coding information at least per a frame and entropy coding information and a header part having main configuration information for the payload part.
  • the main configuration information includes a first time information part having time information for entire frames and a first frequency information part having frequency information for the entire frames.
  • the main configuration information further includes a first internal grouping information part having information for internal-grouping a random group including a plurality of data per frame.
  • the frame includes a first data part having at least one of the data coding information and the entropy coding information and a frame information part having sub-configuration information for the first data part.
  • the sub-configuration information includes a second time information part having time information for entire groups. And, the sub-configuration information further includes an external grouping information part having information for external grouping for a random group including a plurality of data per the group. Moreover, the sub-configuration information further includes a second internal grouping information part having information for internal-grouping the random group including a plurality of the data.
  • the group includes the data coding information having information for a data coding scheme, the entropy coding information having information for an entropy coding scheme, a reference value corresponding to a plurality of data, and a second data part having a difference value generated using the reference value and the data.
  • FIG. 35 is a block diagram of an apparatus for audio compression and recovery according to one embodiment of the present invention.
  • an apparatus for audio compression and recovery according to one embodiment of the present invention includes an audio compression part 105-400 and an audio recovery part 500-800.
  • the audio compression part 105-400 includes a downmixing part 105, a core coding part 200, a spatial information coding part 300 and a multiplexing part 400.
  • the downmixing part 105 includes a channel downmixing part 110 and a spatial information generating part 120.
  • an input of the channel downmixing part 110 is an audio signal of N multi-channels (X1, X2, ..., XN).
  • the channel downmixing part 110 outputs a signal downmixed into channels of which number is smaller than that of channels of the inputs.
  • An output of the downmixing part 105 is downmixed into one or two channels, a specific number of channels according to a separate downmixing command, or a specific number of channels preset according to system implementation.
  • the core coding part 200 performs core coding on the output of the channel downmixing part 110, i.e., the downmixed audio signal.
  • the core coding is carried out in a manner of compressing an input using various transform schemes such as a discrete transform scheme and the like.
  • the spatial information generating part 120 extracts spatial information from the multi-channel audio signal.
  • the spatial information generating part 120 then transfers the extracted spatial information to the spatial information coding part 300.
  • the spatial information coding part 300 performs data coding and entropy coding on the inputted spatial information.
  • the spatial information coding part 300 performs at least one of PCM, PBC and DIFF. In some cases, the spatial information coding part
  • a decoding scheme by a spatial information decoding part 700 can be decided according to which data coding scheme is used by the spatial information coding part 300. And, the spatial information coding part 300 will be explained in detail with reference to FIG. 36 later.
  • An output of the core coding part 200 and an output of the spatial information coding part 300 are inputted to the multiplexing part 400.
  • the multiplexing part 400 multiplexes the two inputs into a bitstream and then transfers the bitstream to the audio recovery part 500 to 800.
  • the audio recovery part 500 to 800 includes a demultiplexing part 500, a core decoding part 600, a spatial information decoding part 700 and a multi-channel generating part 800.
  • the demultiplexing part 500 demultiplexes the received bitstream into an audio part and a spatial information part.
  • the audio part is a compressed audio signal
  • the spatial information part is a compressed spatial information.
  • the core decoding part 600 receives the compressed audio signal from the demultiplexing part 500.
  • the core decoding part 600 generates a downmixed audio signal by decoding the compressed audio signal.
  • the spatial information decoding part 700 receives the compressed spatial information from the demultiplexing part 500.
  • the spatial information decoding part 700 generates the spatial information by decoding the compressed spatial information.
  • identification information indicating various grouping information and coding information included in the data structure shown in FIG. 34 is extracted from the received bitstream.
  • a specific decoding scheme is selected from at least one or more decoding schemes according to the identification information.
  • the spatial information is generated by decoding the spatial information according to the selected decoding scheme.
  • the decoding scheme by the spatial information decoding part 700 can be decided according to what data coding scheme is used by the spatial information coding part 300. And, the spatial information decoding part 700 will be explained in detail with reference to FIG. 37 later.
  • the multi-channel generating part 800 receives an output of the core decoding part 600 and an output of the spatial information decoding part 700.
  • the multi-channel generating part 800 generates an audio signal of N multi-channels (Y1, Y2, ..., YN) from the two received outputs.
  • the audio compression part 105-400 provides an identifier indicating what data coding scheme is used by the spatial information coding part 300 to the audio recovery part 500-800.
  • the audio recovery part 500-800 includes a means for parsing the identification information.
  • the spatial information decoding part 700 decides a decoding scheme with reference to the identification information provided by the audio compression part 105-400.
  • the means for parsing the identification information indicating the coding scheme is provided to the spatial information decoding part 700.
  • FIG. 36 is a detailed block diagram of a spatial information encoding part according to one embodiment of the present invention, in which spatial information is named a spatial parameter.
  • a coding part includes a PCM coding part 310, a DIFF (differential coding) part 320 and a Huffman coding part 330.
  • the Huffman coding part 330 corresponds to one embodiment of performing the aforesaid entropy coding.
  • the PCM coding part 310 includes a grouped PCM coding part 311 and a PBC part 312.
  • the grouped PCM coding part 311 PCM-codes spatial parameters.
  • the grouped PCM coding part 311 is able to PCM-code spatial parameters by a group unit.
  • the PBC part 312 performs the aforesaid PBC on spatial parameters.
  • the DIFF part 320 performs the aforesaid DIFF on spatial parameters.
  • either the PBC part 312 or the DIFF part 320 selectively operates for coding of spatial parameters. And, the control means therefor is not separately shown in the drawing.
  • PBC is once performed on spatial parameters.
  • the PBC can be further performed N-times (N>1) on a result of the first PBC.
  • in this case, the PBC is carried out at least once on the pilot value or the difference values resulting from the first PBC.
  • yet, it is preferable that, from the second PBC on, the PBC is carried out on the difference values only, excluding the pilot value.
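An iterated PBC pass can be sketched as follows; the rounded-mean pilot choice and the two-pass depth are illustrative assumptions, not the encoder's prescribed behavior.

```python
# Sketch of applying PBC more than once: after the first pass, only the
# difference values are PBC-coded again.

def pbc_once(values):
    pilot = round(sum(values) / len(values))   # illustrative pilot choice
    return pilot, [v - pilot for v in values]

def pbc_iterated(values, n_passes):
    pilots, current = [], list(values)
    for _ in range(n_passes):
        pilot, current = pbc_once(current)     # later passes see only differences
        pilots.append(pilot)
    return pilots, current                     # all pilots + final differences

if __name__ == "__main__":
    pilots, residual = pbc_iterated([10, 12, 9, 15, 11], n_passes=2)
    # decoding reverses the passes: add the pilots back onto the residual
    restored = [r + sum(pilots) for r in residual]
    print(pilots, residual, restored)
```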
  • the DIFF part 320 includes a DIFF_FREQ coding part 321 performing DIFF_FREQ on a spatial parameter and DIFF_TIME coding parts 322 and 323 performing DIFF_TIME on spatial parameters.
  • in the DIFF part 320, one selected from among the DIFF_FREQ coding part 321 and the DIFF_TIME coding parts 322 and 323 carries out the data coding processing for an inputted spatial parameter.
  • the DIFF_TIME coding parts include a DIFF_TIME_FORWARD part 322 performing DIFF_TIME_FORWARD on a spatial parameter and a DIFF_TIME_BACKWARD part 323 performing DIFF_TIME_BACKWARD on a spatial parameter.
  • a selected one of the DIFF_TIME_FORWARD part 322 and the DIFF_TIME_BACKWARD part 323 carries out a data coding process on an inputted spatial parameter.
  • the DIFF coding performed by each of the internal elements 321, 322 and 323 of the DIFF part 320 has been explained in detail in the foregoing description, of which explanation will be omitted in the following description.
  • the Huffman coding part 330 performs Huffman coding on at least one of an output of the PBC part 312 and an output of the DIFF part 320.
  • the Huffman coding part 330 includes a 1-dimension Huffman coding part (hereinafter abbreviated HUFF_1D part) 331 processing data to be coded and transmitted one by one and 2-dimension Huffman coding parts (hereinafter abbreviated HUFF_2D parts) 332 and 333 processing data to be coded and transmitted by a unit of two combined data.
  • a selected one of the HUFF_1D part 331 and the HUFF_2D parts 332 and 333 in the Huffman coding part 330 performs a Huffman coding processing on an input.
  • the HUFF_2D parts 332 and 333 are classified into a frequency pair 2-dimension Huffman coding part (hereinafter abbreviated HUFF_2D_FREQ_PAIR part) 332 performing Huffman coding on a data pair bound together based on a frequency and a time pair 2-dimension Huffman coding part (hereinafter abbreviated HUFF_2D_TIME_PAIR part) 333 performing Huffman coding on a data pair bound together based on a time.
  • a selected one of the HUFF_2D_FREQ_PAIR part 332 and the HUFF_2D_TIME_PAIR part 333 performs a Huffman coding processing on an input.
  • the Huffman coding part 330 will be explained in detail in the following description.
  • an output of the Huffman coding part 330 is multiplexed with an output of the grouped PCM coding part 311 to be transferred.
  • In a spatial information coding part according to the present invention, various kinds of identification information generated from data coding and entropy coding are inserted into a transport bitstream. And, the transport bitstream is transferred to a spatial information decoding part shown in FIG. 37.
  • FIG. 37 is a detailed block diagram of a spatial information decoding part according to one embodiment of the present invention.
  • a spatial information decoding part receives a transport bitstream including spatial information and then generates the spatial information by decoding the received transport bitstream.
  • a spatial information decoding part 700 includes an identifier parsing part (flags parsing part) 710, a PCM decoding part 720, a Huffman decoding part 730 and a differential decoding part 740.
  • the identifier parsing part 710 of the spatial information decoding part extracts various identifiers from a transport bitstream and then parses the extracted identifiers. This means that various kinds of the information mentioned in the foregoing description of FIG. 34 are extracted.
  • the spatial information decoding part is able to know what kind of coding scheme is used for a spatial parameter using an output of the identifier parsing part 710 and then decides a decoding scheme corresponding to the recognized coding scheme. Besides, the execution of the identifier parsing part 710 can be performed by the aforesaid demultiplexing part 500 as well.
  • the PCM decoding part 720 includes a grouped PCM decoding part 721 and a pilot based decoding part 722.
  • the grouped PCM decoding part 721 generates spatial parameters by performing grouped PCM decoding on a transport bitstream.
  • in this case, the grouped PCM decoding part 721 generates the spatial parameters of a group unit by decoding the transport bitstream.
  • the pilot based decoding part 722 generates spatial parameter values by performing pilot based decoding on an output of the Huffman decoding part 730. This corresponds to a case that a pilot value is included in an output of the Huffman decoding part 730.
  • the pilot based decoding part 722 is able to include a pilot extracting part (not shown in the drawing) to directly extract a pilot value from a transport bitstream. So, spatial parameter values are generated using the pilot value extracted by the pilot extracting part and difference values that are the outputs of the Huffman decoding part 730.
  • the Huffman decoding part 730 performs Huffman decoding on a transport bitstream.
  • the Huffman decoding part 730 includes a 1-dimension Huffman decoding part (hereinafter abbreviated HUFF_1D decoding part) 731 outputting a data value one by one by performing 1-dimension Huffman decoding on a transport bitstream and 2-dimension Huffman decoding parts (hereinafter abbreviated HUFF_2D decoding parts) 732 and 733 outputting a pair of data values each by performing 2-dimension Huffman decoding on a transport bitstream.
  • the identifier parsing part 710 extracts an identifier (e.g., bsCodingScheme) indicating whether a Huffman decoding scheme indicates HUFF_1D or HUFF_2D from a transport bitstream and then recognizes the used Huffman coding scheme by parsing the extracted identifier. So, either HUFF_1D or HUFF_2D decoding corresponding to each case is decided as a Huffman decoding scheme.
  • the HUFF_1D decoding part 731 performs HUFF_1D decoding and each of the HUFF_2D decoding parts 732 and 733 performs HUFF_2D decoding.
  • the identifier parsing part 710 further extracts an identifier (e.g., bsPairing) indicating whether the HUFF_2D scheme is HUFF_2D_FREQ_PAIR or HUFF_2D_TIME_PAIR and then parses the extracted identifier. So, the identifier parsing part 710 is able to recognize whether the two data configuring one pair are bound together based on frequency or time.
  • so, either frequency pair 2-dimension Huffman decoding (HUFF_2D_FREQ_PAIR decoding) or time pair 2-dimension Huffman decoding (HUFF_2D_TIME_PAIR decoding) is decided as the Huffman decoding scheme.
  • the HUFF_2D_FREQ_PAIR part 732 performs HUFF_2D_FREQ_PAIR decoding and the HUFF_2D_TIME_PAIR part 733 performs HUFF_2D_TIME_PAIR decoding.
  • An output of the Huffman decoding part 730 is transferred to the pilot based decoding part 722 or the differential decoding part 740 based on an output of the identifier parsing part 710.
  • the differential decoding part 740 generates spatial parameter values by performing differential decoding on an output of the Huffman decoding part 730.
  • the identifier parsing part 710 extracts an identifier (e.g., bsDiffType) indicating whether a DIFF scheme is DIFF_FREQ or DIFF_TIME from a transport bitstream and then recognizes the used DIFF scheme by parsing the extracted identifier. So, one of DIFF_FREQ decoding and DIFF_TIME decoding corresponding to the respective cases is decided as the differential decoding scheme.
  • the DIFF_FREQ decoding part 741 performs DIFF_FREQ decoding and each of the DIFF_TIME decoding parts 742 and 743 performs DIFF_TIME decoding.
  • the identifier parsing part 710 further extracts an identifier (e.g., bsDiffTimeDirection) indicating whether the DIFF_TIME is DIFF_TIME_FORWARD or DIFF_TIME_BACKWARD from a transport bitstream and then parses the extracted identifier.
  • the identifier parsing part 710 reads a first identifier (e.g., bsPCMCoding) indicating which one of PCM and DIFF is used in coding a spatial parameter.
  • the identifier parsing part 710 further reads a second identifier (e.g., bsPilotCoding) indicating which one of PCM and PBC is used for coding of a spatial parameter.
  • if the second identifier indicates PBC, the spatial information decoding part performs decoding corresponding to the PBC.
  • if the second identifier indicates PCM, the spatial information decoding part performs decoding corresponding to the PCM.
  • if the first identifier indicates DIFF, the spatial information decoding part performs a decoding processing that corresponds to the DIFF.
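The identifier-driven dispatch described above can be sketched as follows; the flag names match the data structure of FIG. 34, but treating the flags as simple Python values and the returned scheme labels are simplifications for illustration.

```python
# Sketch of the decoding-scheme dispatch described above; reading flag value 1
# as selecting the first-named option is an assumption for the example.

def pick_decoding_scheme(flags):
    if flags["bsPCMCoding"] == 1:            # PCM branch: grouped PCM or PBC
        return "PBC" if flags.get("bsPilotCoding") == 1 else "grouped PCM"
    if flags["bsDiffType"] == "DF":          # DIFF branch
        return "DIFF_FREQ"
    direction = flags.get("bsDiffTimeDirection", "FORWARD")
    return f"DIFF_TIME_{direction}"

def pick_entropy_scheme(flags):
    if flags["bsCodingScheme"] == "1D":
        return "HUFF_1D"
    return ("HUFF_2D_FREQ_PAIR" if flags["bsPairing"] == "FP"
            else "HUFF_2D_TIME_PAIR")

if __name__ == "__main__":
    flags = {"bsPCMCoding": 1, "bsPilotCoding": 1,
             "bsCodingScheme": "2D", "bsPairing": "TP"}
    print(pick_decoding_scheme(flags), pick_entropy_scheme(flags))
```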

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for processing a signal compressed in accordance with a specific alternative coding scheme are disclosed. In detail, a coding method for signal compression and signal restoration using a specific alternative coding scheme, and an apparatus therefor are disclosed. Data coding and entropy coding according to the present invention are executed under the condition in which they have a correlation with each other. Grouping is executed for an enhancement in coding efficiency. The method for signal processing includes obtaining data coding identification information from a signal, and data-decoding data in accordance with a data coding scheme indicated by the data coding identification information. The data coding scheme includes at least a pilot coding scheme. The pilot coding scheme includes decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. The pilot difference value is generated using the data and the pilot reference value.

Description

METHOD AND APPARATUS FOR SIGNAL PROCESSING
Technical Field
[1] The present invention relates to a method and apparatus for signal processing. More particularly, the present invention relates to a coding method for signal compression and signal restoration in an alternative coding scheme, an apparatus therefor, a method transmitting the resultant digital broadcast signal, a data structure of the digital broadcast signal, and a broadcast receiver for the digital broadcast signal. Background Art
[2] The present invention relates to digital broadcasting. Recently, research for appliances capable of transmitting audio broadcasts, video broadcasts, data broadcasts, etc. in accordance with a digital scheme other than an analog scheme, and appliances capable of transmitting and displaying the transmitted broadcasts have been actively conducted. Currently, several appliances are commercially available.
[3] The broadcasting scheme for transmitting audio broadcasts, video broadcasts, data broadcasts, etc. in accordance with a digital scheme is generically called digital broadcasting.
[4] Examples of digital broadcasting are digital audio broadcasting and digital multimedia broadcasting. Such digital broadcasting has various advantages. For example, the digital broadcasting can inexpensively provide diverse multimedia information services, and can be used for mobile broadcasting in accordance with an appropriate frequency band allocation. Also, it is possible to create new earning sources, and to provide new vital power to the receiver markets, and thus, to obtain vast industrial effects.
[5] In conventional digital audio broadcasting, it is possible to transmit audio services, for example, seven audio services, in a frequency band of about 1.5 MHz. All the seven audio services are transmitted in a state of being compressed in accordance with a "masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM)" audio coding scheme.
[6] In other conventional digital broadcasting, for example, conventional digital multimedia broadcasting, digital multimedia broadcasting (DMB) and audio services, for example, one DMB service and three audio services, may be transmitted in a frequency band of about 1.5 MHz. In this case, of course, the three audio services are transmitted in a state of being compressed in accordance with the MUSICAM audio coding scheme.
[7] However, conventional methods for transmission of digital broadcasts have the following problems.
[8] First, there is no conventional coding scheme having a compression rate higher than those of recently-developed or practically-used audio compression techniques. For this reason, there is a problem in that the number of audio services transmittable in a limited frequency band is relatively small.
[9] Furthermore, when a broadcast stream compressed using a plurality of different codec schemes is transmitted, conventional cases have a problem in that there is no broadcast receiver capable of decoding the transmitted broadcast stream. This is because the existing broadcast receivers can decode only a broadcast stream compressed in accordance with a single, particular audio codec scheme.
[10] Second, there is a problem in that conventional digital broadcast receivers cannot output an audio signal coded using a plurality of audio coding schemes.
[11] To this date, many techniques associated with signal compression and signal restoration have been proposed. Generally, objects, to which such techniques are applicable, are various data including audio and video data. In particular, signal compression or restoration techniques have been advanced to achieve an enhancement in picture quality or sound quality while achieving an increase in compression rate. Also, many efforts have been made to achieve an enhancement in transmission efficiency, in order to enable the techniques to be suited to various communication environments.
[12] However, it is public opinion that there is still a margin for effective enhancement of transmission efficiency. Accordingly, there is a demand for concrete research for optimization of signal transmission efficiency, even in complex communication environments, through development of a new signal processing scheme. Disclosure of Invention Technical Problem
[13] Accordingly, the present invention is directed to a signal processing method and apparatus that substantially obviate one or more of the problems due to limitations and disadvantages of the related art.
[14] An object of the present invention devised to solve the above-mentioned problems is to provide a method for transmitting a digital broadcast signal and a data structure which enable transmission of an increased number of broadcast signals in a limited frequency band, and a broadcast receiver therefor.
[15] Another object of the present invention is to provide a digital broadcast signal transmitting method and a data structure which enable decoding of services coded in accordance with at least one alternative coding scheme and outputting of the decoded services, and a broadcast receiver therefor.
[16] Another object of the present invention devised to solve the above-mentioned problems is to provide a signal processing method and apparatus capable of achieving an optimal signal transmission efficiency.
[17] Another object of the present invention is to provide an efficient data coding method and an apparatus therefor.
[18] Another object of the present invention is to provide encoding and decoding methods capable of optimizing the transmission efficiency of control data used for restoration of audio, and an apparatus therefor.
[19] Another object of the present invention is to provide a medium including data encoded in accordance with the above-described encoding method.
[20] Another object of the present invention is to provide a data structure for efficiently transmitting the encoded data.
[21] Still another object of the present invention is to provide a system including the decoding apparatus. Technical Solution
[22] To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, a method for signal processing includes obtaining data coding identification information from a signal and data- decoding data in accordance with a data coding scheme indicated by the data coding identification information. And, the data coding scheme includes at least a pilot coding scheme; the pilot coding scheme comprises decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
[23] Also, the data coding scheme further includes a differential coding scheme, the differential coding scheme is one of a frequency differential coding scheme and a time differential coding scheme and the time differential coding scheme is one of a forward time differential coding scheme and a backward time differential coding scheme.
[24] The method further includes obtaining entropy coding identification information and entropy-decoding the data using an entropy coding scheme indicated by the entropy coding identification information. Also, the data decoding step comprises executing the data-decoding for the entropy-decoded data using the data coding scheme. The entropy decoding scheme is one of a one-dimensional coding scheme or a multi-dimensional coding scheme, and the multi-dimensional coding scheme is one of a frequency pair coding scheme and a time pair coding scheme.
[25] The method further includes decoding an audio signal, using the data as a parameter.
[26] In order to achieve these objects, an apparatus for signal processing includes an identification information obtaining part for obtaining data coding identification information from a signal and a decoding part for data-decoding data in accordance with a data coding scheme indicated by the data coding identification information, wherein the data coding scheme comprises at least a pilot coding scheme; the pilot coding scheme comprises decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
[27] In order to achieve these objects, a method for signal processing includes data- encoding data in accordance with a data coding scheme and generating and transferring data coding identification information indicating the data coding scheme. The data coding scheme includes at least a pilot coding scheme, the pilot coding scheme comprises encoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value and the pilot difference value is generated using the data and the pilot reference value.
[28] In order to achieve these objects, an apparatus for signal processing includes an encoding part for data-encoding data in accordance with a data coding scheme and an outputting part for generating and transferring data coding identification information indicating the data coding scheme. The data coding scheme includes at least a pilot coding scheme; the pilot coding scheme comprises encoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
Advantageous Effects
[29] The present invention provides an effect of an enhancement in data transmission efficiency in that it is possible to transmit an increased number of audio services in a limited frequency band. In addition, the present invention provides an effect capable of securing a desired compatibility in that it is possible to decode an audio service coded in accordance with one or more coding schemes, and to receive and output audio services coded in a conventional masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM) scheme.
[30] Accordingly, the present invention enables efficient data coding and entropy coding, thereby enabling data compression and recovery with high transmission efficiency. Brief Description of the Drawings
[31] FIG. 1 is a diagram schematically illustrating fast information channel (FIC) and main service channel (MSC) structures for digital broadcasting according to the present invention;
[32] FIG. 2 is a diagram illustrating a structure of a fast information block (FIB) in digital broadcasting; [33] FIG. 3 is a diagram illustrating a structure of a fast information group (FIG) in digital broadcasting; [34] FIG. 4 is a diagram illustrating a service organization in the case in which the type of the FIG is 0, and an "Extension" field is 2; [35] FIG. 5 is a table illustrating examples of the value of an added "audio service component type (ASCTy) "field; [36] FIG. 6 is a table illustrating another examples of the value of the added
"ASCTy"field; [37] FIG. 7 is a table illustrating another examples of the value of the added
"ASCTy"field; [38] FIG. 8 is a table illustrating a procedure in which service components are decoded in the case that an addition of an "ASCTy" field as shown in FIG. 6 is made; [39] FIG. 9 is a flowchart illustrating a digital broadcast transmitting method according to the present invention; [40] FIG. 10 is a flowchart illustrating a digital broadcast receiving method according to the present invention; [41] FIG. 11 is a block diagram illustrating a configuration of the broadcast receiver adapted to receive a digital broadcast according to the present invention; [42] FIG. 12 and FIG. 13 are block diagrams of a system according to the present invention; [43] FIG. 14 and FIG. 15 are diagrams to explain PBC coding according to the present invention; [44] FIG. 16 is a diagram to explain types of DIFF coding according to the present invention;
[45] FIGs. 17 to 19 are diagrams of examples to which DIFF coding scheme is applied;
[46] FIG. 20 is a block diagram to explain a relation in selecting one of at least three coding schemes according to the present invention; [47] FIG. 21 is a block diagram to explain a relation in selecting one of at least three coding schemes according to a related art; [48] FIG. 22 and FIG. 23 are flowcharts for the data coding selecting scheme according to the present invention, respectively; [49] FIG. 24 is a diagram to explaining internal grouping according to the present invention; [50] FIG. 25 is a diagram to explaining external grouping according to the present invention; [51] FIG. 26 is a diagram to explain multiple grouping according to the present invention; [52] FIG. 27 and FIG. 28 are diagrams to explain mixed grouping according to another embodiments of the present invention, respectively;
[53] FIG. 29 is an exemplary diagram of 1D and 2D entropy tables according to the present invention;
[54] FIG. 30 is an exemplary diagram of two methods for 2D entropy coding according to the present invention;
[55] FIG. 31 is a diagram of entropy coding scheme for PBC coding result according to the present invention;
[56] FIG. 32 is a diagram of entropy coding scheme for DIFF coding result according to the present invention;
[57] FIG. 33 is a diagram to explain a method of selecting an entropy table according to the present invention;
[58] FIG. 34 is a hierarchical diagram of a data structure according to the present invention;
[59] FIG. 35 is a block diagram of an apparatus for audio compression and recovery according to one embodiment of the present invention;
[60] FIG. 36 is a detailed block diagram of a spatial information encoding part according to one embodiment of the present invention; and
[61] FIG. 37 is a detailed block diagram of a spatial information decoding part according to one embodiment of the present invention.
Best Mode for Carrying Out the Invention
[62] Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The preferred embodiments described in the specification and shown in the drawings are illustrative only and are not intended to represent all aspects of the invention; it should be understood that various equivalents and modifications can be made without departing from the spirit of the invention.
[63] It should also be noted that most terms disclosed in the present invention correspond to general terms well known in the art, but some terms have been selected by the applicant as necessary and will hereinafter be disclosed in the following description of the present invention. Therefore, it is preferable that the terms defined by the applicant be understood on the basis of their meanings in the present invention.
[64] The term "coding" used herein should be construed as including both an encoding procedure and a decoding procedure. Of course, it can be appreciated that a specific coding procedure is applicable only to one of the encoding and decoding procedures. In this case, a description thereof will be separately given. The term "coding"will also be referred to as "codec". [65]
[66] [Summary of Invention]
[67] In order to accomplish the above-described aspects, the present invention provides a method for transmitting a digital broadcast signal in a digital broadcasting control method, including inserting, into a broadcast stream, at least one service component compressed in accordance with an alternative coding scheme; inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme; and transmitting, to a broadcast receiver, the broadcast stream including the at least one service component and the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
[68] Preferably, the alternative coding scheme comprises an alternative audio coding scheme. Preferably, the alternative audio coding scheme comprises at least one of an advanced audio coding (AAC) scheme and a bit sliced arithmetic coding (BSAC) scheme.
[69] Preferably, the alternative extension audio coding scheme additionally comprises at least one of a spectral band replication (SBR) scheme, a parametric stereo (PS) scheme, and a moving picture experts group (MPEG) surround scheme.
[70] The alternative coding scheme may use the alternative audio coding scheme alone, or may use a combination of the alternative audio coding scheme with at least one alternative extension audio coding scheme.
[71] Preferably, the alternative audio coding scheme comprises an audio coding scheme having a higher compression rate than a masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM) scheme.
[72] The step of inserting, into a broadcast stream, at least one service component compressed in accordance with an alternative coding scheme, preferably comprises including the at least one service component in a main service channel (MSC) of the broadcast stream.
[73] The step of inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme preferably comprises including the information in a fast information channel (FIC) of the broadcast stream.
[74] The step of inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme preferably comprises including, in the FIC of the broadcast stream, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, and including, in an audio superframe of the MSC, information indicating that the at least one service component has been compressed in accordance with a specific alternative extension audio coding scheme.
[75] The audio superframe may include a header and one or more frames. The audio superframe may also include a syncword for detection, and a cyclic redundancy check (CRC) for the header. The audio superframe may further include identifiers respectively informing of whether or not associated alternative extension audio coding schemes, for example, SBR, PS, and MPEG surround schemes were used.
[76] The identifiers respectively informing of whether or not the associated alternative extension audio coding schemes were used may be selectively included. For example, the PS scheme may be used only when the number of AAC-coded channels is mono. In this case, accordingly, the identifier associated with the PS scheme, namely, 'ps_flag' may be included in the header only when the number of AAC-coded channels corresponds to mono. The PS scheme may also be used only when the SBR scheme is used. In this case, accordingly, the identifier 'ps_flag' associated with the PS scheme may be included in the header only when the SBR scheme is used.
[77] The PS and MPEG surround schemes may be simultaneously used. Also, there may be the case wherein one of the PS and MPEG surround schemes is used, and the other is not used. Accordingly, whether or not the MPEG surround scheme was used may be signaled only when the PS scheme was not used. Information about whether or not the MPEG surround scheme was used need not be expressed in the form of a simple ON/OFF expression, but may be expressed in the form of bits indicating one of diverse modes associated with the MPEG surround scheme.
[78] The detailed mode of the MPEG surround scheme can be identified through config. information present in an MPEG surround payload.
[79] The MPEG surround mode may be partially determined, based on the number of
AAC channels and whether or not the PS scheme was used.
[80] For example, when the number of AAC channels corresponds to stereo, it may be determined that 515 or the like cannot be used for the MPEG surround mode.
[81] Also, in the case in which the number of AAC channels corresponds to mono, and simultaneous transmission of PS and MPEG surround is possible, PS may be ignored upon decoding of MPEG surround. In this case, the MPEG surround mode must be 515 or the like. On the other hand, when decoding of MPEG surround follows decoding of PS, the MPEG surround mode may be 525 or the like.
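For illustration only, the conditional signalling described in the preceding paragraphs can be sketched in C as follows. The structure and the names aac_stereo, sbr_flag, ps_flag and mps_mode are hypothetical stand-ins for the identifiers discussed above, not fields of any standardized superframe syntax, and the mode numbers 515 and 525 are used only in the "515 or the like" sense of the text.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of the superframe-header identifiers discussed above.
 * Field names are illustrative, not a standardized bitstream syntax. */
typedef struct {
    bool aac_stereo;   /* true: AAC core is stereo, false: mono           */
    bool sbr_flag;     /* identifier indicating whether SBR was used      */
    bool ps_flag;      /* identifier indicating whether PS was used       */
    int  mps_mode;     /* 0: MPEG surround not used; otherwise a mode
                          such as 515 or 525                              */
} SuperframeHeader;

/* One reading of the inclusion rule above: 'ps_flag' is carried in the
 * header only when the AAC core is mono and SBR is in use. */
static bool ps_flag_is_present(const SuperframeHeader *h)
{
    return !h->aac_stereo && h->sbr_flag;
}

/* Constrain the MPEG surround mode as described above: a stereo core
 * excludes a mono-based mode such as 515; with a mono core, the mode is
 * 515 or the like when PS is ignored during MPEG surround decoding, and
 * 525 or the like when MPEG surround decoding follows PS decoding. */
static int effective_mps_mode(const SuperframeHeader *h, bool decode_ps_first)
{
    if (h->mps_mode == 0)
        return 0;
    if (h->aac_stereo)
        return 525;
    return decode_ps_first ? 525 : 515;
}

int main(void)
{
    SuperframeHeader h = { false, true, true, 515 };
    printf("ps_flag present: %d\n", ps_flag_is_present(&h));
    printf("MPS mode, PS ignored: %d\n", effective_mps_mode(&h, false));
    printf("MPS mode, PS decoded first: %d\n", effective_mps_mode(&h, true));
    return 0;
}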
[82] Preferably, the inclusion of the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme in the FIC of the broadcast stream comprises defining, in an audio service component type (ASCTy) field, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
[83] Preferably, the alternative coding scheme comprises an alternative audio coding scheme, as described above.
[84] In order to accomplish the above-described aspects, the present invention also provides a data structure of a digital broadcast signal including at least one service component compressed in accordance with an alternative coding scheme and information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme.
[85] Preferably, the information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme is defined for a decoding operation in a broadcast receiver.
[86] Preferably, the information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme is defined in a fast information channel (FIC).
[87] Preferably, the information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme is defined in an audio service component type (ASCTy) field.
[88] Preferably, such an audio service component type (ASCTy) field defines information indicating that at least one service component, which is transmitted, has been compressed in accordance with a specific alternative coding scheme selected from at least one alternative coding scheme.
[89] Preferably, the alternative coding scheme includes an alternative audio coding scheme.
[90] In order to accomplish the above-described aspects, the present invention further provides a digital broadcast receiver for receiving a digital broadcast including a tuner for receiving a broadcast stream containing at least one service component compressed in accordance with an alternative coding scheme, and information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme; a determinator for determining, based on the information, the alternative coding scheme used to compress the at least one service component included in the received broadcast stream and a controller for decoding the at least one service component compressed in accordance with the alternative coding scheme, using a corresponding decoding scheme selected based on the result of the determination by the determinator.
[91] Preferably, the determinator executes the determination, using an FIC decoder and an MSC decoder.
[92] The tuner preferably receives a broadcast stream including an MSC containing the at least one service component, and an FIC containing the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
[93] The broadcast stream, which contains, in the FIC thereof, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, preferably defines the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, in an ASCTy field of the FIC.
[94] Preferably, the alternative coding scheme comprises an alternative audio coding scheme.
[95] Thus, in accordance with the present invention, it is possible to transmit an increased number of audio services in a limited frequency band, and to decode audio services coded in accordance with various coding schemes, for outputting of the audio services.
[96] Hereinafter, preferred embodiments of the present invention capable of concretely accomplishing the above-described aspects will be described with reference to the annexed drawings.
[97] For clear description, the following description will be classified into a brief description of a method for transmitting an increased number of audio services in an MSC in accordance with the present invention (First Embodiment), a description of the concept of an FIC (Second Embodiment), a description of the field value of an ASCTy field added in accordance with the present invention (Third Embodiment), a description of a procedure for decoding a service component (Fourth Embodiment), a description of a method for transmitting and receiving a digital broadcast signal (Fifth Embodiment), and a description of a broadcast receiver for receiving a digital broadcast, and decoding the received digital broadcast in accordance with the present invention (Sixth Embodiment).
[98] Also, the descriptions of the following embodiments will be given in conjunction with a digital broadcast signal transmitting method, a data structure, and a broadcast receiver therefor which are applicable to the case in which an alternative coding scheme is applied to audio signals.
[99] However, the technique of the present invention for alternative coding of broadcast signals may be applied to the case in which a video signal or data signal is transmitted as a broadcast signal, in order to achieve alternative coding of the video signal or data signal. Accordingly, transmission of video signals or data signals after alternative coding thereof using the following embodiments also falls under the scope of the present invention.
[100] In other words, the description of the invention for alternatively coding an audio signal, transmitting the resultant signal, and decoding the transmitted signal is given only for illustrative purposes, and inventions for alternatively coding a video signal or data signal, transmitting the resultant signal, and decoding the transmitted signal also fall under the scope of the present invention.
[101] - First Embodiment -
[102] FIG. 1 is a diagram schematically illustrating fast information channel (FIC) and main service channel (MSC) structures for digital broadcasting according to the present invention.
[103] Hereinafter, a method for transmitting an increased number of audio services in an
MSC in accordance with the present invention will be described in brief. Details of the method according to the present invention can be understood through the second to sixth embodiments which will be described later.
[104] As shown in FIG. 1, in accordance with the present invention, it is possible to transmit, in a main service channel (MSC), not only an audio service compressed in accordance with a MUSICAM audio coding scheme, but also several audio services compressed in accordance with an alternative audio coding scheme.
[105] For reference, in FIG. 1, the audio service compressed in accordance with the
MUSICAM audio coding scheme is referred to as MUSICAM audio (MA), and the audio service compressed in accordance with the alternative audio coding scheme is referred to as alternative audio (AA).
[106] Here, MSC means a channel used to transmit an audio service component, a data service component, or the like. The MSC is a data channel divided into a number of coded sub-channels, namely, a time-interleaved data channel. Each sub-channel transmits one or more service components. The organizations of the sub-channels and service components are referred to as a multiplex configuration.
[107] Where an advanced audio coding (AAC) scheme or a bit sliced arithmetic coding
(BSAC) scheme is used as the alternative audio coding scheme, it is possible to provide a greatly-increased number of audio services, as compared to conventional cases in which audio services are transmitted using only the MUSICAM audio coding scheme.
[108] Furthermore, it is possible to add at least one of a spectral band replication (SBR) scheme and a moving picture experts group (MPEG) surround scheme.
[109] For example, although conventional cases can transmit one DMB service and three audio services compressed using the MUSICAM audio coding scheme in a frequency band of 1.5 MHz, it is possible to transmit one DMB service, one audio service compressed using the MUSICAM audio coding scheme, and three or four audio services compressed using the alternative audio coding scheme in accordance with the present invention. [110] Thus, in accordance with the present invention, one or two audio services can be additionally transmitted. Accordingly, it is possible to achieve an enhancement in audio service transmission efficiency at the side of the broadcasting station, and to give users the opportunity to select from an increased number of audio services.
[111] Of course, it is necessary to add new information to the fast information channel
(FIC), in order to enable the broadcast receiver to decode the newly-added audio services compressed in accordance with the alternative audio coding scheme, and to output the decoded audio services. The FIC, which is changed in accordance with the present invention, will be described in detail in conjunction with the third and fourth embodiments.
[112] The FIC means a channel for enabling the broadcast receiver to more rapidly access various information associated with digital audio broadcasting. For example, the FIC is used to transmit multiplex configuration information (MCI) and service information (SI).
[113] For better understanding of the third and fourth embodiments, the basic concept of the FIC will be described in conjunction with the second embodiment.
[114] - Second Embodiment -
[115] FIG. 2 is a diagram illustrating a structure of a fast information block (FIB) in digital broadcasting.
[116] FIG. 3 is a diagram illustrating a structure of a fast information group (FIG) in digital broadcasting.
[117] FIG. 4 is a diagram illustrating a service organization in the case in which the type of the FIG is 0, and an "Extension" field is 2.
[118] Hereinafter, the concepts of the FIC, FIB, and FIG will be described in brief with reference to FIGs. 2 to 4. Of course, this description is given only for better understanding of the third to sixth embodiments of the present invention.
[119] The FIC shown in FIG. 1 includes FIBs.
[120] As shown in FIG. 2, each FIB consists of 256 bits. The FIB includes an FIB data field and a cyclic redundancy check (CRC).
[121] The FIB data field includes one or more FIGs, an end marker, and padding.
[122] Each FIG has an FIG header consisting of information about FIG type and information about length, and an FIG data field.
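As a non-normative aid, the FIB and FIG structure just described can be walked as in the following C sketch. It assumes the usual layout referred to here, namely a 32-byte (256-bit) FIB whose last two bytes carry the CRC and a one-byte FIG header holding a 3-bit type and a 5-bit length; these constants and the end-marker value are assumptions taken from that convention rather than from this description.

#include <stdint.h>
#include <stdio.h>

enum { FIB_SIZE = 32, FIB_DATA_SIZE = 30 };  /* 256-bit FIB, 2-byte CRC */

/* Walk the FIGs carried in one FIB data field. */
static void walk_figs(const uint8_t fib[FIB_SIZE])
{
    size_t pos = 0;
    while (pos < FIB_DATA_SIZE) {
        uint8_t header = fib[pos];
        if (header == 0xFF)               /* end marker: rest is padding */
            break;
        uint8_t fig_type = header >> 5;   /* upper 3 bits: FIG type      */
        uint8_t fig_len  = header & 0x1F; /* lower 5 bits: data length   */
        printf("FIG type %u, length %u\n", fig_type, fig_len);
        /* A type-0 FIG with extension 2 would carry the basic service
           organization interpreted in the following paragraphs.         */
        pos += 1u + fig_len;
    }
    /* fib[30] and fib[31] carry the CRC for the FIB. */
}

int main(void)
{
    uint8_t fib[FIB_SIZE] = { 0x02, 0x00, 0x00, 0xFF };  /* toy example */
    walk_figs(fib);
    return 0;
}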
[123] Meanwhile, when the type of an FIG is 0, the application of the FIG may mean information about the MCI and SI.
[124] When the FIG type is 0, the FIG data field includes a "Current/next (C/N)" field, an
"other ensemble (OE)" field, a "Program/data(P/D)" field, an "Extension" field, etc, as shown in FIG. 3.
[125] When the FIG type is 0, the "Extension" field may be re-defined by 32 different ap- plications. [126] For example, when the FIG type is 0, and the "Extension" field has a value of 2, a basic service organization is defined. [127] In the basic service organization, as shown in FIG. 4, one field, which is carried in one FIG, includes all service descriptions applied to one service (for example, a service k). [128] "Service Identification (SId)" field consists of 16 or 32 bits, and functions to specify the kind of the associated service that is transmitted. [129] For example, when the "P/D" field has a value of 0, the "SId" field includes a
"Country Id" field and a "Service reference" field. On the other hand, when the "P/
D(Program/Data)" field has a value of 1, the "SId" field includes an "ECC" field, a
"Country Id" field and a "Service reference" field. [130] "Local flag" field is a field indicating whether the service that is transmitted is usable in a specific area served by a specific ensemble or in all areas. [131] "CAId(Conditional Access Identifier)" field is a field for identifying an access control system (ACS) used for the associated service that is transmitted. [132] "Number of service components" field is a field for identifying the number of service components associated with the associated service that is transmitted. [133] "Service component description" field includes a TMId field of 2 bits, etc.
[134] The "TMId" field, which is a 2-bit field, indicates one of the following four cases.
[135] That is, when the "TMId" field has a value of 00, it may indicate an audio mode of an MSC stream mode. [136] When the "TMId" field has a value of 01, it may indicate a data mode of an MSC stream mode. [137] When the "TMId" field has a value of 10, it may indicate an FIC data channel
(FIDC) mode. [138] On the other hand, when the "TMId" field has a value of 11, it may indicate a data mode of a packet mode. [139] Fields associated with the "TMId" field will be described in detail hereinafter in conjunction with the case in which the "TMId" field has a value of 00. [140] Although the following description will be given only in conjunction with the case in which the "TMId" field has a value of 00, the present invention is applicable to the case in which the "TMId" field has a value other than the value of 00. [141] That is, even when the video service component or data service components are transmitted after being coded or compressed using other schemes, they can be decoded by a broadcast receiver, as long as the broadcast receiver is configured in accordance with the present invention. [142] When the "TMId" field has a value of 00, the remaining 14 bits of the "Service component description" field consists of an "ASCTy" field, a "SubChld" field, a "P/S" field, and a "CA flag" field. [143] The "ASCTy(Audio Service Component Type)" field is a field for indicating the type of the associated audio service component. [144] The "SubChId(Sub-channel Identifier)" field is a field for identifying a sub-channel in which the associated service component is transmitted. [145] The "P/S(Primary/Secondary)" field is a field for indicating whether the associated service component that is transmitted is a primary component or a secondary component. [146] The "CA flag" field is a field for indicating whether or not an access control is applied to the associated service component that is transmitted. [147] - Third Embodiment -
[148] FIG. 5 is a table illustrating examples of the value of the added "ASCTy"(Audio
Service Component Type) field. [149] FIG. 6 is a table illustrating another examples of the value of the added
"ASCTy" (Audio Service Component Type) field. [150] FIG. 7 is a table illustrating another examples of the value of the added
"ASCTy" (Audio Service Component Type) field. [151] Hereinafter, the value of the added "ASCTy" field will be re-defined in accordance with the present invention, with reference to FIGs. 5, 6, and 7 (FIG. 4 is also additionally referred to). [152] The reason why the value of the "ASCTy" field is re-defined is to enable the broadcast receiver to decode an audio service component coded in accordance with a scheme other than the MUSICAM audio coding scheme, based on the re-defined
"ASCTy" field value, and thus, to output an audio broadcast. Thus, the broadcast receiver can decode various audio service components, based on the re-defined
"ASCTy" field values. [153] In accordance with the present invention, the "ASCTy" field value is defined such that, if it is 3, 4, or 5 and so on, it means that the associated audio service component has been compressed in accordance with an alternative audio coding scheme other than the MUSICAM scheme, differently from conventional cases. [154] Of course, the above values are construed only for illustrative purposes. The
"ASCTy" field value may be set to other values. [155] The third embodiment may be implemented through, for example, one of the following three methods as shown in FIGs. 5, 6, and 7. [156] The first method is to transmit, in one sub-channel, service components compressed in accordance with a plurality of alternative audio coding schemes. The alternative audio coding schemes may include, for example, AAC, SBR, and MPEG surround schemes. Of course, other audio coding schemes may be taken into consideration. [157] The case in which the "ASCTy" field has a value of '63' (111111) will be omitted from the following description. This case will be separately described later. [158] For example, when the "ASCTy" field has a value of '3' (000011), as shown in FIG.
5, it means transmission of service components compressed in accordance with the AAC audio coding scheme, SBR audio coding scheme, and MPEG surround scheme, and called foreground sound.
[159] The AAC audio coding scheme and SBR audio coding scheme are often collectively called a high efficiency-advanced audio coding (HE-AAC) scheme.
[160] When the "ASCTy" field has a value of '4' (000100), as shown in FIG. 5, it means transmission of service components compressed in accordance with the AAC audio coding scheme, SBR audio coding scheme, and MPEG surround scheme, and called background sound.
[161] On the other hand, when the "ASCTy" field has a value of '5' (000101), as shown in
FIG. 5, it means transmission of service components compressed in accordance with the AAC audio coding scheme, SBR audio coding scheme, and MPEG surround scheme, and called multi-channel audio extension.
[162] However, the service components called multi-channel audio extension may mean additional information or the like for providing further-upgraded audio effects.
[163] For example, the service components called multi-channel audio extension may include information associated with implementation of an additional service such as a 5.1-channel audio service.
[164] The second method is to transmit, in respective sub-channels, service components compressed in accordance with a plurality of alternative audio coding schemes. The alternative audio coding schemes may include, for example, AAC, SBR, and MPEG surround schemes. Of course, other audio coding schemes may be taken into consideration.
[165] The case in which the "ASCTy" field has a value of '63' (111111) will be omitted from the following description. This case will be separately described later.
[166] For example, when the "ASCTy" field has a value of '3' (000011), as shown in FIG.
6, it means transmission of a service component compressed in accordance with the AAC audio coding scheme, and called foreground sound.
[167] When the "ASCTy" field has a value of '4' (000100), as shown in FIG. 6, it means transmission of a service component compressed in accordance with the SBR audio coding scheme, and called background sound.
[168] On the other hand, when the "ASCTy" field has a value of '5' (000101), as shown in
FIG. 6, it means transmission of a service component compressed in accordance with the MPEG surround scheme, and called 'multi-channel audio extension'. [169] The third method is to transmit, in one sub-channel, a part of service components compressed in accordance with a plurality of alternative audio coding schemes, and to transmit, in another sub-channel, the remaining part of the service components. For example, it may be possible to take into consideration a method for transmitting, in one sub-channel, service components compressed in accordance with the AAC and SBR schemes, and transmitting, in another sub-channel, a service component compressed in accordance with the MPEG surround scheme.
[170] The case in which the "ASCTy" field has a value of '63' (111111) will be omitted from the following description. This case will be separately described later.
[171] For example, when the "ASCTy" field has a value of '3' (000011), as shown in FIG.
7, it means transmission of service components compressed in accordance with the AAC audio coding scheme and SBR audio coding scheme, and called foreground sound.
[172] The AAC audio coding scheme and SBR audio coding scheme are often collectively called a high efficiency-advanced audio coding (HE-AAC) scheme.
[173] When the "ASCTy" field has a value of '4' (000100), as shown in FIG. 7, it means transmission of service components compressed in accordance with the AAC audio coding scheme and SBR audio coding scheme, and called background sound.
[174] On the other hand, when the ASCTy field has a value of '5' (000101), as shown in
FIG. 7, it means transmission of a service component compressed in accordance with the MPEG surround scheme, and called multi-channel audio extension.
[175] Meanwhile, when the "ASCTy" field has a value of '63' (111111), as shown in
FIGs. 5, 6, and 7 in common, it means transmission of at least one service component in an MPEG-2 transport stream. Of course, the value of 63 (111111) is construed only for illustrative purposes. The "ASCTy" field may have other values for this definition.
[176] The at least one service component may include at least one of an audio service component, an A/V service component, and a data service component.
[177] That is, the present invention has a feature in that it is possible to define an A/V service component or a data service component in the "ASCTy" field, and thus, to transmit the service component as a digital broadcast.
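By way of a non-normative sketch, the re-defined "ASCTy" interpretation can be expressed as a simple lookup in C. The sketch follows the second method (FIG. 6), in which the illustrative values 3, 4, 5, and 63 denote AAC, SBR, MPEG surround, and an MPEG-2 transport stream, respectively; the enum names and the treatment of all other values as conventional components are assumptions made only for this illustration.

#include <stdio.h>

typedef enum {
    CODING_CONVENTIONAL,   /* e.g. a MUSICAM service component (assumed default) */
    CODING_AAC,            /* value 3: foreground sound, AAC                     */
    CODING_SBR,            /* value 4: background sound, SBR                     */
    CODING_MPEG_SURROUND,  /* value 5: multi-channel audio extension             */
    CODING_MPEG2_TS        /* value 63: component carried in an MPEG-2 TS        */
} ComponentCoding;

/* Map an "ASCTy" value to a coding scheme following the FIG. 6 assignment. */
static ComponentCoding coding_from_ascty(unsigned ascty)
{
    switch (ascty) {
    case 3:  return CODING_AAC;
    case 4:  return CODING_SBR;
    case 5:  return CODING_MPEG_SURROUND;
    case 63: return CODING_MPEG2_TS;
    default: return CODING_CONVENTIONAL;
    }
}

int main(void)
{
    for (unsigned v = 3; v <= 5; ++v)
        printf("ASCTy %u -> scheme %d\n", v, (int)coding_from_ascty(v));
    printf("ASCTy 63 -> scheme %d\n", (int)coding_from_ascty(63));
    return 0;
}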
[178] Now, a procedure for decoding service components compressed in the above- described manner will be schematically described in conjunction with the fourth embodiment.
[179] - Fourth Embodiment -
[180] FIG. 8 is a table illustrating a procedure in which service components are decoded in the case that an addition of an "ASCTy" field as shown in FIG. 6 is made.
[181] Hereinafter, a procedure for decoding at least one service component compressed using different audio coding schemes in the case that an "ASCTy" field value is newly defined, in accordance with the present invention, will be schematically described with reference to FIG. 8 (FIGs. 4 and 6 are also additionally referred to).
[182] The following description will be given in conjunction with the case in which it is assumed that audio coding has been performed for one ensemble in accordance with an "ASCTy" value defined as shown in FIG. 6. Although no description will be given of examples associated with FIGs. 5 and 7, they can be appreciated by those skilled in the art.
[183] Specific numerical values and terms to be described in the following description are construed only for illustrative purposes.
[184] When the "SId" field has a value of '0x1234' as shown in FIG. 8, it may indicate that the associated service is a KBSl broadcasting service.
[185] The KBS1 broadcasting service may include a service component compressed in accordance with the AAC scheme, a service component compressed in accordance with the SBR scheme, and a service component compressed in accordance with the MPEG surround scheme.
[186] Furthermore, as shown in FIG. 8, re-definition may be made for the values of the
"SubChld" field respectively indicating the sub-channels (paths) respectively used to transmit the service component compressed using the AAC scheme, the service component compressed using the SBR scheme, and the service component compressed using the MPEG surround scheme, for the values of the "ASCTy" field respectively indicating the types of the service components, and the values of the "P/S" field indicating whether each service component is a primary component or a secondary component.
[187] Where the broadcast receiver includes only an AAC decoder, it decodes only the service component compressed using the AAC scheme, and outputs the decoded service component. Where the broadcast receiver includes an AAC-SBR decoder, it decodes the service components respectively compressed using the AAC scheme and SBR scheme, and outputs the decoded service components. On the other hand, where the broadcast receiver includes an AAC-SBR (with MPEG surround) decoder, it decodes the service components compressed using the AAC scheme, SBR scheme, and MPEG surround scheme, and outputs the decoded service components.
[188] On the other hand, when the "SId" field has a value of '0x1235' as shown in FIG. 8, it may indicate that the associated service is a KBS2 broadcasting service.
[189] The KBS2 broadcasting service may include a service component compressed in accordance with the AAC scheme and a service component compressed in accordance with the SBR scheme.
[190] In the case in which the "SId" field has a value of '0x1235' there is no service component compressed in accordance with the MPEG surround scheme, differently from the case in which the SId field has a value of '0x1234'
[191] In this case, as shown in FIG. 8, re-definition may be made for the values of the
"SubChld" field respectively indicating the sub-channels (paths) respectively used to transmit the service component compressed using the AAC scheme and the service component compressed using the SBR scheme, for the values of the "ASCTy" field respectively indicating the types of the service components, and the values of the "P/S" field indicating whether each service component is a primary component or a secondary component.
[192] Where the broadcast receiver includes only an AAC decoder, it decodes only the service component compressed using the AAC scheme, and outputs the decoded service component. On the other hand, where the broadcast receiver includes an AAC- SBR decoder, it decodes the service components respectively compressed using the AAC scheme and SBR scheme, and outputs the decoded service components.
[193] On the other hand, when the "SId" field has a value of '0x5678' as shown in FIG. 8, it may indicate that the associated service is an SBSl broadcasting service.
[194] The SBS1 broadcasting service may include a service component compressed in accordance with the AAC scheme and a service component compressed in accordance with the MPEG surround scheme.
[195] In the case in which the "SId" field has a value of '0x5678', there is no service component compressed in accordance with the SBR scheme, differently from the case in which the "SId" field has a value of '0x1235' In this case, however, there is a service component compressed in accordance with the MPEG surround scheme.
[196] Furthermore, as shown in FIG. 8, re-definition may be made for the values of the
"SubChld" field respectively indicating the sub-channels (paths) respectively used to transmit the service component compressed using the AAC scheme and the service component compressed using the MPEG surround scheme, for the values of the "ASCTy" field respectively indicating the types of the service components, and the values of the "P/S" field indicating whether each service component is a primary component or a secondary component.
[197] Where the broadcast receiver includes only an AAC decoder, it decodes only the service component compressed using the AAC scheme, and outputs the decoded service component. On the other hand, where the broadcast receiver includes an AAC- MPEG surround decoder, it decodes the service components compressed using the AAC scheme and MPEG surround scheme, and outputs the decoded service components.
[198] When the "SId" field has a value of '0x5777' as shown in FIG. 8, it may indicate that the associated service is an SBS2 broadcasting service.
[199] The SBS2 broadcasting service may include a service component compressed in accordance with the MUSICAM scheme.
[200] In the case in which the "SId" field has a value of '0x5777' there is only a service component compressed in accordance with the MUSICAM scheme, differently from the above-described cases.
[201] In this case, the service component may be decoded by the existing MUSICAM decoder, and may then be outputted.
[202] In accordance with the present invention, an "ASCTy" field value is added in order to enable transmission of a service component compressed using an alternative audio coding scheme, and to enable the broadcast receiver to decode the transmitted service component. Accordingly, it is possible to transmit and decode service components compressed using existing MUSICAM schemes.
[203] That is, the present invention has an advantage in that it is compatible with transmission and decoding schemes for conventional digital broadcasts.
[204] - Fifth Embodiment -
[205] FIG. 9 is a flowchart illustrating a digital broadcast transmitting method according to the present invention.
[206] FIG. 10 is a flowchart illustrating a digital broadcast receiving method according to the present invention.
[207] The digital broadcast transmitting and receiving methods according to the present invention will be described with reference to FIGs. 9 and 10 (FIGs. 1 to 6 are also additionally referred to).
[208] First, a method for transmitting a digital broadcast in accordance with the present invention will be described.
[209] As shown in FIG. 9, a broadcasting station or the like may insert, into a digital broadcast stream to be transmitted, information indicating that the digital broadcast includes a service component compressed in accordance with an alternative coding scheme, and information indicating that the service component has been compressed in accordance with a specific alternative coding scheme, prior to the transmission of the digital broadcast stream (S701).
[210] The alternative coding scheme may be an alternative audio coding scheme, an alternative video coding scheme, an alternative data coding scheme, or the like.
[211] The alternative audio coding scheme may be, for example, an advanced audio coding (AAC) scheme, a bit sliced arithmetic coding (BSAC) scheme, or the like.
[212] The alternative audio coding scheme may additionally include a spectral band replication (SBR) scheme, a moving picture experts group (MPEG) surround scheme, or the like.
[213] Although not shown, the information about the service components compressed using the above-described alternative audio coding schemes may be transmitted in a fast information channel (FIC). In particular, the service components to be transmitted may be defined in the "ASCTy" field of the FIC.
[214] On the other hand, the service components may be transmitted in a main service channel (MSC).
[215] The broadcasting station or the like may transmit the resultant broadcast stream to a broadcast receiver or the like (S702).
[216] Now, a method for receiving a digital broadcast in accordance with the present invention will be described.
[217] As shown in FIG. 10, the broadcast receiver receives an associated digital broadcast transmitted from the broadcast station or the like (S703).
[218] The broadcast receiver may be an appliance capable of receiving a digital broadcast.
For example, the broadcast receiver may be a television, a mobile phone, a DMB appliance, etc.
[219] The broadcast receiver determines whether or not the audio service component of the received digital broadcast has been compressed in accordance with an alternative audio coding scheme (S704).
[220] The determination (S704) may be achieved by decoding the "ASCTy" field of the
FIC.
[221] When it is determined, based on the result of the determination (S704), that the received audio service component has been compressed in accordance with an alternative audio coding scheme, the audio service component is decoded by a decoder newly added in accordance with the present invention, and is then output (S705).
[222] The newly-added decoder may be, for example, an AAC decoder, an AAC-SBR decoder, an AAC-SBR (with MPEG surround) decoder, etc.
[223] On the other hand, when it is determined, based on the result of the determination
(S704), that the received audio service component has been compressed in accordance with the existing MUSICAM scheme other than an alternative audio coding scheme, the audio service component is decoded by a MUSICAM decoder, and is then output (S706).
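The branch between steps S705 and S706 can be pictured with the following C sketch. It is only a schematic of FIG. 10; the function names and the AudioScheme values are hypothetical, and the actual decoders (MUSICAM, AAC, AAC-SBR, AAC-SBR with MPEG surround) are represented by print statements.

#include <stdio.h>

typedef enum { SCHEME_MUSICAM, SCHEME_AAC, SCHEME_HE_AAC, SCHEME_HE_AAC_MPS } AudioScheme;

static void decode_musicam(const unsigned char *buf, int len)
{ (void)buf; printf("MUSICAM decoder, %d bytes (S706)\n", len); }

static void decode_alternative(const unsigned char *buf, int len, AudioScheme s)
{ (void)buf; printf("alternative decoder %d, %d bytes (S705)\n", (int)s, len); }

/* Route one received audio service component according to the coding type
 * determined from the "ASCTy" field of the FIC (determination step S704). */
static void handle_component(AudioScheme scheme, const unsigned char *buf, int len)
{
    if (scheme == SCHEME_MUSICAM)
        decode_musicam(buf, len);
    else
        decode_alternative(buf, len, scheme);
}

int main(void)
{
    unsigned char frame[4] = { 0 };
    handle_component(SCHEME_HE_AAC_MPS, frame, (int)sizeof frame);
    handle_component(SCHEME_MUSICAM, frame, (int)sizeof frame);
    return 0;
}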
[224] - Sixth Embodiment -
[225] FIG. 11 is a block diagram illustrating a configuration of the broadcast receiver adapted to receive a digital broadcast according to the present invention.
[226] Hereinafter, the broadcast receiver, which can receive a digital broadcast according to the present invention, and can decode the received digital broadcast, will be described with reference to FIG. 11 (FIGs. 1 to 7 are also referred to).
[227] The broadcast receiver 801 according to the present invention includes a user interface 802, a fast information channel (FIC) decoder 803, a controller 804, a tuner 805, a main service channel (MSC) decoder 806, an audio decoder 807, a speaker 808, a data decoder 809, a video decoder 810, and a display device 811. [228] The broadcast receiver 801 may be a television, a mobile phone, or a DMB appliance which can receive a digital broadcast, and can then output the digital broadcast. [229] The user interface 802 functions to transfer, to the controller 804, a command input by the user in association with, for example, channel adjustment, volume adjustment, etc. [230] The tuner 805 designates a desired ensemble, and information about FIC and MSC from the broadcasting station or the like at a frequency corresponding to the designated ensemble, under the control of the controller 804. [231] The FIC decoder 803 receives the FIC information from the tuner 805, and extracts multiplex configuration information (MCI), service information (SI), and an FIC data channel (FIDC) from the FIC information. The FIC decoder 803 also functions to extract configuration information for sorting each service component, and information about the property of the service component. [232] The MSC decoder 806 receives the MSC information from the tuner 805. When a sub-channel is detected, the MSC decoder 806 decodes data transmitted through the sub-channel, based on the MCI and SI information sent from the controller 804, and transfers the decoded data to the audio decoder 807. [233] The audio decoder 807 functions to re-configure the data sent from the MSC decoder 806 to a format enabling outputting of an audio signal from a coded bitstream. [234] In particular, in association with the present invention, the audio decoder 807 may include at least one of an AAC decoder, an AAC-SBR decoder, an AAC-MPEG surround decoder, and an AAC-SBR (with MPEG surround) decoder. Of course, the audio decoder 807 may additionally include a decoder capable of decoding an audio service component coded in a compression scheme other than the above-described schemes. [235] The speaker 808 functions to output the audio service components decoded by the audio decoder 807. [236] The data decoder 809 can function to re-configure service information decoded from the FIC, and desired data from the bitstream received via the MSC decoder 806. [237] When the user of the broadcast receiver 801 or the like selects a video service, the video decoder 810 functions to restore a video, using a compressed video bitstream and information associated therewith. [238] The display device 811 functions to output the image or the like restored by the video decoder 810. [239] The controller 804 functions to systematically control the functions of the user interface 802, FIC decoder 803, tuner 805, MSC decoder 806, audio decoder 807, data decoder 809, video decoder 810, etc.
[240] Hereinafter, the procedure, in which the constituent elements of the broadcast receiver operate to implement the present invention, will be described in more detail.
[241] When a command for selecting a desired digital broadcast is input through the user interface 802, the controller 804 controls the tuner 805 to be tuned to a channel on which the selected digital broadcast is transmitted.
[242] Of course, it may be assumed that the digital broadcast is an audio broadcast including service components compressed in accordance with alternative audio coding schemes, as shown in FIG. 5, 6, or 7.
[243] The alternative audio coding schemes may include, for example, an advanced audio coding (AAC) scheme and a bit sliced arithmetic coding (BSAC) scheme.
[244] The alternative audio coding schemes may additionally include a spectral band replication (SBR) scheme and a moving picture experts group (MPEG) surround scheme.
[245] The FIC information as to the audio broadcast, which includes information associated with the service components compressed in accordance with the alternative audio coding schemes, is sent to the FIC decoder 803 under the control of the controller 804. On the other hand, the MSC information as to the service components is sent to the MSC decoder 806 under the control of the controller 804.
[246] The FIC decoder 803 reads, from the FIC information, the value of the ASCTy field defining the type of an audio service component sent from the tuner 805, and thus, determines the compression type of the audio service component.
[247] The controller 804 receives, from the FIC decoder 803, information as to the compression type of the audio service component sent from the tuner 805, and then controls the audio decoder 807, based on the received information, to determine a desired audio decoder.
[248] For example, when the compression type of the audio service component sent from the tuner 805 corresponds to the AAC scheme, the controller 804 performs a control operation for selecting an AAC decoder as the audio decoder 807.
[249] The audio decoder 807 receives, from the MSC decoder 806, an audio service component compressed in a specific audio coding scheme, and re-configures the received audio service component to a format enabling outputting of the audio service component through the speaker 808.
[250] Audio signals are coded by frames. One or more coded frames form a superframe.
In this case, the superframe has header information. The coding of audio signals can be performed by selectively using the AAC, SBR, parametric stereo (PS), and MPEG surround (MPS) schemes. The identifiers each informing of whether or not an associated one of the above-described codecs was used may be selectively included in the header of the superframe. Alternatively, a part or all of the identifiers may be included in the header of the superframe.
[251] Hereinafter, a signal coding procedure according to the present invention will be described in conjunction with data coding and entropy coding, respectively. The data coding and entropy coding have a co-relation with each other. This will be described in detail later. Various data grouping methods according to the present invention for execution of efficient data coding and entropy coding will also be described. The grouping method, which will be described later, has an independently-effective technical idea irrespective of a specific data coding scheme and a specific entropy coding scheme. A concrete example, to which the data coding and entropy coding according to the present invention are applied, will be described in conjunction with an audio coding method using spatial information (for example, "ISO/IEC 23003, MPEG Surround").
[252] FIG. 12 and FIG. 13 are diagrams of a system according to the present invention.
FIG. 12 shows an encoding apparatus 1 and FIG. 13 shows a decoding apparatus 2.
[253] Referring to FIG. 12, an encoding apparatus 1 according to the present invention includes at least one of a data grouping part 10, a first data encoding part 20, a second data encoding part 31, a third data encoding part 32, an entropy encoding part 40 and a bitstream multiplexing part 50.
[254] Optionally, the second and third data encoding parts 31 and 32 can be integrated into one data encoding part 30. For instance, variable length encoding is performed on data encoded by the second and third data encoding parts 31 and 32 by the entropy encoding part 40. The above elements are explained in detail as follows.
[255] The data grouping part 10 binds input signals by a prescribed unit to enhance data processing efficiency.
[256] For instance, the data grouping part 10 discriminates data according to data types.
And, the discriminated data is encoded by one of the data encoding parts 20, 31 and 32. The data grouping part 10 discriminates some of data into at least one group for the data processing efficiency. And, the grouped data is encoded by one of the data encoding parts 20, 31 and 32. Besides, a grouping method according to the present invention, in which operations of the data grouping part 10 are included, shall be explained in detail with reference to FIGs. 24 to 28 later.
[257] Each of the data encoding parts 20, 31 and 32 encodes input data according to a corresponding encoding scheme. Each of the data encoding parts 20, 31 and 32 adopts at least one of a PCM (pulse code modulation) scheme and a differential coding scheme. In particular, the first data encoding part 20 adopts the PCM scheme, the second data encoding part 31 adopts a first differential coding scheme using a pilot reference value, and the third data encoding part 32 adopts a second differential coding scheme using a difference from neighbor data, for example.
[258] Hereinafter, for convenience of explanation, the first differential coding scheme is named pilot based coding (PBC) and the second differential coding scheme is named differential coding (DIFF). And, operations of the data encoding parts 20, 31 and 32 shall be explained in detail with reference to FIGs. 14 to 19 later.
[259] Meanwhile, the entropy encoding part 40 performs variable length encoding according to statistical characteristics of data with reference to an entropy table 41. And, operations of the entropy encoding part 40 shall be explained in detail with reference to FIGs. 29 to 33 later.
[260] The bitstream multiplexing part 50 arranges and/or converts the coded data to correspond to a transfer specification and then transfers the arranged/converted data in a bitstream form. Yet, if a specific system employing the present invention does not use the bitstream multiplexing part 50, it is apparent to those skilled in the art that the system can be configured without the bitstream multiplexing part 50.
[261] Meanwhile, the decoding apparatus 2 is configured to correspond to the above- explained encoding apparatus 1.
[262] For instance, referring to FIG. 13, a bitstream demultiplexing part 60 receives an inputted bitstream and interprets and classifies various information included in the received bitstream according to a preset format.
[263] An entropy decoding part 70 recovers the data into the original data before entropy encoding using an entropy table 71. In this case, it is apparent that the entropy table 71 is identically configured with the former entropy table 41 of the encoding apparatus 1 shown in FIG. 12.
[264] A first data decoding part 80, a second data decoding part 91 and a third data decoding part 92 perform decoding to correspond to the aforesaid first to third data encoding parts 20, 31 and 32, respectively.
[265] In particular, in case that the second and third data decoding parts 91 and 92 perform differential decoding, it is able to integrate overlapped decoding processes to be handled within one decoding process.
[266] A data reconstructing part 95 recovers or reconstructs data decoded by the data decoding parts 80, 91 and 92 into original data prior to data encoding. Occasionally, the decoded data can be recovered into data resulting from converting or modifying the original data.
[267] By the way, the present invention uses at least two coding schemes together for the efficient execution of data coding and intends to provide an efficient coding scheme using correlation between coding schemes.
[268] And, the present invention intends to provide various kinds of data grouping schemes for the efficient execution of data coding. [269] Moreover, the present invention intends to provide a data structure including the features of the present invention. [270] In applying the technical idea of the present invention to various systems, it is apparent to those skilled in the art that various additional configurations should be used as well as the elements shown in FIG. 12 and FIG. 13. For example, data quantization needs to be executed or a controller is needed to control the above process. [271]
[272] [DATA CODING]
[273] PCM (pulse code modulation), PBC (pilot based coding) and DIFF (differential coding) applicable as data coding schemes of the present invention are explained in detail as follows. Besides, efficient selection and correlation of the data coding schemes shall be subsequently explained as well. [274] 1. PCM (pulse code modulation)
[275] PCM is a coding scheme that converts an analog signal to a digital signal. The PCM samples analog signals with a preset interval and then quantizes a corresponding result.
PCM may be disadvantageous in coding efficiency but can be effectively utilized for data unsuitable for PBC or DIFF coding scheme that will be explained later. [276] In the present invention, the PCM is used together with the PBC or DIFF coding scheme in performing data coding, which shall be explained with reference to FIGs. 20 to 33 later.
[277] 2. PBC (pilot based coding)
[278] 2-1. Concept of PBC
[279] PBC is a coding scheme that determines a specific reference within a discriminated data group and uses the relation between data as a coding target and the determined reference. [280] A value becoming a reference to apply the PBC can be defined as reference value, pilot, pilot reference value or pilot value. Hereinafter, for convenience of explanation, it is named pilot reference value. [281] And, a difference value between the pilot reference value and data within a group can be defined as difference or pilot difference. [282] Moreover, a data group as a unit to apply the PBC indicates a final group having a specific grouping scheme applied by the aforesaid data grouping part 10. Data grouping can be executed in various ways, which shall be explained in detail later. [283] In the present invention, data grouped in the above manner to have a specific meaning is defined as parameter to explain. This is just for convenience of explanation and can be replaced by a different terminology. [284] The PBC process according to the present invention includes at least two steps as follows. [285] First of all, a pilot reference value corresponding to a plurality of parameters is selected. In this case, the pilot reference value is decided with reference to a parameter becoming a PBC target.
[286] For instance, a pilot reference value is set to a value selected from an average value of parameters becoming PBC targets, an approximate value of the average value of the parameters becoming the targets, an intermediate value corresponding to an intermediate level of parameters becoming targets and a most frequently used value among parameters becoming targets. And, a pilot reference value can be set to a preset default value as well. Moreover, a pilot value can be decided by a selection within a preset table.
[287] Alternatively, in the present invention, temporary pilot reference values are set to pilot reference values selected by at least two of the various pilot reference value selecting methods, coding efficiency is calculated for each case, the temporary pilot reference value corresponding to a case having best coding efficiency is then selected as a final pilot reference value.
[288] The approximate value of the average is Ceil[P] or Floor[P] when the average is P.
In this case, Ceil[x] is the minimum integer not less than x and Floor[x] is the maximum integer not exceeding x.
[289] Yet, it is also possible to select an arbitrary fixed default value without referring to parameters becoming PBC targets.
[290] For another instance, as mentioned in the foregoing description, after several values selectable as pilots have been randomly and plurally selected, a value showing the best coding efficiency can be selected as an optimal pilot.
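Before the second step is described, the candidate-and-compare selection of [287] and [290] can be sketched as follows in C. Coding efficiency is approximated here by the sum of absolute pilot differences, which is only a convenient stand-in for the real measure (the number of bits after the entropy coding described later); the candidate list mirrors the options above (floor and ceiling of the average, the intermediate value, a most frequently used value, and a preset default), and all concrete numbers are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Stand-in cost: sum of |x[i] - pilot|; a real encoder would compare the
 * entropy-coded bit count for each temporary pilot reference value. */
static long cost_for_pilot(const int *x, int n, int pilot)
{
    long cost = 0;
    for (int i = 0; i < n; ++i)
        cost += labs((long)x[i] - pilot);
    return cost;
}

/* Return the temporary pilot reference value with the best (lowest) cost. */
static int select_pilot(const int *x, int n, const int *cand, int ncand)
{
    int best = cand[0];
    long best_cost = cost_for_pilot(x, n, best);
    for (int c = 1; c < ncand; ++c) {
        long cost = cost_for_pilot(x, n, cand[c]);
        if (cost < best_cost) { best_cost = cost; best = cand[c]; }
    }
    return best;
}

int main(void)
{
    int x[10] = { 11, 12, 9, 12, 10, 8, 12, 9, 10, 9 };  /* group of parameters */
    /* floor/ceil of the average (10.2), intermediate value, a most
       frequently used value, and a preset default                        */
    int cand[5] = { 10, 11, 10, 12, 0 };
    printf("selected pilot reference value: %d\n", select_pilot(x, 10, cand, 5));
    return 0;
}

With the ten parameter values used in the example that follows, this selection yields a pilot reference value of 10, which is the value assumed in FIG. 15.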
[291] Secondly, a difference value between the selected pilot and a parameter within a group is found. For instance, a difference value is calculated by subtracting a pilot reference value from a parameter value becoming a PBC target. This is explained with reference to FIG. 14 and FIG. 15 as follows.
[292] FIG. 14 and FIG. 15 are diagrams to explain PBC coding according to the present invention.
[293] For instance, it is assumed that a plurality of parameters (e.g., 10 parameters) exist within one group to have the following parameter values, X[n] = 11, 12, 9, 12, 10, 8, 12, 9, 10, 9, respectively.
[294] If a PBC scheme is selected to encode the parameters within the group, a pilot reference value should be selected in the first place. In this example, it can be seen that the pilot reference value is set to 10 in FIG. 15.
[295] As mentioned in the foregoing description, it is able to select the pilot reference value by the various methods of selecting a pilot reference value.
[296] Difference values by PBC are calculated according to Formula 1. [297] [Formula 1]
[298] d[n] = x[n] - P, where n = 0, 1, ..., 9.
[299] In this case, P indicates a pilot reference value (= 10) and x[n] is a target parameter of data coding.
[300] A result of PBC according to Formula 1 corresponds to d[n] = 1, 2, -1, 2, 0, -2, 2, -1, 0, -1. Namely, the result of PBC coding includes the selected pilot reference value and the calculated d[n]. And, these values become targets of entropy coding that will be explained later. Besides, the PBC is more effective in case that deviation of target parameter values is small overall.
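A minimal sketch of Formula 1 applied to the example group above, together with the inverse step a decoder would perform (adding the pilot reference value back to each difference); the function names are illustrative only.

```python
def pbc_encode(params, pilot):
    """Formula 1: d[n] = x[n] - P for every parameter in the group."""
    return [x - pilot for x in params]


def pbc_decode(diffs, pilot):
    """Inverse of Formula 1: x[n] = d[n] + P."""
    return [d + pilot for d in diffs]


x = [11, 12, 9, 12, 10, 8, 12, 9, 10, 9]
P = 10
d = pbc_encode(x, P)
assert d == [1, 2, -1, 2, 0, -2, 2, -1, 0, -1]   # matches the result above
assert pbc_decode(d, P) == x                     # the decoder recovers the group
```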
[301] 2-2. PBC Objects
[302] A target of PBC coding is not specified into one. It is possible to code digital data of various signals by PBC. For instance, it is applicable to audio coding that will be explained later. In the present invention, additional control data processed together with audio data is explained in detail as a target of PBC coding.
[303] The control data is transferred in addition to a downmixed signal of audio and is then used to reconstruct the audio. In the following description, the control data is defined as spatial information or spatial parameter.
[304] The spatial information includes various kinds of spatial parameters such as a channel level difference (hereinafter abbreviated CLD), an inter-channel coherence (hereinafter abbreviated ICC), a channel prediction coefficient (hereinafter abbreviated CPC) and the like.
[305] In particular, the CLD is a parameter that indicates an energy difference between two different channels. For instance, the CLD has a value ranging between -15 and +15. The ICC is a parameter that indicates a correlation between two different channels. For instance, the ICC has a value ranging between 0 and 7. And, the CPC is a parameter that indicates a prediction coefficient used to generate three channels from two channels. For instance, the CPC has a value ranging between -20 and 30.
[306] As a target of PBC coding, a gain value used to adjust a gain of signal, e.g., ADG
(arbitrary downmix gain) can be included.
[307] And, ATD (arbitrary tree data) applied to an arbitrary channel conversion box of a downmixed audio signal can become a PBC coding target. In particular, the ADG is a parameter that is discriminated from the CLD, ICC or CPC. Namely, the ADG corresponds to a parameter for adjusting a gain of audio, unlike the spatial information such as CLD, ICC, CPC and the like extracted from the channels of an audio signal. Yet, in practical use, the ADG or ATD can be processed in the same manner as the aforesaid CLD to raise the efficiency of audio coding.
[308] As another target of PBC coding, a partial parameter can be taken into consideration. In the present invention, a partial parameter means a portion of a parameter.
[309] For instance, assuming that a specific parameter is represented as n bits, the n bits are divided into at least two parts. And, it is able to define the two parts as first and second partial parameters, respectively. In case of performing PBC coding, it is able to find a difference value between a first partial parameter value and a pilot reference value. Yet, the second partial parameter excluded from the difference calculation should be transferred as a separate value.
[310] More particularly, for instance, in case of n bits indicating a parameter value, the least significant bit (LSB) can be defined as the second partial parameter and the parameter value constructed with the remaining (n-1) upper bits can be defined as the first partial parameter. In this case, it is able to perform PBC on the first partial parameter only. This is because coding efficiency can be enhanced due to the small deviations between the first partial parameter values constructed with the (n-1) upper bits.
[311] The second partial parameter excluded in the difference calculation is separately transferred, and is then taken into consideration in reconstructing a final parameter by a decoding part. Alternatively, it is also possible to obtain a second partial parameter by a predetermined scheme instead of transferring the second partial parameter separately.
[312] PBC coding using characteristics of the partial parameters is restrictively utilized according to a characteristic of a target parameter.
[313] For instance, as mentioned in the foregoing description, deviations between the first partial parameters should be small. If the deviation is big, it is unnecessary to utilize the partial parameters. It may even degrade coding efficiency.
[314] According to an experimental result, the CPC parameter of the aforesaid spatial information is suitable for the application of the PBC scheme. Yet, it is not preferable to apply this scheme to the CPC parameter under a coarse quantization scheme, since a coarse quantization scheme increases the deviation between the first partial parameters.
[315] Besides, the data coding using partial parameters is applicable to DIFF scheme as well as PBC scheme.
[316] In case of applying the partial parameter concept to the CPC parameter, a signal processing method and apparatus for reconstruction are explained as follows.
[317] For instance, a method of processing a signal using partial parameters according to the present invention includes the steps of obtaining a first partial parameter using a reference value corresponding to the first partial parameter and a difference value corresponding to the reference value and deciding a parameter using the first partial parameter and a second partial parameter.
[318] In this case, the reference value is either a pilot reference value or a difference reference value. And, the first partial parameter includes partial bits of the parameter and the second partial parameter includes the rest of the bits of the parameter. Moreover, the second partial parameter includes the least significant bit of the parameter. [319] The signal processing method further includes the step of reconstructing an audio signal using the decided parameter.
[320] The parameter is spatial information including at least one of CLD, ICC, CPC and
ADG.
[321] If the parameter is the CPC and if a quantization scale of the parameter is not coarse, it is able to obtain the second partial parameter.
[322] And, a final parameter is decided by multiplying the first partial parameter by two and adding the second partial parameter to the multiplication result.
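The partial-parameter handling described above can be sketched as follows: the LSB is split off as the second partial parameter, PBC is applied only to the upper bits, and the decoder rebuilds each parameter as twice the first partial parameter plus the transferred LSB. The example values and helper names are illustrative assumptions, not a normative procedure.

```python
def split_parameter(value):
    """Split a non-negative integer parameter into its (n-1) upper bits and its LSB."""
    return value >> 1, value & 1   # (first partial parameter, second partial parameter)


def rebuild_parameter(first_partial, second_partial):
    """Final parameter = 2 * first partial parameter + second partial parameter."""
    return (first_partial << 1) | second_partial


params = [21, 22, 19, 24, 20]
pilot = 10                                  # pilot chosen over the upper-bit values
split = [split_parameter(v) for v in params]
diffs = [fp - pilot for fp, _ in split]     # PBC on the first partial parameters only
lsbs = [sp for _, sp in split]              # second partial parameters, sent separately

decoded = [rebuild_parameter(pilot + d, lsb) for d, lsb in zip(diffs, lsbs)]
assert decoded == params
```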
[323] An apparatus for processing a signal using partial parameters according to the present invention includes a first parameter obtaining part obtaining a first partial parameter using a reference value corresponding to the first partial parameter and a difference value corresponding to the reference value and a parameter deciding part deciding a parameter using the first partial parameter and a second partial parameter.
[324] The signal processing apparatus further includes a second parameter obtaining part obtaining the second partial parameter by receiving the second partial parameter.
[325] And, the first parameter obtaining part, the parameter deciding part and the second partial parameter obtaining part are included within the aforesaid data decoding part 91 or 92.
[326] A method of processing a signal using partial parameters according to the present invention includes the steps of dividing a parameter into a first partial parameter and a second partial parameter and generating a difference value using a reference value corresponding to the first partial parameter and the first partial parameter.
[327] And, the signal processing method further includes the step of transferring the difference value and the second partial parameter.
[328] An apparatus for processing a signal using partial parameters according to the present invention includes a parameter dividing part dividing a parameter into a first partial parameter and a second partial parameter and a difference value generating part generating a difference value using a reference value corresponding to the first partial parameter and the first partial parameter.
[329] And, the signal processing apparatus further includes a parameter outputting part transferring the difference value and the second partial parameter.
[330] Moreover, the parameter dividing part and the difference value generating part are included within the aforesaid data encoding part 31 or 32.
[331] 2-3. PBC Conditions
[332] Since PBC coding of the present invention selects a separate pilot reference value and then includes the selected pilot reference value in a bitstream, it is probable that the transmission efficiency of PBC coding becomes lower than that of the DIFF coding scheme that will be explained later. [333] So, the present invention intends to provide an optimal condition for performing PBC coding.
[334] Experimentally, PBC coding is applicable if the number of data becoming targets of data coding within a group is at least three. This follows from considering the efficiency of data coding: if only two data exist within a group, DIFF or PCM coding is more efficient than PBC coding.
[335] Although PBC coding is applicable to at least three or more data, it is preferable that PBC coding is applied to a case that at least five data exist within a group. In other words, a case that PBC coding is most efficiently applicable is a case that there are at least five data becoming targets of data coding and that deviations between the at least five data are small. And, a minimum number of data suitable for the execution of PBC coding will be decided according to a system and coding environment.
[336] Data becoming a target of data coding is given for each data band. This will be explained through a grouping process that will be described later. So, for example, the present invention proposes that at least five data bands are required for the application of PBC coding in MPEG audio surround coding that will be explained later.
[337] Hereinafter, a signal processing method and apparatus using the conditions for the execution of PBC are explained as follows.
[338] In a signal processing method according to one embodiment of the present invention, if the number of data corresponding to a pilot reference value is obtained and if the number of data bands meets a preset condition, the pilot reference value and a pilot difference value corresponding to the pilot reference value are obtained. Subsequently, the data are obtained using the pilot reference value and the pilot difference value. In particular, the number of the data is obtained using the number of the data bands in which the data are included.
[339] In a signal processing method according to another embodiment of the present invention, one of a plurality of data coding schemes is decided using the number of data and the data are decoded according to the decided data coding scheme. A plurality of the data coding schemes include a pilot coding scheme at least. If the number of the data meets a preset condition, the data coding scheme is decided as the pilot coding scheme.
[340] And, the data decoding process includes the steps of obtaining a pilot reference value corresponding to a plurality of the data and a pilot difference value corresponding to the pilot reference value and obtaining the data using the pilot reference value and the pilot difference value.
[341] Moreover, in the signal processing method, the data are parameters. And, an audio signal is recovered using the parameters. In the signal processing method, identification information corresponding to the number of the parameters is received and the number of the parameters is generated using the received identification information. By considering the number of the data, identification information indicating a plurality of the data coding schemes is hierarchically extracted.
[342] In the step of extracting the identification information, a first identification information indicating a first data coding scheme is extracted and a second identification information indicating a second data coding scheme is then extracted using the first identification information and the number of the data. In this case, the first identification information indicates whether it is a DIFF coding scheme. And, the second identification information indicates whether it is a pilot coding scheme or a PCM grouping scheme.
[343] In a signal processing method according to another embodiment of the present invention, if the number of a plurality of data meets a preset condition, a pilot difference value is generated using a pilot reference value corresponding to a plurality of the data and the data. The generated pilot difference value is then transferred. In the signal processing method, the pilot reference value is transferred.
[344] In a signal processing method according to a further embodiment of the present invention, data coding schemes are decided according to the number of a plurality of data. The data are then encoded according to the decided data coding schemes. In this case, a plurality of the data coding schemes include a pilot coding scheme at least. If the number of the data meets a preset condition, the data coding scheme is decided as the pilot coding scheme.
[345] An apparatus for processing a signal according to one embodiment of the present invention includes a number obtaining part obtaining a number of data corresponding to a pilot reference value, a value obtaining part obtaining the pilot reference value and a pilot difference value corresponding to the pilot reference value if the number of the data meets a preset condition, and a data obtaining part obtaining the data using the pilot reference value and the pilot difference value. In this case, the number obtaining part, the value obtaining part and the data obtaining part are included in the aforesaid data decoding part 91 or 92.
[346] An apparatus for processing a signal according to another embodiment of the present invention includes a scheme deciding part deciding one of a plurality of data coding schemes according to a number of a plurality of data and a decoding part decoding the data according to the decided data coding scheme. In this case, a plurality of the data coding schemes include a pilot coding scheme at least.
[347] An apparatus for processing a signal according to a further embodiment of the present invention includes a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data if a number of a plurality of the data meets a preset condition and an output part transferring the generated pilot difference value. In this case, the value generating part is included in the aforesaid data encoding part 31 or 32.
[348] An apparatus for processing a signal according to another further embodiment of the present invention includes a scheme deciding part deciding a data coding scheme according to a number of a plurality of data and an encoding part encoding the data according to the decided data coding scheme. In this case, a plurality of the data coding schemes include a pilot coding scheme at least.
[349] 2-4. PBC Signal Processing Method
[350] A signal processing method and apparatus using PBC coding features according to the present invention are explained as follows.
[351] In a signal processing method according to one embodiment of the present invention, a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value are obtained. Subsequently, the data are obtained using the pilot reference value and the pilot difference value. And, the method may further include a step of decoding at least one of the pilot difference value and the pilot reference value. In this case, the PBC applied data are parameters. And, the method may further include the step of reconstructing an audio signal using the obtained parameters.
[352] An apparatus for processing a signal according to one embodiment of the present invention includes a value obtaining part obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value and a data obtaining part obtaining the data using the pilot reference value and the pilot difference value. In this case, the value obtaining part and the data obtaining part are included in the aforesaid data decoding part 91 or 92.
[353] A method of processing a signal according to another embodiment of the present invention includes the steps of generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data and outputting the generated pilot difference value.
[354] An apparatus for processing a signal according to another embodiment of the present invention includes a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data and an output part outputting the generated pilot difference value.
[355] A method of processing a signal according to a further embodiment of the present invention includes the steps of obtaining a pilot reference value corresponding to a plurality of gains and a pilot difference value corresponding to the pilot reference value and obtaining the gain using the pilot reference value and the pilot difference value. And, the method may further include the step of decoding at least one of the pilot difference value and the pilot reference value. Moreover, the method may further include the step of reconstructing an audio signal using the obtained gain.
[356] In this case, the pilot reference value may be an average of a plurality of the gains, an averaged intermediate value of a plurality of the gains, a most frequently used value of a plurality of the gains, a value set to a default or one value extracted from a table. And, the method may further include the step of selecting the gain having highest encoding efficiency as a final pilot reference value after the pilot reference value has been set to each of a plurality of the gains.
[357] An apparatus for processing a signal according to a further embodiment of the present invention includes a value obtaining part obtaining a pilot reference value corresponding to a plurality of gains and a pilot difference value corresponding to the pilot reference value and a gain obtaining part obtaining the gain using the pilot reference value and the pilot difference value.
[358] A method of processing a signal according to another further embodiment of the present invention includes the steps of generating a pilot difference value using a pilot reference value corresponding to a plurality of gains and the gains and outputting the generated pilot difference value.
[359] And, an apparatus for processing a signal according to another further embodiment of the present invention includes a value calculating part generating a pilot difference value using a pilot reference value corresponding to a plurality of gains and the gains and an outputting part outputting the generated pilot difference value.
[360] 3. DIFF (Differential coding)
[361] DIFF coding is a coding scheme that uses relations between a plurality of data existing within a discriminated data group, which may be called differential coding. In this case, a data group, which is a unit in applying the DIFF, means a final group to which a specific grouping scheme is applied by the aforesaid data grouping part 10. In the present invention, data having a specific meaning as grouped in the above manner is defined as parameter to be explained. And, this is the same as explained for the PBC.
[362] In particular, the DIFF coding scheme is a coding scheme that uses difference values between parameters existing within a same group, and more particularly, difference values between neighbor parameters.
[363] Types and detailed application examples of the DIFF coding schemes are explained in detail with reference to FIGs. 16 to 19 as follows.
[364] 3-1. DIFF Types
[365] FIG. 16 is a diagram to explain types of DIFF coding according to the present invention. DIFF coding is discriminated according to a direction in finding a difference value from a neighbor parameter.
[366] For instance, DIFF coding types can be classified into DIFF in frequency direction
(hereinafter abbreviated DIFF_FREQ or DF) and DIFF in time direction (hereinafter abbreviated DIFF_TIME or DT). [367] Referring to FIG. 16, Group-1 indicates DIFF(DF) calculating a difference value in a frequency axis, while Group-2 or Group-3 calculates a difference value in a time axis. [368] As can be seen in FIG. 16, the DIFF(DT), which calculates a difference value in a time axis, is re-discriminated according to a direction of the time axis to find a difference value. [369] For instance, the DIFF(DT) applied to the Group-2 corresponds to a scheme that finds a difference value between a parameter value at a current time and a parameter value at a previous time (e.g., Group-1). This is called backward time DIFF(DT) (hereinafter abbreviated DT-BACKWARD). [370] For instance, the DIFF(DT) applied to the Group-3 corresponds to a scheme that finds a difference value between a parameter value at a current time and a parameter value at a next time (e.g., Group-4). This is called forward time DIFF(DT) (hereinafter abbreviated DT-FORWARD). [371] Hence, as shown in FIG. 16, the Group-1 is a DIFF(DF) coding scheme, the Group-2 is a DIFF(DT-BACKWARD) coding scheme, and the Group-3 is a DIFF(DT-FORWARD) coding scheme. Yet, a coding scheme of the Group-4 is not decided. [372] In the present invention, although DIFF in frequency axis is defined as one coding scheme (e.g., DIFF(DF)) only, definitions can be made by discriminating it into
DIFF(DF-TOP) and DIFF(DF-BOTTOM) as well. [373] 3-2. Examples of DIFF Applications
[374] FIGs. 17 to 19 are diagrams of examples to which DIFF coding scheme is applied.
[375] In FIG. 17, the Group-1 and the Group-2 shown in FIG. 16 are taken as examples for the convenience of explanation. The Group-1 follows the DIFF(DF) coding scheme and its parameter values are x[n] = 11, 12, 9, 12, 10, 8, 12, 9, 10, 9. The Group-2 follows the DIFF(DT-BACKWARD) coding scheme and its parameter values are y[n] = 10, 13, 8, 11, 10, 7, 14, 8, 10, 8.
[376] FIG. 18 shows results from calculating difference values of the Group-1. Since the Group-1 is coded by the DIFF(DF) coding scheme, difference values are calculated by Formula 2. Formula 2 means that a difference value from a previous parameter is found on a frequency axis.
[377] [Formula 2]
[378] d[0] = x[0]
[379] d[n] = x[n] - x[n-1], where n = 1, 2, ..., 9.
[380] In particular, the DIFF(DF) result of the Group-1 by Formula 2 is d[n] = 11, 1, -3, 3, -2, -2, 4, -3, 1, -1.
[381] FIG. 19 shows results from calculating difference values of the Group-2. Since the Group-2 is coded by the DIFF(DT-BACKWARD) coding scheme, difference values are calculated by Formula 3. Formula 3 means that a difference value from a previous parameter is found on a time axis.
[382] [Formula 3]
[383] d[n] = y[n] - x[n], where n = 0, 1, ..., 9.
[384] In particular, the DIFF(DT-BACKWARD) result of the Group-2 by Formula 3 is d[n] = -1, 1, -1, -1, 0, -1, 2, -1, 0, -1.
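The two worked examples can be reproduced with the short sketch below: Formula 2 differences along the frequency axis within Group-1, and Formula 3 backward-time differences of Group-2 against Group-1. The function names are illustrative assumptions.

```python
def diff_freq(x):
    """Formula 2: keep the first parameter, then difference along the frequency axis."""
    return [x[0]] + [x[n] - x[n - 1] for n in range(1, len(x))]


def diff_time_backward(y, x_prev):
    """Formula 3: difference against the parameters of the previous time slot."""
    return [y[n] - x_prev[n] for n in range(len(y))]


x = [11, 12, 9, 12, 10, 8, 12, 9, 10, 9]   # Group-1
y = [10, 13, 8, 11, 10, 7, 14, 8, 10, 8]   # Group-2

assert diff_freq(x) == [11, 1, -3, 3, -2, -2, 4, -3, 1, -1]
assert diff_time_backward(y, x) == [-1, 1, -1, -1, 0, -1, 2, -1, 0, -1]
```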
[385] 4. Selection for Data coding Scheme
[386] The present invention is characterized in compressing or reconstructing data by mixing various data coding schemes. So, in coding a specific group, it is necessary to select one coding scheme from at least three or more data coding schemes. And, identification information for the selected coding scheme should be delivered to a decoding part via bitstream.
[387] A method of selecting a data coding scheme and a coding method and apparatus using the same according to the present invention are explained as follows.
[388] A method of processing a signal according to one embodiment of the present invention includes the steps of obtaining data coding identification information and data-decoding data according to a data coding scheme indicated by the data coding identification information.
[389] In this case, the data coding scheme includes a PBC coding scheme at least. And, the PBC coding scheme decodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
[390] The data coding scheme further includes a DIFF coding scheme. The DIFF coding scheme corresponds to one of DIFF-DF scheme and DIFF-DT scheme. And, the DIFF-DT scheme corresponds to one of forward time DIFF-DT(FORWARD) scheme and backward time DIFF-DT(BACKWARD).
[391] The signal processing method further includes the steps of obtaining entropy coding identification information and entropy-decoding the data using an entropy coding scheme indicated by the entropy coding identification information.
[392] In the data decoding step, the entropy-decoded data is data-decoded by the data coding scheme.
[393] And, the signal processing method further includes the step of decoding an audio signal using the data as parameters.
[394] An apparatus for processing a signal according to one embodiment of the present invention includes
[395] an identification information obtaining part obtaining data coding identification information and a decoding part data-decoding data according to a data coding scheme indicated by the data coding identification information.
[396] In this case, the data coding scheme includes a PBC coding scheme at least. And, the PBC coding scheme decodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
[397] A method of processing a signal according to another embodiment of the present invention includes the steps of data-encoding data according to a data coding scheme and generating to transfer data coding identification information indicating the data coding scheme.
[398] In this case, the data coding scheme includes a PBC coding scheme at least. The
PBC coding scheme encodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
[399] An apparatus for processing a signal according to another embodiment of the present invention includes an encoding part data-encoding data according to a data coding scheme and an outputting part generating to transfer data coding identification information indicating the data coding scheme.
[400] In this case, the data coding scheme includes a PBC coding scheme at least. The
PBC coding scheme encodes the data using a pilot reference value corresponding to a plurality of data and a pilot difference value. And, the pilot difference value is generated using the data and the pilot reference value.
[401] A method of selecting a data coding scheme and a method of transferring coding selection identification information by optimal transmission efficiency according to the present invention are explained as follows.
[402] 4- 1. Data Coding Identifying Method Considering Frequency of Use
[403] FIG. 20 is a block diagram to explain a relation in selecting one of at least three coding schemes according to the present invention.
[404] Referring to FIG. 20, it is assumed that there exist first to third data encoding parts 53, 52 and 51, that frequency of use of the first data encoding part 53 is lowest, and that frequency of use of the third data encoding part 51 is highest.
[405] For convenience of explanation, with reference to total 100, it is assumed that frequency of use of the first data encoding part 53 is 10, that frequency of use of the second data encoding part 52 is 30, and that frequency of use of the third data encoding part 51 is 60. In particular, for 100 data groups, it can be regarded that the PCM scheme is applied 10 times, the PBC scheme 30 times, and the DIFF scheme 60 times.
[406] On the above assumptions, a number of bits necessary for identification information to identify three kinds of coding schemes is calculated in a following manner.
[407] For example, according to FIG. 20, since 1-bit first information is used, 100 bits are used as the first information to identify coding schemes of total 100 groups. Since the third data encoding part 51 having the highest frequency of use is identified through the 100 bits, the rest of 1-bit second information is able to discriminate the first data encoding part 53 and the second data encoding part 52 using 40 bits only.
[408] Hence, identification information to select the per-group coding type for total 100 data groups needs total 140 bits resulting from first information (100 bits) + second information (40 bits).
[409] FIG. 21 is a block diagram to explain a relation in selecting one of at least three coding schemes according to a related art.
[410] Like FIG. 20, for convenience of explanation, with reference to total 100, it is assumed that frequency of use of the first data encoding part 53 is 10, that frequency of use of the second data encoding part 52 is 30, and that frequency of use of the third data encoding part 51 is 60.
[411] In FIG. 21, a number of bits necessary for identification information to identify three coding scheme types is calculated in a following manner.
[412] First of all, according to FIG. 21, since 1-bit first information is used, 100 bits are used as the first information to identify coding schemes of total 100 groups.
[413] The first data encoding part 53 having the lowest frequency of use is preferentially identified through the 100 bits. So, the rest of 1-bit second information needs total 90 bits more to discriminate the second data encoding part 52 and the third data encoding part 51.
[414] Hence, identification information to select the per-group coding type for total 100 data groups needs total 190 bits resulting from first information (100 bits) + second information (90 bits).
[415] Comparing the case shown in FIG. 20 and the case shown in FIG. 21, it can be seen that the data coding selection identification information shown in FIG. 20 is more advantageous in transmission efficiency.
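The two bit counts can be checked with the small calculation below, which assumes one first-information bit per group plus one second-information bit for every group whose scheme is not resolved by the first bit; the usage counts (10/30/60) follow the example above.

```python
def id_bits(usage_counts, scheme_checked_first):
    """Total identification bits: 1 bit per group, plus 1 more bit for every group
    whose scheme is not the one resolved by the first information."""
    total = sum(usage_counts.values())
    return total + (total - usage_counts[scheme_checked_first])


usage = {"PCM": 10, "PBC": 30, "DIFF": 60}
print(id_bits(usage, "DIFF"))  # 140 bits, as in the FIG. 20 case
print(id_bits(usage, "PCM"))   # 190 bits, as in the FIG. 21 case
```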
[416] Namely, in case that there exist at least three or more data coding schemes, the present invention is characterized in utilizing different identification information instead of discriminating two coding scheme types similar to each other in frequency of use by the same identification information.
[417] For instance, in case that the third data encoding part 51 and the second data encoding part 52, as shown in FIG. 21, are classified by the same identification information, data transmission bits increase to lower transmission efficiency.
[418] In case that there exist at least three data coding types, the present invention is characterized in discriminating the data coding scheme having the highest frequency of use by first information, so that, by second information, the remaining two coding schemes each having low frequency of use are discriminated.
[419] FIG. 22 and FIG. 23 are flowcharts for the data coding selecting scheme according to the present invention, respectively.
[420] In FIG. 22, it is assumed that DIFF coding is a data coding scheme having the highest frequency of use. In FIG. 23, it is assumed that PBC coding is a data coding scheme having the highest frequency of use.
[421] Referring to FIG. 22, a presence or non-presence of DIFF coding having the highest frequency of use is checked (S10). As mentioned in the foregoing description, the check is performed by first information for identification.
[422] As a result of the check, if it is not the DIFF coding, it is checked whether it is PBC coding (S20). This is performed by second information for identification.
[423] In case that frequency of use of DIFF coding is 60 times among total 100 times, identification information for a per-group coding type selection for the same 100 data groups needs total 140 bits of first information (100 bits) + second information (40 bits).
[424] Referring to FIG. 23, like FIG. 22, a presence or non-presence of PBC coding having the highest frequency of use is checked (S30). As mentioned in the foregoing description, the check is performed by first information for identification.
[425] As a result of the check, if it is not the PBC coding, it is checked whether it is DIFF coding (S40). This is performed by second information for identification.
[426] In case that frequency of use of PBC coding is 80 times among total 100 times, identification information for a per-group coding type selection for the same 100 data groups needs total 120 bits of first information (100 bits) + second information (20 bits).
[427] A method of identifying a plurality of data coding schemes and a signal processing method and apparatus using the same according to the present invention are explained as follows.
[428] A method of processing a signal according to one embodiment of the present invention includes the steps of extracting identification information indicating a plurality of data coding schemes hierarchically and decoding data according to the data coding scheme corresponding to the identification information.
[429] In this case, the identification information indicating a PBC coding scheme and a DIFF coding scheme included in a plurality of the data coding schemes is extracted from different layers.
[430] In the decoding step, the data are obtained according to the data coding scheme using a reference value corresponding to a plurality of data and a difference value generated using the data. In this case, the reference value is a pilot reference value or a difference reference value.
[431] A method of processing a signal according to another embodiment of the present invention includes the steps of extracting identification information indicating at least three or more data coding schemes hierarchically. In this case, the identification information indicating two coding schemes having high frequency of use of the identification information is extracted from different layers.
[432] A method of processing a signal according to a further embodiment of the present invention includes the steps of extracting identification information hierarchically according to frequency of use of the identification information indicating a data coding scheme and decoding data according to the data decoding scheme corresponding to the identification information.
[433] In this case, the identification information is extracted in a manner of extracting first identification information and second identification information hierarchically. The first identification information indicates whether it is a first data coding scheme and the second identification information indicates whether it is a second data coding scheme.
[434] The first identification information indicates whether it is a DIFF coding scheme.
And, the second identification information indicates whether it is a pilot coding scheme or a PCM grouping scheme.
[435] The first data coding scheme can be a PCM coding scheme. And, the second data coding scheme can be a PBC coding scheme or a DIFF coding scheme.
[436] The data are parameters, and the signal processing method further includes the step of reconstructing an audio signal using the parameters.
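A decoder-side sketch of the hierarchical extraction described above, for the variant in which the first identification information indicates the DIFF coding scheme and the second identification information separates the pilot (PBC) scheme from the PCM scheme; the bit-reader interface is an assumption made for illustration.

```python
def read_coding_scheme(read_bit):
    """Hierarchically extract a data coding scheme from identification bits.

    read_bit is any callable returning the next identification bit (0 or 1).
    """
    if read_bit():      # first identification information: is it DIFF?
        return "DIFF"
    if read_bit():      # second identification information: PBC or PCM
        return "PBC"
    return "PCM"


bits = iter([1, 0, 1, 0, 0])   # example bitstream: DIFF, then PBC, then PCM
print([read_coding_scheme(lambda: next(bits)) for _ in range(3)])
# ['DIFF', 'PBC', 'PCM']
```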
[437] An apparatus for processing a signal according to one embodiment of the present invention includes an identifier extracting part (e.g., 710 in FIG. 24) hierarchically extracting identification information discriminating a plurality of data coding schemes and a decoding part decoding data according to the data coding scheme corresponding to the identification information.
[438] A method of processing a signal according to another further embodiment of the present invention includes the steps of encoding data according to a data coding scheme and generating identification information discriminating data coding schemes differing from each other in frequency of use used in encoding the data.
[439] In this case, the identification information discriminates a PCM coding scheme and a PBC coding scheme from each other. In particular, the identification information discriminates a PCM coding scheme and a DIFF coding scheme.
[440] And, an apparatus for processing a signal according to another further embodiment of the present invention includes an encoding part encoding data according to a data coding scheme and an identification information generating part (e.g., 400 in FIG. 22) generating identification information discriminating data coding schemes differing from each other in frequency of use used in encoding the data.
[441] 4-2. Inter-Data-Coding Relations
[442] First of all, there exist mutually independent and/or dependent relations between
PCM, PBC and DIFF of the present invention. For instance, it is able to freely select one of the three coding types for each group becoming a target of data coding. So, overall data coding brings a result of using the three coding scheme types in combination with each other. Yet, by considering the frequency of use of the three coding scheme types, a primary selection is made between the DIFF coding scheme having the highest frequency of use and the remaining two coding schemes (e.g., PCM and PBC). Subsequently, one of the PCM and the PBC is secondarily selected. Yet, as mentioned in the foregoing description, this is to consider transmission efficiency of identification information but is not attributed to similarity of substantial coding schemes.
[443] In aspect of similarity of coding schemes, the PBC and DIFF are similar to each other in calculating a difference value. So, coding processes of the PBC and the DIFF are considerably overlapped with each other. In particular, a step of reconstructing an original parameter from a difference value in decoding is defined as delta decoding and can be designed to be handled in the same step.
[444] In the course of executing PBC or DIFF coding, there may exist a parameter deviating from its range. In this case, it is necessary to code and transfer the corresponding parameter by separate PCM.
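Since PBC and DIFF both reconstruct a parameter by adding a difference value to some reference, a decoder can handle them in one delta-decoding step, roughly as sketched below; the scheme tags and function shape are illustrative assumptions only.

```python
def delta_decode(scheme, diffs, pilot=None, prev_group=None):
    """Reconstruct parameters from difference values for PBC and DIFF alike."""
    if scheme == "PBC":                    # x[n] = P + d[n]
        return [pilot + d for d in diffs]
    if scheme == "DIFF_DF":                # running sum along the frequency axis
        out, acc = [], 0
        for d in diffs:
            acc += d
            out.append(acc)
        return out
    if scheme == "DIFF_DT":                # add to the reference time slot
        return [p + d for p, d in zip(prev_group, diffs)]
    raise ValueError(scheme)               # unknown scheme tag


assert delta_decode("PBC", [1, 2, -1], pilot=10) == [11, 12, 9]
assert delta_decode("DIFF_DF", [11, 1, -3]) == [11, 12, 9]
assert delta_decode("DIFF_DT", [-1, 1, -1], prev_group=[11, 12, 9]) == [10, 13, 8]
```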
[445] [Grouping]
[446] 1. Concept of Grouping
[447] The present invention proposes grouping that handles data by binding prescribed data together for efficiency in coding. In particular, in case of PBC coding, since a pilot reference value is selected by a group unit, a grouping process needs to be completed as a step prior to executing the PBC coding. The grouping is applied to DIFF coding in the same manner. And, some schemes of the grouping according to the present invention are applicable to entropy coding as well, which will be explained in a corresponding description part later.
[448] Grouping types of the present invention can be classified into external grouping and internal grouping with reference to an executing method of grouping.
[449] Alternatively, grouping types of the present invention can be classified into domain grouping, data grouping and channel grouping with reference to a grouping target.
[450] Alternatively, grouping types of the present invention can be classified into first grouping, second grouping and third grouping with reference to a grouping execution sequence.
[451] Alternatively, grouping types of the present invention can be classified into single grouping and multiple grouping with reference to a grouping execution count.
[452] Yet, the above grouping classifications are made for convenience in transferring the concept of the present invention, which does not put limitation on its terminologies of use.
[453] The grouping according to the present invention is completed in a manner that various grouping schemes are overlapped with each other in use or used in combination with each other.
[454] In the following description, the grouping according to the present invention is explained by being discriminated into internal grouping and external grouping. Subsequently, multiple grouping, in which various grouping types coexist, will be explained. And, concepts of domain grouping and data grouping will be explained.
[455] 2. Internal Grouping
[456] Internal grouping means that execution of grouping is internally carried out. If internal grouping is carried out in general, a previous group is internally re-grouped to generate a new group or divided groups.
[457] FIG. 24 is a diagram to explain internal grouping according to the present invention.
[458] Referring to FIG. 24, internal grouping according to the present invention is carried out by frequency domain unit (hereinafter named band), for example. So, an internal grouping scheme may correspond to a sort of domain grouping occasionally.
[459] If sampling data passes through a specific filter, e.g., QMF (quadrature mirror filter), a plurality of sub-bands are generated. In the sub-band mode, first frequency grouping is performed to generate first group bands that can be called parameter bands. The first frequency grouping is able to generate parameter bands by binding sub-bands together irregularly. So, it is able to configure sizes of the parameter bands non-equivalently. Yet, according to a coding purpose, it is able to configure the parameter bands equivalently. And, the step of generating the sub-bands can be classified as a sort of grouping.
[460] Subsequently, second frequency grouping is performed on the generated parameter bands to generate second group bands that may be called data bands. The second frequency grouping is able to generate data bands by unifying parameter bands with uniform number.
[461] According to a purpose of the coding after completion of the grouping, it is able to execute coding by parameter band unit corresponding to the first group band or by data band unit corresponding to the second group band.
[462] For instance, in applying the aforesaid PBC coding, it is able to select a pilot reference value (a sort of group reference value) by taking grouped parameter bands as one group or by taking grouped data bands as one group. The PBC is carried out using the selected pilot reference value and detailed operations of the PBC are the same as explained in the foregoing description.
[463] For another instance, in applying the aforesaid DIFF coding, a group reference value is decided by taking grouped parameter bands as one group and a difference value is then calculated. Alternatively, it is also possible to decide a group reference value by taking grouped data bands as one group and to calculate a difference value. And, detailed operations of the DIFF are the same as explained in the foregoing description.
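The two-stage frequency grouping described here can be pictured with the sketch below: sub-band values are first bound into parameter bands of (possibly unequal) preset sizes, and the parameter bands are then unified, a uniform number at a time, into data bands over which a group reference value such as a pilot is chosen. The band sizes and the mean-based pilot are illustrative assumptions.

```python
def group(values, sizes):
    """Bind consecutive values into groups of the given sizes."""
    out, i = [], 0
    for s in sizes:
        out.append(values[i:i + s])
        i += s
    return out


sub_bands = list(range(20))                                # e.g. 20 QMF sub-band values
param_bands = group(sub_bands, [1, 1, 2, 2, 2, 4, 4, 4])   # first frequency grouping
data_bands = group(param_bands, [2, 2, 2, 2])              # second frequency grouping

# One pilot reference value per data band (the mean is just an example choice).
for band in data_bands:
    flat = [v for pb in band for v in pb]
    pilot = round(sum(flat) / len(flat))
    print(len(band), "parameter bands, pilot =", pilot)
```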
[464] If the first and/or second frequency grouping is applied to actual coding, it is necessary to transfer corresponding information, which will be explained with reference to FIG. 34 later.
[465] 3. External Grouping
[466] External grouping means a case that execution of grouping is externally carried out.
If external grouping is carried out in general, a previous group is externally re-grouped to generate a new group or combined groups.
[467] FIG. 25 is a diagram to explain external grouping according to the present invention.
[468] Referring to FIG. 25, external grouping according to the present invention is carried out by time domain unit (hereinafter named timeslot), for example. So, an external grouping scheme may correspond to a sort of domain grouping occasionally.
[469] First time grouping is performed on a frame including sampling data to generate first group timeslots. FIG. 25 exemplarily shows that eight timeslots are generated. The first time grouping has a meaning of dividing a frame into timeslots in equal size as well.
[470] At least one of the timeslots generated by the first time grouping is selected. FIG.
25 shows a case that timeslots 1, 4, 5, and 8 are selected. According to a coding scheme, it is able to select the entire timeslots in the selecting step.
[471] The selected timeslots 1, 4, 5, and 8 are then rearranged into timeslots 1, 2, 3 and 4.
Yet, according to an object of coding, it is able to rearrange the selected timeslots 1, 4, 5, and 8 in part. In this case, since the timeslot(s) excluded from the rearrangement is excluded from final group formation, it is excluded from the PBC or DIFF coding targets.
[472] Second time grouping is performed on the selected timeslots to configure a group handled together on a final time axis.
[473] For instance, timeslots 1 and 2 or timeslots 3 and 4 can configure one group, which is called a timeslot pair. For another instance, timeslots 1, 2 and 3 can configure one group, which is called a timeslot triple. And, a single timeslot is able to exist not to configure a group with another timeslot(s). [474] In case that the first and second time groupings are applied to actual coding, it is necessary to transfer corresponding information, which will be explained with reference to FIG. 34 later.
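A rough sketch of the time-axis grouping just described: a frame is split into timeslots (first time grouping), a subset is selected and rearranged, and the selected slots are then bound into timeslot pairs, with any left-over slot staying single (second time grouping). The slot indices follow the FIG. 25 example; the neighbour-pairing rule is an illustrative assumption.

```python
frame = [f"slot{i}" for i in range(1, 9)]        # first time grouping: 8 timeslots
selected = [frame[i - 1] for i in (1, 4, 5, 8)]  # select timeslots 1, 4, 5 and 8
rearranged = selected                            # rearranged as timeslots 1..4

# Second time grouping: bind neighbouring selected slots into timeslot pairs.
pairs = [tuple(rearranged[i:i + 2]) for i in range(0, len(rearranged) - 1, 2)]
single = rearranged[-1:] if len(rearranged) % 2 else []

print(pairs)   # [('slot1', 'slot4'), ('slot5', 'slot8')]
print(single)  # [] here; an odd count would leave one un-paired timeslot
```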
[475] 4. Multiple Grouping
[476] Multiple grouping means a grouping scheme that generates a final group by mixing the internal grouping, the external grouping and various kinds of other groupings together. As mentioned in the foregoing description, the individual groping schemes according to the present invention can be applied by being overlapped with each other or in combination with each other. And, the multiple grouping is utilized as a scheme to raise efficiency of various coding schemes.
[477] 4-1. Mixing Internal Grouping and External Grouping
[478] FIG. 26 is a diagram to explain multiple grouping according to the present invention, in which internal grouping and external grouping are mixed.
[479] Referring to FIG. 26, final grouped bands 64 are generated after internal grouping has been completed in frequency domain. And, final timeslots 61, 62 and 63 are generated after external grouping has been completed in time domain.
[480] One individual timeslot after completion of grouping is named a data set. In FIG.
26, reference numbers 61a, 61b, 62a, 62b and 63 indicate data sets, respectively.
[481] In particular, two data sets 61a and 61b or another two data sets 62a and 62b are able to configure a pair by external grouping. The pair of the data sets is called a data pair.
[482] After completion of the multiple grouping, PBC or DIFF coding application is executed.
[483] For instance, in case of executing the PBC coding, a pilot reference value P1, P2 or
P3 is selected for the finally completed data pair 61 or 62 or each data set 63 not configuring the data pair. The PBC coding is then executed using the selected pilot reference values.
[484] For instance, in case of executing the DIFF coding, a DIFF coding type is decided for each of the data sets 61a, 61b, 62a, 62b and 63. As mentioned in the foregoing description, a DIFF direction should be decided for each of the data sets and is decided as one of DIFF-DF and DIFF-DT. A process for executing the DIFF coding according to the decided DIFF coding scheme is the same as mentioned in the foregoing description.
[485] In order to configure a data pair by executing external grouping in multiple grouping, equivalent internal grouping should be performed on each of the data sets configuring the data pair.
[486] For instance, each of the data sets 61a and 61b configuring a data pair has the same data band number. And, each of the data sets 62a and 62b configuring a data pair has the same data band number. Yet, there is no problem in that the data sets belonging to different data pairs, e.g., 61a and 62a, respectively may differ from each other in the data band number. This means that different internal grouping can be applied to each data pair.
[487] In case of configuring a data pair, it is able to perform first grouping by internal grouping and second grouping by external grouping.
[488] For instance, a data band number after second grouping corresponds to a prescribed multiplication of a data band number after first grouping. This is because each data set configuring a data pair has the same data band number.
[489] 4-2. Mixing Internal Grouping and Internal Grouping
[490] FIG. 27 and FIG. 28 are diagrams to explain mixed grouping according to another embodiments of the present invention, respectively. In particular, FIG. 27 and FIG. 28 intensively show mixing of internal groupings. So, it is apparent that external grouping is performed or can be performed in FIG. 27 or FIG. 28.
[491] For instance, FIG. 27 shows a case that internal grouping is performed again on a case that data bands are generated after completion of the second frequency grouping. In particular, the data bands generated by the second frequency grouping are divided into low frequency band and high frequency band. In case of specific coding, it is necessary to utilize the low frequency band or the high frequency band separately. In particular, a case of separating the low frequency band and the high frequency band to utilize is called dual mode.
[492] So, in case of dual mode, data coding is performed by taking the finally generated low or high frequency band as one group. For instance, pilot reference values P1 and P2 are generated for low and high frequency bands, respectively and PBC coding is then performed within the corresponding frequency band.
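In dual mode the data bands are simply partitioned into a low-frequency group and a high-frequency group, each coded with its own pilot reference value, roughly as below; the split point and the mean-based pilots P1 and P2 are assumptions made for illustration.

```python
def pbc_encode(values, pilot):
    """Pilot based coding within one band group: d[n] = x[n] - P."""
    return [v - pilot for v in values]


data_bands = [11, 12, 9, 12, 10, 8, 12, 9, 10, 9]
split = len(data_bands) // 2
low, high = data_bands[:split], data_bands[split:]   # dual mode: two band groups

p1 = round(sum(low) / len(low))      # pilot reference value P1 for the low band group
p2 = round(sum(high) / len(high))    # pilot reference value P2 for the high band group

print(p1, pbc_encode(low, p1))       # PBC performed within the low frequency band
print(p2, pbc_encode(high, p2))      # PBC performed within the high frequency band
```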
[493] The dual mode is applicable according to characteristics per channel. So, this is called channel grouping. And, the dual mode is differently applicable according to a data type as well.
[494] For instance, FIG. 28 shows a case that internal grouping is performed again on a case that data bands are generated after completion of the aforesaid second frequency grouping. Namely, the data bands generated by the second frequency grouping are divided into low frequency band and high frequency band. In case of specific coding, the low frequency band is utilized only but the high frequency band needs to be discarded. In particular, a case of grouping the low frequency band to utilize only is called low frequency channel (LFE) mode.
[495] In the low frequency channel (LFE) mode, data coding is performed by taking the finally generated low frequency band as one group.
[496] For instance, a pilot reference value P1 is generated for a low frequency band and
PBC coding is then performed within the corresponding low frequency band. Yet, it is possible to generate new data bands by performing internal grouping on a selected low frequency band. This is to intensively group the low frequency band to represent.
[497] And, the low frequency channel (LFE) mode is applied according to a low frequency channel characteristic and can be called channel grouping.
[498] 5. Domain Grouping and Data Grouping
[499] Grouping can be classified into domain grouping and data grouping with reference to targets of the grouping.
[500] The domain grouping means a scheme of grouping units of domains on a specific domain (e.g., frequency domain or time domain). And, the domain grouping can be executed through the aforesaid internal grouping and/or external grouping.
[501] And, the data grouping means a scheme of grouping data itself. The data grouping can be executed through the aforesaid internal grouping and/or external grouping.
[502] In a special case of data grouping, grouping can be performed to be usable in entropy coding. For instance, the data grouping is used in entropy coding real data in a finally completed grouping state shown in FIG. 26. Namely, data are processed in a manner that two data neighboring to each other in one of frequency direction and time direction are bound together.
[503] Yet, in case that the data grouping is carried out in the above manner, data within a final group are re-grouped in part. So, PBC or DIFF coding is not applied to the data- grouped group (e.g., two data) only. Besides, an entropy coding scheme corresponding to the data grouping will be explained later.
[504] 6. Signal Processing Method Using Grouping
[505] 6-1. Signal Processing Method Using Internal Grouping At Least
[506] A signal processing method and apparatus using the aforesaid grouping scheme according to the present invention are explained as follows.
[507] A method of processing a signal according to one embodiment of the present invention includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group and a difference value corresponding to the group reference value through first grouping and internal grouping for the first grouping and obtaining the data using the group reference value and the difference value.
[508] The present invention is characterized in that a number of the data grouped by the first grouping is greater than a number of the data grouped by the internal grouping. In this case, the group reference value can be a pilot reference value or a difference reference value.
[509] The method according to one embodiment of the present invention further includes the step of decoding at least one of the group reference value and the difference value. In this case, the pilot reference value is decided per the group. [510] And, numbers of the data included in internal groups through the internal grouping are set in advance, respectively. In this case, the numbers of the data included in the internal groups are different from each other.
[511] The first grouping and the internal grouping are performed on the data on a frequency domain. In this case, the frequency domain may correspond to one of a hybrid domain, a parameter band domain, a data band domain and a channel domain.
[512] And, the present invention is characterized in that a first group by the first grouping includes a plurality of internal groups by the internal grouping.
[513] The frequency domain of the present invention is discriminated by a frequency band. The frequency band becomes sub-bands by the internal grouping. The sub-bands become parameter bands by the internal grouping. The parameter bands become data bands by the internal grouping. In this case, a number of the parameter bands can be limited to maximum 28. And, the parameter bands are grouped by 2, 5 or 10 into one data band.
[514] An apparatus for processing a signal according to one embodiment of the present invention includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group and a difference value corresponding to the group reference value through first grouping and internal grouping for the first grouping and a data obtaining part obtaining the data using the group reference value and the difference value.
[515] A method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and internal grouping for the first grouping and the data and transferring the generated difference value.
[516] And, an apparatus for processing a signal according to another embodiment of the present invention includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and internal grouping for the first grouping and the data and an outputting part transferring the generated difference value.
[517] 6-2. Signal Processing Method Using Multiple Grouping
[518] A signal processing method and apparatus using the aforesaid grouping scheme according to the present invention are explained as follows.
[519] A method of processing a signal according to one embodiment of the present invention includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through grouping and a difference value corresponding to the group reference value and obtaining the data using the group reference value and the difference value. [520] In this case, the group reference value can be one of a pilot reference value and a difference reference value.
[521] And, the grouping may correspond to one of internal grouping and external grouping.
[522] Moreover, the grouping may correspond to one of domain grouping and data grouping.
[523] The data grouping is performed on a domain group. And, a time domain included in the domain grouping includes at least one of a timeslot domain, a parameter set domain and a data set domain.
[524] A frequency domain included in the domain grouping may include at least one of a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain and a channel domain.
[525] One difference reference value will be set from a plurality of the data included in the group. And, at least one of a grouping count, a grouping range and a presence or non-presence of the grouping is decided.
[526] An apparatus for processing a signal according to one embodiment of the present invention includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
[527] A method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping and the data and transferring the generated difference value.
[528] An apparatus for processing a signal according to another embodiment of the present invention includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping and the data and an outputting part transferring the generated difference value.
[529] A method of processing a signal according to another embodiment of the present invention includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and a first difference value corresponding to the group reference value and obtaining the data using the group reference value and the first difference value.
[530] In this case, the group reference value may include a pilot reference value or a difference reference value.
[531] The method further includes the step of decoding at least one of the group reference value and the first difference value. And, the first pilot reference value is decided per the group.
[532] The method further includes the steps of obtaining a second pilot reference value corresponding to a plurality of the first pilot reference values and a second difference value corresponding to the second pilot reference value and obtaining the first pilot reference value using the second pilot reference value and the second difference value.
[533] In this case, the second grouping may include external or internal grouping for the first grouping.
[534] The grouping is performed on the data on at least one of a time domain and a frequency domain. In particular, the grouping is a domain grouping that groups at least one of the time domain and the frequency domain.
[535] The time domain may include a timeslot domain, a parameter set domain or a data set domain.
[536] The frequency domain may include a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain or a channel domain. And, the grouped data is an index or parameter.
[537] The first difference value is entropy-decoded using an entropy table indicated by the index included in one group through the first grouping. And, the data is obtained using the group reference value and the entropy-decoded first difference value.
[538] The first difference value and the group reference value are entropy-decoded using an entropy table indicated by the index included in one group through the first grouping. And, the data is obtained using the entropy-decoded group reference value and the entropy-decoded first difference value.
[539] An apparatus for processing a signal according to another embodiment of the present invention includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
[540] A method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and the data and transferring the generated difference value.
[541] An apparatus for processing a signal according to another embodiment of the present invention includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through grouping including first grouping and second grouping and the data and an outputting part transferring the generated difference value.
[542] A method of processing a signal according to another embodiment of the present invention includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and a difference value corresponding to the group reference value and obtaining the data using the group reference value and the difference value.
[543] In this case, a first data number corresponding to a number of the data grouped by the first grouping is smaller than a second data number corresponding to a number of the data grouped by the external grouping. And, the second data number is a multiple of the first data number.
[544] The group reference value may include a pilot reference value or a difference reference value.
[545] The method further includes the step of decoding at least one of the group reference value and the difference value.
[546] The pilot reference value is decoded per the group.
[547] The grouping is performed on the data on at least one of a time domain and a frequency domain. The time domain may include a timeslot domain, a parameter set domain or a data set domain. And, the frequency domain may include a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain or a channel domain.
[548] The method further includes the step of reconstructing the audio signal using the obtained data as parameters. And, the external grouping may include paired parameters.
[549] An apparatus for processing a signal according to another embodiment of the present invention includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
[550] A method of processing a signal according to a further embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and the data and transferring the generated difference value.
[551] And, an apparatus for processing a signal according to a further embodiment of the present invention includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through first grouping and external grouping for the first grouping and the data and an outputting part transferring the generated difference value.
[552] 6-3. Signal Processing Method Using Data Grouping At Least
[553] A signal processing method and apparatus using the aforesaid grouping scheme according to the present invention are explained as follows.
[554] A method of processing a signal according to one embodiment of the present invention includes the steps of obtaining a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and a difference value corresponding to the group reference value and obtaining the data using the group reference value and the difference value.
[555] In this case, a number of the data included in the internal grouping is smaller than a number of the data included in the data grouping. And, the data correspond to parameters.
[556] The internal grouping is performed on a plurality of the data-grouped data entirely.
In this case, the internal grouping can be performed per a parameter band.
[557] The internal grouping can be performed on a plurality of the data-grouped data partially. In this case, the internal grouping can be performed per a channel of each of a plurality of the data-grouped data.
[558] The group reference value can include a pilot reference value or a difference reference value.
[559] The method may further include the step of decoding at least one of the group reference value and the difference value. In this case, the pilot reference value is decided per the group.
[560] The data grouping and the internal grouping are performed on the data on a frequency domain.
[561] The frequency domain may include one of a sample domain, a sub-band domain, a hybrid domain, a parameter band domain, a data band domain and a channel domain. In obtaining the data, grouping information for at least one of the data grouping and the internal grouping is used.
[562] The grouping information includes at least one of a position of each group, a number of each group, a presence or non-presence of applying the group reference value per a group, a number of the group reference values, a codec scheme of the group reference value and a presence or non-presence of obtaining the group reference value.
[563] An apparatus for processing a signal according to one embodiment of the present invention includes a value obtaining part obtaining a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and a difference value corresponding to the group reference value and a data obtaining part obtaining the data using the group reference value and the difference value.
[564] A method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and the data and transferring the generated difference value.
[565] And, an apparatus for processing a signal according to another embodiment of the present invention includes a value generating part generating a difference value using a group reference value corresponding to a plurality of data included in one group through data grouping and internal grouping for the data grouping and the data and an outputting part transferring the generated difference value.
[566]
[567] [Entropy Coding]
[568] 1. Concept of Entropy Coding
[569] Entropy coding according to the present invention means a process for performing variable length coding on a result of the data coding.
[570] In general, entropy coding processes the occurrence probability of specific data in a statistical way. For instance, overall transmission efficiency is raised by allocating fewer bits to data that occur with high probability and more bits to data that occur with low probability.
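For illustration only, the following minimal Python sketch (the symbols, probabilities and code lengths are hypothetical) shows how allocating shorter codewords to more probable values lowers the average number of transmitted bits compared with a fixed-length code.

    # Toy example: shorter codewords for more probable values lower the average bit cost.
    probability = {'A': 0.5, 'B': 0.3, 'C': 0.2}    # assumed occurrence probabilities
    code_length = {'A': 1, 'B': 2, 'C': 2}          # prefix-code lengths ('0', '10', '11')
    fixed_length = 2                                # a fixed-length code needs 2 bits here

    avg_variable = sum(probability[s] * code_length[s] for s in probability)   # 1.5 bits
    avg_fixed = sum(probability[s] * fixed_length for s in probability)        # 2.0 bits
    print(avg_variable, avg_fixed)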
[571] And, the present invention intends to propose an efficient entropy coding method which, unlike general entropy coding, is interconnected with the PBC coding and the DIFF coding.
[572] 1-1. Entropy Table
[573] First of all, a predetermined entropy table is necessary for entropy coding. The entropy table is defined as a codebook. And, an encoding part and a decoding part use the same table.
[574] The present invention proposes an entropy coding method and a unique entropy table to process various kinds of data coding results efficiently.
[575] 1-2. Entropy coding Types (1D/2D)
[576] Entropy coding of the present invention is classified into two types. One is to derive one index (index 1) through one entropy table, and the other is to derive two consecutive indexes (index 1 and index 2) through one entropy table. The former is named 1D (one-dimensional) entropy coding and the latter is named 2D (two-dimensional) entropy coding.
[577] FIG. 29 is an exemplary diagram of 1D and 2D entropy tables according to the present invention. Referring to FIG. 29, an entropy table of the present invention basically includes an index field, a length field and a codeword field.
[578] For instance, if specific data (e.g., pilot reference value, difference value, etc.) is calculated through the aforesaid data coding, the corresponding data (corresponding to an index) has a codeword designated through the entropy table. The codeword turns into a bitstream and is then transferred to a decoding part.
[579] An entropy decoding part, having received the codeword, determines the entropy table used for the corresponding data and then derives an index value using the codeword and the bit length of the codeword within the determined table. In this case, the present invention represents codewords in hexadecimal.
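A minimal sketch of the table lookup described above is given below; the table contents, the binary (rather than hexadecimal) notation and the function name decode_one_index are assumptions made for illustration, not the disclosed tables.

    # Hypothetical entropy table: each entry holds (index, length in bits, codeword).
    # The codewords are written here as binary integers and form a prefix code.
    ENTROPY_TABLE = [
        (0, 1, 0b0),
        (1, 2, 0b10),
        (2, 3, 0b110),
        (3, 3, 0b111),
    ]

    def decode_one_index(bits):
        # bits is a string of '0'/'1'; returns (index, remaining bits).
        for index, length, codeword in ENTROPY_TABLE:
            if len(bits) >= length and int(bits[:length], 2) == codeword:
                return index, bits[length:]
        raise ValueError('no codeword matched')

    print(decode_one_index('1100'))   # -> (2, '0')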
[580] A positive sign (+) or a negative sign (-) of an index value derived by 1D or 2D entropy coding is omitted. So, it is necessary to assign the sign after completion of the 1D or 2D entropy coding.
[581] In the present invention, the sign is assigned differently according to whether 1D or 2D entropy coding is used.
[582] For instance, in case of 1D entropy coding, if a corresponding index is not 0, a separate 1-bit sign bit (e.g., bsSign) is allocated and transferred.
[583] In case of 2D entropy coding, since two indexes are consecutively extracted, whether to allocate a sign bit is decided by a programmed relation between the two extracted indexes. In this case, the program uses the sum of the two extracted indexes, the difference between the two extracted indexes and the largest absolute value (LAV) within the corresponding entropy table. This is able to reduce the number of transmission bits, compared to a case in which a sign bit is allocated to each index in a simple 2D scheme.
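The sign handling can be sketched as follows. The 1D branch follows the bsSign rule stated above; the 2D helper is only a placeholder, since the exact relation programmed between the two indexes and the LAV is not reproduced here.

    def read_sign_1d(index, read_bit):
        # 1D case: a 1-bit sign (e.g., bsSign) is transmitted only for a non-zero index.
        if index != 0 and read_bit():
            return -index
        return index

    # 2D case (placeholder only): how many sign bits follow the pair is derived from
    # the sum and difference of the two indexes and the table's LAV; the exact rule
    # is not reproduced here, so this upper bound is merely illustrative.
    def sign_bits_needed_2d(index1, index2, lav):
        return (1 if index1 != 0 else 0) + (1 if index2 != 0 else 0)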
[584] The 1D entropy table, in which indexes are derived one by one, is usable for all data coding results. Yet, the 2D entropy table, in which two indexes are derived at a time, has restricted use in specific cases.
[585] For instance, if the data coding is not performed on a pair through the aforesaid grouping process, the use of the 2D entropy table is partly restricted. And, use of the 2D entropy table is restricted for a pilot reference value calculated as a result of PBC coding.
[586] Therefore, as mentioned in the foregoing description, entropy coding of the present invention is characterized in utilizing a most efficient entropy coding scheme in a manner that entropy coding is interconnected with the result of data coding. This is explained in detail as follows.
[587] 1-3. 2D Method (Time Pairing/Frequency Pairing)
[588] FIG. 30 is an exemplary diagram of two methods for 2D entropy coding according to the present invention. 2D entropy coding is a process for deriving two indexes neighboring each other. So, the 2D entropy coding can be discriminated according to the direction of the two consecutive indexes.
[589] For instance, a case in which two indexes neighbor each other in the frequency direction is called 2D-Frequency Pairing (hereinafter abbreviated 2D-FP). And, a case in which two indexes neighbor each other in the time direction is called 2D-Time Pairing (hereinafter abbreviated 2D-TP).
[590] Referring to FIG. 30, the 2D-FP and the 2D-TP are able to configure separate index tables, respectively. An encoder has to decide the most efficient entropy coding scheme according to the result of data coding.
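Assuming the data-coded indexes are arranged as a list of time slots, each holding one index per parameter band, the two pairing directions can be sketched as follows (all names are illustrative):

    def pair_frequency(indexes):
        # 2D-FP: pair neighbouring values along the frequency (band) axis within
        # each time slot; an odd last band would be left for 1D coding.
        pairs = []
        for row in indexes:                              # one row per time slot
            for b in range(0, len(row) - 1, 2):
                pairs.append((row[b], row[b + 1]))
        return pairs

    def pair_time(indexes):
        # 2D-TP: pair values of the same band across two neighbouring time slots.
        pairs = []
        for t in range(0, len(indexes) - 1, 2):
            for b in range(len(indexes[t])):
                pairs.append((indexes[t][b], indexes[t + 1][b]))
        return pairs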
[591] A method of deciding entropy coding interconnected with data coding efficiently is explained in the following description.
[592] 1-4. Entropy Coding Signal Processing Method
[593] A method of processing a signal using entropy coding according to the present invention is explained as follows.
[594] In a method of processing a signal according to one embodiment of the present invention, a reference value corresponding to a plurality of data and a difference value corresponding to the reference value are obtained. Subsequently, the difference value is entropy-decoded. The data is then obtained using the reference value and the entropy-decoded difference value.
[595] The method further includes the step of entropy-decoding the reference value. And, the method may further include the step of obtaining the data using the entropy- decoded reference value and the entropy-decoded difference value.
[596] The method can further include the step of obtaining entropy coding identification information. And, the entropy coding is performed according to an entropy coding scheme indicated by the entropy coding identification information.
[597] In this case, the entropy coding scheme is one of a 1D coding scheme and a multi-dimensional coding scheme (e.g., 2D coding scheme). And, the multi-dimensional coding scheme is one of a frequency pair (FP) coding scheme and a time pair (TP) coding scheme.
[598] The reference value may include one of a pilot reference value and a difference reference value.
[599] And, the signal processing method can further include the step of reconstructing the audio signal using the data as parameters.
[600] An apparatus for processing a signal according to one embodiment of the present invention includes a value obtaining part obtaining a reference value corresponding to a plurality of data and a difference value corresponding to the reference value, an entropy decoding part entropy-decoding the difference value, and a data obtaining part obtaining the data using the reference value and the entropy-decoded difference value.
[601] In this case, the value obtaining part is included in the aforesaid bitstream demultiplexing part 60 and the data obtaining part is included within the aforesaid data decoding part 91 or 92.
[602] A method of processing a signal according to another embodiment of the present invention includes the steps of generating a difference value using a reference value corresponding to a plurality of data and the data, entropy-encoding the generated difference value, and outputting the entropy-encoded difference value.
[603] In this case, the reference value is entropy-encoded. The entropy-encoded reference value is transferred.
[604] The method further includes the step of generating an entropy coding scheme used for the entropy encoding. And, the generated entropy coding scheme is transferred.
[605] An apparatus for processing a signal according to another embodiment of the present invention includes a value generating part generating a difference value using a reference value corresponding to a plurality of data and the data, an entropy encoding part entropy-encoding the generated difference value, and an outputting part outputting the entropy-encoded difference value.
[606] In this case, the value generating part is included within the aforesaid data encoding part 31 or 32. And, the outputting part is included within the aforesaid bitstream multiplexing part 50.
[607] A method of processing a signal according to another embodiment of the present invention includes the steps of obtaining data corresponding to a plurality of data coding schemes, deciding an entropy table for at least one of a pilot reference value and a pilot difference value included in the data using an entropy table identifier unique to the data coding scheme, and entropy-decoding at least one of the pilot reference value and the pilot difference value using the entropy table.
[608] In this case, the entropy table identifier is unique to one of a pilot coding scheme, a frequency differential coding scheme and a time differential coding scheme.
[609] And, the entropy table identifier is unique to each of the pilot reference value and the pilot difference value.
[610] The entropy table is unique to the entropy table identifier and includes one of a pilot table, a frequency differential table and a time differential table.
[611] Alternatively, the entropy table is not unique to the entropy table identifier and one of a frequency differential table and a time differential table can be shared.
[612] The entropy table corresponding to the pilot reference value is able to use a frequency differential table. In this case, the pilot reference value is entropy-decoded by the 1D entropy coding scheme.
[613] The entropy coding scheme includes a 1D entropy coding scheme and a 2D entropy coding scheme. In particular, the 2D entropy coding scheme includes a frequency pair (2D-FP) coding scheme and a time pair (2D-TP) coding scheme.
[614] And, the present method is able to reconstruct the audio signal using the data as parameters.
[615] An apparatus for processing a signal according to another embodiment of the present invention includes a value obtaining part obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value and an entropy decoding part entropy-decoding the pilot difference value. And, the apparatus includes a data obtaining part obtaining the data using the pilot reference value and the entropy-decoded pilot difference value.
[616] A method of processing a signal according to a further embodiment of the present invention includes the steps of generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, entropy-encoding the generated pilot difference value, and transferring the entropy-encoded pilot difference value.
[617] In this case, a table used for the entropy encoding may include a pilot dedicated table.
[618] The method further includes the step of entropy-encoding the pilot reference value.
And, the entropy-encoded pilot reference value is transferred.
[619] The method further includes the step of generating an entropy coding scheme used for the entropy encoding. And, the generated entropy coding scheme is transferred.
[620] An apparatus for processing a signal according to a further embodiment of the present invention includes a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, an entropy encoding part entropy-encoding the generated pilot difference value, and an outputting part transferring the entropy-encoded pilot difference value.
[621] 2. Relation to Data Coding
[622] As mentioned in the foregoing description, the present invention has proposed three kinds of data coding schemes. Yet, entropy coding is not performed on data coded according to the PCM scheme. Relations between PBC coding and entropy coding and relations between DIFF coding and entropy coding are separately explained in the following description.
[623] 2-1. PBC Coding and Entropy Coding
[624] FIG. 31 is a diagram of an entropy coding scheme for PBC coding result according to the present invention.
[625] As mentioned in the foregoing description, after completion of PBC coding, one pilot reference value and a plurality of difference values are calculated. And, all of the pilot reference value and the difference values become targets of entropy coding.
[626] For instance, according to the aforesaid grouping method, a group to which PBC coding will be applied is decided. In FIG. 31, for convenience of explanation, a case of a pair on a time axis and a case of non-pair on a time axis are taken as examples. Entropy coding after completion of PBC coding is explained as follows.
[627] First of all, a case 83 in which PBC coding is performed on non-pairs is explained. 1D entropy coding is performed on the one pilot reference value becoming an entropy coding target, and 1D entropy coding or 2D-FP entropy coding can be performed on the remaining difference values.
[628] In particular, since one group exists for one data set on the time axis in case of non-pair, 2D-TP entropy coding cannot be performed. Even if 2D-FP is executed, 1D entropy coding should be performed on a parameter value within a last band 81a failing to configure a pair after pairs of indexes have been derived. Once a per-data entropy coding scheme is decided, a codeword is generated using a corresponding entropy table.
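A minimal sketch of this band handling is shown below, assuming hypothetical helpers encode_1d and encode_2d_fp that return codewords from the corresponding entropy tables:

    def encode_pbc_non_pair(pilot, diffs, encode_1d, encode_2d_fp):
        # encode_1d and encode_2d_fp are hypothetical helpers returning codewords
        # from the corresponding 1D and 2D-FP entropy tables.
        codewords = [encode_1d(pilot)]          # 1D coding of the single pilot reference value
        b = 0
        while b + 1 < len(diffs):               # 2D-FP over the difference values
            codewords.append(encode_2d_fp(diffs[b], diffs[b + 1]))
            b += 2
        if b < len(diffs):                      # odd band count: the last band gets 1D coding
            codewords.append(encode_1d(diffs[b]))
        return codewords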
[629] Since the present invention relates to a case in which one pilot reference value is generated for one group, for example, 1D entropy coding should be performed. Yet, in another embodiment of the present invention, if at least two pilot reference values are generated from one group, it may be possible to perform 2D entropy coding on consecutive pilot reference values.
[630] Secondly, a case 84 of performing PBC coding on pairs is explained as follows.
[631] 1D entropy coding is performed on the one pilot reference value becoming an entropy coding target, and 1D entropy coding, 2D-FP entropy coding or 2D-TP entropy coding can be performed on the remaining difference values.
[632] In particular, since one group exists for two data sets neighboring each other on the time axis in case of pairs, 2D-TP entropy coding can be performed. Even if 2D-FP is executed, 1D entropy coding should be performed on a parameter value within a last band 81b or 81c failing to configure a pair after pairs of indexes have been derived. Yet, as can be confirmed in FIG. 31, in case of applying 2D-TP entropy coding, a last band failing to configure a pair does not exist.
[633] 2-2. DIFF Coding and Entropy Coding
[634] FIG. 32 is a diagram of entropy coding scheme for DIFF coding result according to the present invention.
[635] As mentioned in the foregoing description, after completion of DIFF coding, one reference value and a plurality of difference values are calculated. And, both the reference value and the difference values become targets of entropy coding. Yet, in case of DIFF-DT, a reference value may not exist.
[636] For instance, according to the aforesaid grouping method, a group to which DIFF coding will be applied is decided. In FIG. 32, for convenience of explanation, a case of a pair on a time axis and a case of non-pair on a time axis are taken as examples. And, FIG. 32 shows a case in which a data set as a unit of data coding is discriminated into DIFF-DT in the time axis direction and DIFF-DF in the frequency axis direction according to the DIFF coding direction.
[637] Entropy coding after completion of DIFF coding is explained as follows.
[638] First of all, a case in which DIFF coding is performed on non-pairs is explained. In case of non-pairs, one data set exists on the time axis. And, the data set may become DIFF-DF or DIFF-DT according to the DIFF coding direction.
[639] For instance, if one data set of a non-pair is DIFF-DF (85), the reference value becomes a parameter value within a first band 82a. 1D entropy coding is performed on the reference value, and 1D entropy coding or 2D-FP entropy coding can be performed on the remaining difference values.
[640] Namely, in case of DIFF-DF as well as non-pair, one group for one data set exists on the time axis. So, 2D-TP entropy coding cannot be performed. Even if 2D-FP is executed, after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last parameter band 83a failing to configure a pair. Once a coding scheme is decided for each data, a codeword is generated using a corresponding entropy table.
[641] For instance, in case that one data set of non-pair is DIFF-DT (86), since a reference value does not exist within the corresponding data set, first band processing is not performed. So, 1D entropy coding or 2D-FP entropy coding can be performed on the difference values.
[642] In case of DIFF-DT as well as non-pair, the data set used to find a difference value may be a neighboring data set failing to configure a data pair or a data set within another audio frame.
[643] Namely, in case of DIFF-DT as well as non-pair (86), there exists one group for one data set on the time axis. So, 2D-TP entropy coding cannot be performed. Even if 2D-FP is executed, after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last parameter band failing to configure a pair. Yet, FIG. 32 just shows a case in which a last band failing to configure a pair does not exist, for example.
[644] Once a coding scheme is decided for each data, a codeword is generated using a corresponding entropy table.
[645] Secondly, a case in which DIFF coding is performed on pairs is explained. In case that data coding is performed on pairs, two data sets configure one group on the time axis. And, each of the data sets within the group can become DIFF-DF or DIFF-DT according to the DIFF coding direction. So, it can be classified into a case in which both of the two data sets configuring a pair are DIFF-DF (87), a case in which both of the two data sets configuring a pair are DIFF-DT, and a case in which the two data sets configuring a pair have different coding directions (e.g., DIFF-DF/DT or DIFF-DT/DF), respectively (88).
[646] For instance, in case that both of the two data sets configuring a pair are DIFF-DF (i.e., DIFF-DF/DF) (87), each of the data sets is handled in the same manner as the non-paired DIFF-DF case, and all available entropy coding schemes are executable.
[647] For instance, each reference value within the corresponding data set becomes a parameter value within a first band 82b or 82c, and 1D entropy coding is performed on the reference value. And, 1D entropy coding or 2D-FP entropy coding can be performed on the remaining difference values.
[648] Even if 2D-FP is performed within a corresponding data set, after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last band 83b or 83c failing to configure a pair. Since the two data sets configure a pair, 2D-TP entropy coding can be performed. In this case, 2D-TP entropy coding is sequentially performed on the bands from the band next to the first band 82b or 82c within the corresponding data set to the last band, the first band itself being excluded.
[649] If the 2D-TP entropy coding is performed, a last band failing to configure a pair is not generated.
[650] Once the entropy coding scheme per data is decided, a codeword is generated using a corresponding entropy table.
[651] For instance, in case that both of the two data sets configuring the pair are DIFF-DT
(i.e., DIFF-DT/DT) (89), since a reference value does not exist within a corresponding data set, first band processing is not performed. And, 1D entropy coding or 2D-FP entropy coding can be performed on all the difference values within each of the data sets.
[652] Even if 2D-FP is performed within a corresponding data set, after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last band failing to configure a pair. Yet, FIG. 32 shows an example in which a last band failing to configure a pair does not exist.
[653] Since two data sets configure a pair, 2D-TP entropy coding is executable. In this case, 2D-TP entropy coding is sequentially performed on bands ranging from a first band to a last band within the corresponding data set.
[654] If the 2D-TP entropy coding is performed, a last band failing to configure a pair is not generated.
[655] Once the entropy coding scheme per data is decided, a codeword is generated using a corresponding entropy table.
[656] For instance, there may exist a case in which two data sets configuring a pair have different coding directions, respectively (i.e., DIFF-DF/DT or DIFF-DT/DF) (88). FIG. 32 shows an example of DIFF-DF/DT. In this case, all entropy coding schemes applicable according to the corresponding coding types can basically be performed on each of the data sets.
[657] For instance, in a data set of DIFF-DF among two data sets configuring a pair, 1D entropy coding is performed on a parameter value within a first band 82d, which serves as the reference value within the corresponding data set (DIFF-DF). And, 1D entropy coding or 2D-FP entropy coding can be performed on the remaining difference values.
[658] Even if 2D-FP is performed within the corresponding data set (DIFF-DF), after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last band 83d failing to configure a pair.
[659] For instance, in a data set of DIFF-DT among two data sets configuring a pair, since a reference value does not exist, first band processing is not performed. And, 1D entropy coding or 2D-FP entropy coding can be performed on all difference values within the corresponding data set (DIFF-DT).
[660] Even if 2D-FP is performed within the corresponding data set (DIFF-DT), after pairs of indexes have been derived, 1D entropy coding should be performed on a parameter value within a last band failing to configure a pair. Yet, FIG. 32 shows an example in which a last band failing to configure a pair does not exist.
[661] Since the two data sets configuring the pair have coding directions different from each other, 2D-TP entropy coding is executable. In this case, 2D-TP entropy coding is sequentially performed on the bands from the band next to the first band 82d to the last band, the first band 82d itself being excluded.
[662] If the 2D-TP entropy coding is performed, a last band failing to configure a pair is not generated.
[663] Once the entropy coding scheme per data is decided, a codeword is generated using a corresponding entropy table.
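The case distinctions of sections 2-1 and 2-2 can be summarized by a small selection function; this is only a sketch under the assumption that the relevant inputs are the data coding scheme and whether two data sets form a pair:

    def allowed_entropy_schemes(data_coding, is_pair):
        # data_coding is one of 'PBC', 'DIFF-DF', 'DIFF-DT' (illustrative labels).
        schemes = {'1D', '2D-FP'}              # always available for the difference values
        if is_pair:
            schemes.add('2D-TP')               # time pairing needs two data sets in one group
        has_reference = data_coding in ('PBC', 'DIFF-DF')   # DIFF-DT carries no reference value
        return schemes, has_reference

    # Example: a non-paired DIFF-DT data set allows only 1D and 2D-FP and has no
    # reference value to process first.
    print(allowed_entropy_schemes('DIFF-DT', False))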
[664] 2-3. Entropy Coding and Grouping
[665] As mentioned in the foregoing description, in case of 2D-FP or 2D-TP entropy coding, two indexes are extracted using one codeword. So, this means that a grouping scheme is performed for entropy coding. And, this can be named time grouping or frequency grouping.
[666] For instance, an encoding part groups two indexes extracted in a data coding step in the frequency or time direction.
[667] Subsequently, the encoding part selects one codeword representing the two grouped indexes using an entropy table and then transfers the selected codeword by having it included in a bitstream.
[668] A decoding part receives the one codeword resulting from grouping the two indexes included in the bitstream and then extracts the two index values using the applied entropy table.
[669] 2-4. Signal Processing Method by Relation between Data Coding and Entropy Coding
[670] The features of the signal processing method according to the present invention, based on the relation between PBC coding and entropy coding and the relation between DIFF coding and entropy coding, are explained as follows.
[671] A method of processing a signal according to one embodiment of the present invention includes the steps of obtaining difference information, entropy-decoding the difference information according to an entropy coding scheme including time grouping and frequency grouping, and data-decoding the difference information according to a data decoding scheme including a pilot difference, a time difference and a frequency difference. And, the detailed relations between data coding and entropy coding are the same as explained in the foregoing description.
[672] A method of processing a signal according to another embodiment of the present invention includes the steps of obtaining a digital signal, entropy-decoding the digital signal according to an entropy coding scheme, and data-decoding the entropy-decoded digital signal according to one of a plurality of data coding schemes including a pilot coding scheme at least. In this case, the entropy coding scheme can be decided according to the data coding scheme.
[673] An apparatus for processing a signal according to another embodiment of the present invention includes a signal obtaining part obtaining a digital signal, an entropy decoding part entropy-decoding the digital signal according to an entropy coding scheme, and a data decoding part data-decoding the entropy-decoded digital signal according to one of a plurality of data coding schemes including a pilot coding scheme at least.
[674] A method of processing a signal according to a further embodiment of the present invention includes the steps of data-encoding a digital signal by a data coding scheme, entropy-encoding the data-encoded digital signal by an entropy coding scheme, and transferring the entropy-encoded digital signal. In this case, the entropy coding scheme can be decided according to the data coding scheme.
[675] And, an apparatus for processing a signal according to a further embodiment of the present invention includes a data encoding part data-encoding a digital signal by a data coding scheme and an entropy encoding part entropy-encoding the data-encoded digital signal by an entropy coding scheme. And, the apparatus may further include an outputting part transferring the entropy-encoded digital signal.
[676] 3. Selection for Entropy Table
[677] An entropy table for entropy coding is automatically decided according to a data coding scheme and a type of data becoming an entropy coding target.
[679] For instance, if a data type is a CPC parameter, if data coding is DIFF-DF, and if an entropy coding target is a first band value, the 1D entropy table to which the table name hcodFirstband_CPC is given is used for entropy coding.
[680] For instance, if a data type is an ICC parameter, if a data coding scheme is PBC, and if entropy coding is performed by 2D-TP, the 2D-PC/TP entropy table to which the table name hcod2D_ICC_PC_TP_LL is given is used for entropy coding. In this case, LL within the 2D table name indicates the largest absolute value (hereinafter abbreviated LAV) within the table. And, the largest absolute value (LAV) will be explained later.
[681] For instance, if a data type is an ICC parameter, if a data coding scheme is DIFF-DF, and if entropy coding is performed by 2D-FP, the 2D-FP entropy table to which the table name hcod2D_ICC_DF_FP_LL is given is used for entropy coding.
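Based only on the example names above and on the name hcod2D_CLD_DF_FP_03 used later in section 3-2, the 2D table name could be assembled as sketched below; any naming rule beyond these examples is an assumption:

    def table_name_2d(data_type, data_coding, pairing, lav):
        # data_type: e.g. 'CLD', 'ICC', 'CPC'; data_coding: 'PC' (pilot), 'DF' or 'DT';
        # pairing: 'FP' or 'TP'; lav: the table's largest absolute value.
        return 'hcod2D_%s_%s_%s_%02d' % (data_type, data_coding, pairing, lav)

    assert table_name_2d('CLD', 'DF', 'FP', 3) == 'hcod2D_CLD_DF_FP_03'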
[682] Namely, it is very important to decide which one of a plurality of entropy tables is used to perform entropy coding. And, it is preferable that an entropy table suitable for the characteristics of each data type becoming an entropy coding target is configured independently.
[683] Yet, entropy tables for data having attributes similar to each other can be shared. As a representative example, if a data type is ADG or ATD, the CLD entropy table can be applied. And, a first band entropy table can be applied to a pilot reference value of PBC coding.
[684] A method of selecting an entropy table using the largest absolute value (LAV) is explained in detail as follows.
[685] 3-1. Largest Absolute Value (LAV) of Entropy Table
[686] FIG. 33 is a diagram to explain a method of selecting an entropy table according to the present invention.
[687] A plurality of entropy tables are shown in (a) of FIG. 33, and a table to select the entropy tables is shown in (b) of FIG. 33.
[688] As mentioned in the foregoing description, there exist a plurality of entropy tables according to data coding and data types.
[689] For instance, the entropy tables may include entropy tables (e.g., tables 1 to 4) applicable in case that a data type is xxx, entropy tables (e.g., tables 5 to 8) applicable in case that a data type is yyy, PBC dedicated entropy tables (e.g., tables k to k+1), escape entropy tables (e.g., tables n-2 ~ n-1), and an LAV index entropy table (e.g., table n).
[690] In particular, although it is preferable that a table is configured by giving a codeword to each index that can occur in the corresponding data, doing so considerably increases the size of the table. And, it is inconvenient to manage indexes that are unnecessary or barely occur. In case of a 2D entropy table, those problems become more severe because of the much larger number of possible index combinations. To solve those problems, the largest absolute value (LAV) is used.
[691] For instance, if a range of an index value for a specific data type (e.g., CLD) is between -X and +X (X=15), at least one LAV having a high frequency of occurrence in probability is selected within the range and is configured into a separate table.
[692] For instance, in configuring a CLD entropy table, it is able to provide a table of
LAV=3, a table of LAV=5, a table of LAV=7 or a table of LAV=9.
[693] For instance, in (a) of FIG. 33, it is able to set the table-1 91a to the CLD table of LAV=3, the table-2 91b to the CLD table of LAV=5, the table-3 91c to the CLD table of LAV=7, and the table-4 91d to the CLD table of LAV=9.
[694] Indexes deviating from the LAV range within the LAV table are handled by escape entropy tables (e.g., tables n-2 ~ n-1).
[695] For instance, in performing coding using the CLD table 91c of LAV=7, if an index exceeding the maximum value 7 occurs (e.g., 8, 9, ..., 15), the corresponding index is separately handled by the escape entropy tables (e.g., tables n-2 ~ n-1).
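A sketch of one possible selection rule, assuming the CLD tables of LAV = 3, 5, 7 and 9 shown in FIG. 33 and treating out-of-range indexes as escape cases:

    CLD_LAV_TABLES = {3: 'table-1 (91a)', 5: 'table-2 (91b)', 7: 'table-3 (91c)', 9: 'table-4 (91d)'}

    def choose_cld_table(indexes):
        # One possible selection rule: pick the smallest LAV table covering the
        # largest magnitude in the group; anything larger would go to the escape tables.
        largest = max(abs(i) for i in indexes)
        for lav in sorted(CLD_LAV_TABLES):
            if largest <= lav:
                return CLD_LAV_TABLES[lav], lav
        return 'escape tables (n-2 ~ n-1)', None

    print(choose_cld_table([0, -2, 3]))    # -> ('table-1 (91a)', 3)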
[696] Likewise, it is able to set the LAV table for another data type (e.g., ICC, CPC, etc.) in the same manner as the CLD table. Yet, the LAV for each data type has a different value because the range per data type varies.
[697] For instance, in configuring an ICC entropy table, for example, it is able to provide a table of LAV=1, a table of LAV=3, a table of LAV=5, and a table of LAV=7. In configuring a CPC entropy table, for example, it is able to provide a table of LAV=3, a table of LAV=6, a table of LAV=9, and a table of LAV=12.
[698] 3-2. Entropy Table for LAV Index
[699] The present invention employs an LAV index to select an entropy table using LAV.
Namely, LAV value per data type, as shown in (b) of FIG. 33, is discriminated by LAV index.
[700] In particular, to select an entropy table to be finally used, LAV index per a corresponding data type is confirmed and LAV corresponding to the LAV index is then confirmed. The finally confirmed LAV value corresponds to LL in the configuration of the aforesaid entropy table name.
[701] For instance, if a data type is a CLD parameter, if a data coding scheme is DIFF-
DF, if entropy coding is performed by 2D-FP, and if LAV=3, an entropy table to which the table name hcod2D_CLD_DF_FP_03 is given is used for entropy coding.
[702] In confirming the per data type LAV index, the present invention is characterized in using an entropy table for LAV index separately. This means that LAV index itself is handled as a target of entropy coding.
[703] For instance, the table-n in (a) of FIG. 33 is used as an LAV index entropy table 91e. This is represented as Table 1.
[704] Table 1
[Table 1: LAV index entropy table 91e, giving a codeword length of 1 bit for LAV Index = 0, 2 bits for LAV Index = 1, and 3 bits each for LAV Index = 2 and LAV Index = 3; the table image is not reproduced here.]
[705] This table means that LAV index value itself statistically differs in frequency of use.
[706] For instance, since LAV Index = 0 has the highest frequency of use, one bit is allocated to it. And, two bits are allocated to LAV Index = 1 having the second highest frequency of use. Finally, three bits are allocated to LAV Index = 2 or 3 having low frequency of use.
[707] In case that the LAV Index entropy table 91e is not used, 2-bit identification information should be transferred to discriminate the four kinds of LAV Indexes each time an LAV entropy table is used.
[708] Yet, if the LAV Index entropy table 91e of the present invention is used, it is enough to transfer a 1-bit codeword for the case of LAV Index = 0 having at least 60% frequency of use, for example. So, the present invention is able to raise transmission efficiency higher than that of the related art method.
[709] In this case, the LAV Index entropy table 91e in Table 1 is applied to a case of four kinds of LAV Indexes. And, it is apparent that transmission efficiency can be further enhanced if there are more LAV Indexes.
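The saving claimed above can be checked with a short calculation; only the figure of at least 60% for LAV Index = 0 comes from the description, while the remaining frequencies are assumed for illustration:

    # Assumed relative frequencies; only the "at least 60% for LAV Index = 0" figure
    # comes from the description above, the rest are made up for the calculation.
    freq = {0: 0.60, 1: 0.25, 2: 0.10, 3: 0.05}
    bits = {0: 1, 1: 2, 2: 3, 3: 3}        # codeword lengths from Table 1

    avg_entropy_coded = sum(freq[i] * bits[i] for i in freq)    # 1.55 bits on average
    avg_fixed = 2.0                                             # plain 2-bit identification
    print(avg_entropy_coded, avg_fixed)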
[710] 3-3. Signal Processing Method Using Entropy Table Selection
[711] A signal processing method and apparatus using the aforesaid entropy table selection are explained as follows.
[712] A method of processing a signal according to one embodiment of the present invention includes the steps of obtaining index information, entropy-decoding the index information, and identifying a content corresponding to the entropy-decoded index information.
[713] In this case, the index information is information for indexes having characteristics of frequency of use with probability.
[714] As mentioned in the foregoing description, the index information is entropy-decoded using the index dedicated entropy table 91e.
[715] The content is classified according to a data type and is used for data decoding.
And, the content may become grouping information.
[716] The grouping information is information for grouping of a plurality of data.
[717] And, an index of the entropy table is a largest absolute value (LAV) among indexes included in the entropy table.
[718] Moreover, the entropy table is used in performing 2D entropy decoding on parameters.
[719] An apparatus for processing a signal according to one embodiment of the present invention includes an information obtaining part obtaining index information, a decoding part entropy-decoding the index information, and an identifying part identifying a content corresponding to the entropy-decoded index information.
[720] A method of processing a signal according to another embodiment of the present invention includes the steps of generating index information to identify a content, entropy-encoding the index information, and transferring the entropy-encoded index information.
[721] An apparatus for processing a signal according to another embodiment of the present invention includes an information generating part generating index information to identify a content, an encoding part entropy-encoding the index information, and an information outputting part transferring the entropy-encoded index information.
[722] A method of processing a signal according to another embodiment of the present invention includes the steps of obtaining a difference value and index information, entropy-decoding the index information, identifying an entropy table corresponding to the entropy-decoded index information, and entropy-decoding the difference value using the identified entropy table.
[723] Subsequently, a reference value corresponding to a plurality of data and the decoded difference value are used to obtain the data. In this case, the reference value may include a pilot reference value or a difference reference value.
[724] The index information is entropy-decoded using an index dedicated entropy table.
And, the entropy table is classified according to a type of each of a plurality of the data.
[725] The data are parameters, and the method further includes the step of reconstructing an audio signal using the parameters.
[726] In case of entropy-decoding the difference value, 2D entropy decoding is performed on the difference value using the entropy table.
[727] Moreover, the method further includes the steps of obtaining the reference value and entropy-decoding the reference value using the entropy table dedicated to the reference value.
[728] An apparatus for processing a signal according to another embodiment of the present invention includes an inputting part obtaining a difference value and index information, an index decoding part entropy-decoding the index information, a table identifying part identifying an entropy table corresponding to the entropy-decoded index information, and a data decoding part entropy-decoding the difference value using the identified entropy table.
[729] The apparatus further includes a data obtaining part obtaining data using a reference value corresponding to a plurality of data and the decoded difference value.
[730] A method of processing a signal according to a further embodiment of the present invention includes the steps of generating a difference value using a reference value corresponding to a plurality of data and the data, entropy-encoding the difference value using an entropy table, and generating index information to identify the entropy table.
[731] And, the method further includes the steps of entropy-encoding the index information and transferring the entropy-encoded index information and the difference value.
[732] And, an apparatus for processing a signal according to a further embodiment of the present invention includes a value generating part generating a difference value using a reference value corresponding to a plurality of data and the data, a value encoding part entropy-encoding the difference value using an entropy table, an information generating part generating index information to identify the entropy table, and an index encoding part entropy-encoding the index information. And, the apparatus further includes an information outputting part transferring the entropy-encoded index information and the difference value.
[733]
[734] [DATA STRUCTURE]
[735] A data structure including various kinds of information associated with the aforesaid data coding, grouping and entropy coding according to the present invention is explained as follows.
[736] FIG. 34 is a hierarchical diagram of a data structure according to the present invention.
[737] Referring to FIG. 34, a data structure according to the present invention includes a header 100 and a plurality of frames 101 and 102. Configuration information applied to the lower frames 101 and 102 in common is included in the header 100. And, the configuration information includes grouping information utilized for the aforesaid grouping.
[738] For instance, the grouping information includes a first time grouping information
100a, a first frequency grouping information 100b and a channel grouping information 100c.
[739] Besides, the configuration information within the header 100 is called main configuration information and an information portion recorded in the frame is called payload.
[740] In particular, a case of applying the data structure of the present invention to audio spatial information is explained in the following description for example.
[741] First of all, the first time grouping information 100a within the header 100 becomes bsFrameLength field that designates a number of timeslots within a frame.
[742] The first frequency grouping information 100b becomes bsFreqRes field that designates a number of parameter bands within a frame.
[743] The channel grouping information 100c means the OttmodeLFE-bsOttBands field and the bsTttDualmode-bsTttBandsLow field. The OttmodeLFE-bsOttBands field is the information designating a number of parameter bands applied to the LFE channel. And, the bsTttDualmode-bsTttBandsLow field is the information designating a number of parameter bands of a low frequency band within a dual mode having both low and high frequency bands. Yet, the bsTttDualmode-bsTttBandsLow field can be classified not as channel grouping information but as frequency grouping information.
[744] Each of the frames 101 and 102 includes a frame information (Frame Info) 101a applied to all groups within a frame in common and a plurality of groups 101b and 101c.
[745] The frame information 101a includes a time selection information 103a, a second time grouping information 103b and a second frequency grouping information 103c. Besides, the frame information 101a is called sub-configuration information applied to each frame.
[746] In detail, a case of applying the data structure of the present invention to audio spatial information is explained in the following description, for example.
[747] The time selection information 103a within the frame information 101a includes the bsNumParamset field, the bsParamslot field and the bsDataMode field.
[748] The bsNumParamset field is information indicating a number of parameter sets existing within an entire frame.
[749] And, the bsParamslot field is information designating a position of a timeslot where a parameter set exists.
[750] Moreover, the bsDataMode field is information designating an encoding and decoding processing method of each parameter set.
[751] For instance, in case of bsDataMode=0 (e.g., default mode) of a specific parameter set, a decoding part replaces the corresponding parameter set by a default value.
[752] In case of bsDataMode=1 (e.g., previous mode) of a specific parameter set, a decoding part maintains a decoding value of a previous parameter set.
[753] In case of bsDataMode=2 (e.g., interpolation mode) of a specific parameter set, a decoding part calculates a corresponding parameter set by interpolation between parameter sets.
[754] Finally, in case of bsDataMode=3 (e.g., read mode) of a specific parameter set, it means that coding data for a corresponding parameter set is transferred. So, a plurality of the groups 101b and 101c within a frame are groups configured with data transferred in case of bsDataMode=3 (e.g., read mode). Hence, the decoding part decodes the data with reference to coding type information within each of the groups.
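A sketch of how a decoding part might act on the four bsDataMode values is given below; the helper names (default_value, interpolate, read_coded_set) are hypothetical and stand in for the corresponding decoding steps:

    def resolve_parameter_set(mode, previous_set, default_value, interpolate, read_coded_set):
        # bsDataMode semantics as described above; helper names are illustrative.
        if mode == 0:                       # default mode: replace by a default value
            return default_value()
        if mode == 1:                       # previous mode: keep the previously decoded set
            return previous_set
        if mode == 2:                       # interpolation mode: interpolate between sets
            return interpolate()
        if mode == 3:                       # read mode: coded data for this set is in the group
            return read_coded_set()
        raise ValueError("unknown bsDataMode value: %r" % mode)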
[755] A signal processing method and apparatus using the bsDataMode field according to one embodiment of the present invention are explained in detail as follows.
[756] A method of processing a signal using the bsDataMode field according to one embodiment of the present invention includes the steps of obtaining mode information, obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value according to data attribute indicated by the mode information, and obtaining the data using the pilot reference value and the pilot difference value.
[757] In this case, the data are parameters, and the method further includes the step of reconstructing an audio signal using the parameters.
[758] If the mode information indicates a read mode, the pilot difference value is obtained.
[759] The mode information further includes at least one of a default mode, a previous mode and an interpolation mode.
[760] And, the pilot difference value is obtained per group band.
[761] Moreover, the signal processing method uses a first parameter (e.g., dataset) to identify a number of the read modes and a second parameter (e.g., setidx) to obtain the pilot difference value based on the first parameter.
[762] An apparatus for processing a signal using the bsDataMode field according to one embodiment of the present invention includes an information obtaining part obtaining mode information, a value obtaining part obtaining a pilot reference value corresponding to a plurality of data and a pilot difference value corresponding to the pilot reference value according to data attribute indicated by the mode information, and a data obtaining part obtaining the data using the pilot reference value and the pilot difference value.
[763] And, the information obtaining part, the value obtaining part and the data obtaining part are provided within the aforesaid data decoding part 91 or 92.
[764] A method of processing a signal using the bsDataMode field according to another embodiment of the present invention includes the steps of generating mode information indicating attribute of data, generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, and transferring the generated difference value. And, the method further includes the step of encoding the generated difference value.
[765] An apparatus for processing a signal using the bsDataMode field according to another embodiment of the present invention includes an information generating part generating mode information indicating attribute of data, a value generating part generating a pilot difference value using a pilot reference value corresponding to a plurality of data and the data, and an outputting part transferring the generated difference value. And, the value generating part is provided within the aforesaid data encoding part 31 or 32.
[766] The second time grouping information 103b within the frame information 101a includes bsDatapair field. The bsDatapair field is information that designates a presence or non-presence of a pair between data sets designated by the bsDataMode=3. In particular, two data sets are grouped into one group by the bsDatapair field.
[767] The second frequency grouping information within the frame information 101a includes the bsFreqResStride field. The bsFreqResStride field is the information to second-group the parameter bands first-grouped by the bsFreqRes field as the first frequency grouping information 100b. Namely, a data band is generated by binding parameter bands amounting to the stride designated by the bsFreqResStride field. So, parameter values are given per data band.
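A sketch of this second-stage frequency grouping, assuming the parameter bands are simply collected in consecutive runs of length bsFreqResStride (the function name is illustrative):

    def group_parameter_bands(num_parameter_bands, stride):
        # Bind 'stride' consecutive parameter bands into one data band; one parameter
        # value is then given per data band.
        data_bands = []
        for start in range(0, num_parameter_bands, stride):
            data_bands.append(list(range(start, min(start + stride, num_parameter_bands))))
        return data_bands

    print(len(group_parameter_bands(28, 5)))   # 28 parameter bands, stride 5 -> 6 data bands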
[768] Each of the groups 101b and 101c includes data coding type information 104a, entropy coding type information 104b, codeword 104c and side data 104d.
[769] In detail, a case of applying the data structure of the present invention to audio spatial information is explained as follows, for example.
[770] First of all, the data coding type information 104a within each of the groups 101b and 101c includes the bsPCMCoding field, the bsPilotCoding field, the bsDiffType field and the bsDiffTimeDirection field.
[771] The bsPCMCoding field is information to identify whether data coding of the corresponding group is PCM scheme or DIFF scheme.
[772] Only if the bsPCMCoding field designates the PCM scheme, a presence or non-presence of the PBC scheme is designated by the bsPilotCoding field.
[773] The bsDiffType field is information to designate a coding direction in case that DIFF scheme is applied. And, the bsDiffType field designates either DF: DIFF-FREQ or DT: DIFF-TIME.
[774] And, the bsDiffTimeDirection field is information to designate whether a coding direction on a time axis is FORWARD or BACKWARD in case that the bsDiffType field is DT.
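As an illustration of how a decoder might branch on these data coding type fields, the following Python sketch dispatches on a dictionary of flag values. The field names follow the text above, but the flag polarities (for example, which bsPCMCoding value means PCM) and the dictionary representation are assumptions for the example, not the normative bitstream syntax.

```python
# Hedged sketch of branching on bsPCMCoding, bsPilotCoding, bsDiffType and
# bsDiffTimeDirection. Flag values and their meanings are illustrative.

def select_data_coding_scheme(flags):
    if flags["bsPCMCoding"] == 0:                  # assumed: 0 means DIFF
        if flags["bsDiffType"] == "DF":
            return "DIFF_FREQ"
        direction = flags["bsDiffTimeDirection"]   # "FORWARD" or "BACKWARD"
        return f"DIFF_TIME_{direction}"
    # PCM branch: bsPilotCoding distinguishes plain PCM from PBC
    return "PBC" if flags["bsPilotCoding"] == 1 else "PCM"

print(select_data_coding_scheme(
    {"bsPCMCoding": 0, "bsDiffType": "DT", "bsDiffTimeDirection": "FORWARD"}))
# -> DIFF_TIME_FORWARD
```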
[775] The entropy coding type information 104b within each of the groups 101b and 101c includes bsCodingScheme field and bsPairing field.
[776] The bsCodingScheme field is the information to designate whether entropy coding is 1D or 2D.
[777] And, the bsPairing field is the information whether a direction for extracting two indexes is a frequency direction (FP: Frequency Pairing) or a time direction (TP: Time Pairing) in case that the bsCodingScheme field designates 2D.
[778] The codeword 104c within each of the groups 101b and 101c includes bsCodeW field. And, the bsCodeW field designates a codeword on a table applied for entropy coding. So, most of the aforesaid data become targets of entropy coding. In this case, they are transferred by the bsCodeW field. For instance, a pilot reference value and LAV Index value of PBC coding, which become targets of entropy coding, are transferred by the bsCodeW field.
[779] The side data 104d within each of the groups 101b and 101c includes bsLsb field and bsSign field. In particular, the side data 104d includes other data, which are not entropy-coded and thus not transferred by the bsCodeW field, as well as the bsLsb field and the bsSign field.
[780] The bsLsb field is a field applied to the aforesaid partial parameter and is the side information transferred only if a data type is CPC and in case of non-coarse quantization.
[781] And, the bsSign field is the information to designate a sign of an index extracted in case of applying 1D entropy coding.
[782] Moreover, data transferred by PCM scheme are included in the side data 104d.
[783] Features of the signal processing data structure according to the present invention are explained as follows.
[784] First of all, a signal processing data structure according to the present invention includes a payload part having at least one of data coding information, including pilot coding information at least per frame, and entropy coding information, and a header part having main configuration information for the payload part.
[785] The main configuration information includes a first time information part having time information for entire frames and a first frequency information part having frequency information for the entire frames.
[786] And, the main configuration information further includes a first internal grouping information part having information for internal-grouping a random group including a plurality of data per frame.
[787] The frame includes a first data part having at least one of the data coding information and the entropy coding information and a frame information part having sub-configuration information for the first data part.
[788] The sub-configuration information includes a second time information part having time information for entire groups. And, the sub-configuration information further includes an external grouping information part having information for external grouping for a random group including a plurality of data per the group. Moreover, the sub-configuration information further includes a second internal grouping information part having information for internal-grouping the random group including a plurality of the data.
[789] Finally, the group includes the data coding information having information for a data coding scheme, the entropy coding information having information for an entropy coding scheme, a reference value corresponding to a plurality of data, and a second data part having a difference value generated using the reference value and the data.
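The hierarchy summarized in the preceding paragraphs (header, frame, group) can be pictured with a small, purely illustrative Python model. The class and field names are chosen for readability under the terminology of the text and are an assumption, not the normative syntax of the bitstream.

```python
# Compact illustrative model of the data structure: header -> frame -> group.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Group:
    data_coding_info: dict          # e.g. PCM / PBC / DIFF selection
    entropy_coding_info: dict       # e.g. 1D / 2D (frequency pair, time pair)
    pilot_reference: int            # reference value for the group's data
    differences: List[int]          # difference values generated with it

@dataclass
class Frame:
    frame_info: dict                # sub-configuration (second time info, grouping)
    groups: List[Group] = field(default_factory=list)

@dataclass
class Payload:
    header: dict                    # main configuration (first time/frequency info)
    frames: List[Frame] = field(default_factory=list)
```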
[790] [791] [APPLICATION TO AUDIO CODING (MPEG SURROUND)]
[792] An example of unifying the aforesaid concepts and features of the present invention is explained as follows.
[793] FIG. 35 is a block diagram of an apparatus for audio compression and recovery according to one embodiment of the present invention.
[794] Referring to FIG. 35, an apparatus for audio compression and recovery according to one embodiment of the present invention includes an audio compression part 105-400 and an audio recovery part 500-800.
[795] The audio compression part 105-400 includes a downmixing part 105, a core coding part 200, a spatial information coding part 300 and a multiplexing part 400.
[796] And, the downmixing part 105 includes a channel downmixing part 110 and a spatial information generating part 120.
[797] In the downmixing part 105, an input of the channel downmixing part 110 is an audio signal of N multi-channels (X1, X2, ..., XN).
[798] The channel downmixing part 110 outputs a signal downmixed into channels of which number is smaller than that of channels of the inputs.
[799] An output of the downmixing part 105 is downmixed into one or two channels, a specific number of channels according to a separate downmixing command, or a specific number of channels preset according to system implementation.
[800] The core coding part 200 performs core coding on the output of the channel downmixing part 110, i.e., the downmixed audio signal. In this case, the core coding is carried out in a manner of compressing an input using various transform schemes such as a discrete transform scheme and the like.
[801] The spatial information generating part 120 extracts spatial information from the multi-channel audio signal. The spatial information generating part 120 then transfers the extracted spatial information to the spatial information coding part 300.
[802] The spatial information coding part 300 performs data coding and entropy coding on the inputted spatial information. The spatial information coding part 300 performs at least one of PCM, PBC and DIFF. In some cases, the spatial information coding part 300 further performs entropy coding. A decoding scheme by a spatial information decoding part 700 can be decided according to which data coding scheme is used by the spatial information coding part 300. And, the spatial information coding part 300 will be explained in detail with reference to FIG. 36 later.
[803] An output of the core coding part 200 and an output of the spatial information coding part 300 are inputted to the multiplexing part 400.
[804] The multiplexing part 400 multiplexes the two inputs into a bitstream and then transfers the bitstream to the audio recovery part 500 to 800.
[805] The audio recovery part 500 to 800 includes a demultiplexing part 500, a core decoding part 600, a spatial information decoding part 700 and a multi-channel generating part 800.
[806] The demultiplexing part 500 demultiplexes the received bitstream into an audio part and a spatial information part. In this case, the audio part is a compressed audio signal and the spatial information part is a compressed spatial information.
[807] The core decoding part 600 receives the compressed audio signal from the demultiplexing part 500. The core decoding part 600 generates a downmixed audio signal by decoding the compressed audio signal.
[808] The spatial information decoding part 700 receives the compressed spatial information from the demultiplexing part 500. The spatial information decoding part 700 generates the spatial information by decoding the compressed spatial information.
[809] In doing so, identification information indicating various grouping information and coding information included in the data structure shown in FIG. 34 is extracted from the received bitstream. A specific decoding scheme is selected from at least one or more decoding schemes according to the identification information. And, the spatial information is generated by decoding the spatial information according to the selected decoding scheme. In this case, the decoding scheme by the spatial information decoding part 700 can be decided according to what data coding scheme is used by the spatial information coding part 300. And, the spatial information decoding part 700 will be explained in detail with reference to FIG. 37 later.
[810] The multi-channel generating part 800 receives an output of the core decoding part 600 and an output of the spatial information decoding part 700. The multi-channel generating part 800 generates an audio signal of N multi-channels (Y1, Y2, ..., YN) from the two received outputs.
[811] Meanwhile, the audio compression part 105-400 provides an identifier indicating what data coding scheme is used by the spatial information coding part 300 to the audio recovery part 500-800. To prepare for the above-explained case, the audio recovery part 500-800 includes a means for parsing the identification information.
[812] So, the spatial information decoding part 700 decides a decoding scheme with reference to the identification information provided by the audio compression part 105-400. Preferably, the means for parsing the identification information indicating the coding scheme is provided to the spatial information decoding part 700.
[813] FIG. 36 is a detailed block diagram of a spatial information encoding part according to one embodiment of the present invention, in which spatial information is named a spatial parameter.
[814] Referring to FIG. 36, a coding part according to one embodiment of the present invention includes a PCM coding part 310, a DIFF (differential coding) part 320 and a Huffman coding part 330. The Huffman coding part 330 corresponds to one embodiment of performing the aforesaid entropy coding.
[815] The PCM coding part 310 includes a grouped PCM coding part 311 and a PBC part 312. The grouped PCM coding part 311 PCM-codes spatial parameters. In some cases, the grouped PCM coding part 311 is able to PCM-code spatial parameters by a group unit. And, the PBC part 312 performs the aforesaid PBC on spatial parameters.
[816] The DIFF part 320 performs the aforesaid DIFF on spatial parameters.
[817] In particular, in the present invention, one of the grouped PCM coding part 311, the
PBC part 312 and the DIFF part 320 selectively operates for coding of spatial parameters. And, its control means is not separately shown in the drawing.
[818] The PBC executed by the PBC part 312 has been explained in detail in the foregoing description, of which explanation will be omitted in the following description.
[819] For another example of PBC, PBC is once performed on spatial parameters. And, the PBC can be further performed N times (N>1) on a result of the first PBC. In particular, the PBC is at least once carried out on a pilot value or difference values resulting from the first PBC. In some cases, it is preferable that, from the second PBC onward, the PBC is carried out on the difference values only, excluding the pilot value.
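A minimal sketch of such repeated PBC is given below. It assumes a rounded-mean pilot and applies later passes to the difference values only; the recursion depth, pilot rule and function names are illustrative assumptions rather than the normative procedure.

```python
# Sketch of applying PBC more than once: after the first pass, further
# passes operate on the difference values only.

def pbc_once(values):
    pilot = round(sum(values) / len(values))
    return pilot, [v - pilot for v in values]

def pbc_repeated(values, passes):
    pilots, current = [], values
    for _ in range(passes):
        pilot, current = pbc_once(current)    # later passes see only differences
        pilots.append(pilot)
    return pilots, current                    # all pilots plus final differences

def pbc_repeated_decode(pilots, diffs):
    current = diffs
    for pilot in reversed(pilots):
        current = [pilot + d for d in current]
    return current

data = [12, 14, 13, 15]
pilots, residual = pbc_repeated(data, passes=2)
assert pbc_repeated_decode(pilots, residual) == data
```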
[820] The DIFF part 320 includes a DIFF_FREQ coding part 321 performing
DIFF_FREQ on a spatial parameter and DIFF_TIME coding parts 322 and 323 performing DIFF_TIME on spatial parameters.
[821] In the DIFF part 320, one selected from the group consisting of the DIFF_FREQ coding part 321 and the DIFF_TIME coding parts 322 and 323 carries out the processing for an inputted spatial parameter.
[822] In this case, the DIFF_TIME coding parts are classified into a
DIFF_TIME_FORWARD part 322 performing DIFF_TIME_FORWARD on a spatial parameter and a DIFF_TIME_BACKWARD part 323 performing DIFF_TIME_BACKWARD on a spatial parameter.
[823] In the DIFF_TIME coding parts 322 and 323, a selected one of the DIFF_TIME_FORWARD part 322 and the DIFF_TIME_BACKWARD part 323 carries out a data coding process on an inputted spatial parameter. Besides, the DIFF coding performed by each of the internal elements 321, 322 and 323 of the DIFF part 320 has been explained in detail in the foregoing description, of which explanation will be omitted in the following description.
[824] The Huffman coding part 330 performs Huffman coding on at least one of an output of the PBC part 312 and an output of the DIFF part 320.
[825] The Huffman coding part 330 includes a 1-dimension Huffman coding part (hereinafter abbreviated HUFF_1D part) 331 processing data to be coded and transmitted one by one and 2-dimension Huffman coding parts (hereinafter abbreviated HUFF_2D parts) 332 and 333 processing data to be coded and transmitted by a unit of two combined data.
[826] A selected one of the HUFF_1D part 331 and the HUFF_2D parts 332 and 333 in the Huffman coding part 330 performs a Huffman coding processing on an input.
[827] In this case, the HUFF_2D parts 332 and 333 are classified into a frequency pair
2-Dimension Huffman coding part (hereinafter abbreviated HUFF_2D_FREQ_PAIR part) 332 performing Huffman coding on a data pair bound together based on a frequency and a time pair 2-Dimension Huffman coding part (hereinafter abbreviated HUFF_2D_TIME_PAIR part) 333 performing Huffman coding on a data pair bound together based on a time.
[828] In the HUFF_2D parts 332 and 333, a selected one of the HUFF_2D_FREQ_PAIR part 332 and the HUFF_2D_TIME_PAIR part 333 performs a Huffman coding processing on an input.
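The two-index units consumed by the 2-dimension Huffman coding parts can be illustrated as follows: pairs are bound along the frequency axis (frequency pairing) or along the time axis (time pairing). The grid layout, grid values and helper name are assumptions made for the example.

```python
# Illustrative pairing of parameter indices for HUFF_2D coding.
# grid[t][f] holds one parameter index at time slot t and frequency band f.

def make_pairs(grid, mode):
    """Return a list of (a, b) index pairs bound by frequency or by time."""
    pairs = []
    if mode == "FREQ_PAIR":                   # adjacent frequency bands, same time slot
        for row in grid:
            pairs += [(row[f], row[f + 1]) for f in range(0, len(row) - 1, 2)]
    elif mode == "TIME_PAIR":                 # adjacent time slots, same frequency band
        for t in range(0, len(grid) - 1, 2):
            pairs += [(grid[t][f], grid[t + 1][f]) for f in range(len(grid[t]))]
    return pairs

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
print(make_pairs(grid, "FREQ_PAIR"))   # [(1, 2), (3, 4), (5, 6), (7, 8)]
print(make_pairs(grid, "TIME_PAIR"))   # [(1, 5), (2, 6), (3, 7), (4, 8)]
```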
[829] Huffman coding performed by each of the internal elements 331, 332 and 333 of the Huffman coding part 330 will be explained in detail in the following description.
[830] Thereafter, an output of the Huffman coding part 330 is multiplexed with an output of the grouped PCM coding part 311 to be transferred.
[831] In a spatial information coding part according to the present invention, various kinds of identification information generated from data coding and entropy coding are inserted into a transport bitstream. And, the transport bitstream is transferred to a spatial information decoding part shown in FIG. 37.
[832] FIG. 37 is a detailed block diagram of a spatial information decoding part according to one embodiment of the present invention.
[833] Referring to FIG. 37, a spatial information decoding part receives a transport bitstream including spatial information and then generates the spatial information by decoding the received transport bitstream.
[834] A spatial information decoding part 700 includes an identifier parsing part (flag parsing part) 710, a PCM decoding part 720, a Huffman decoding part 730 and a differential decoding part 740.
[835] The identifier parsing part 710 of the spatial information decoding part extracts various identifiers from a transport bitstream and then parses the extracted identifiers. This means that various kinds of the information mentioned in the foregoing description of FIG. 34 are extracted.
[836] The spatial information decoding part is able to know what kind of coding scheme is used for a spatial parameter using an output of the identifier parsing part 710 and then decides a decoding scheme corresponding to the recognized coding scheme. Besides, the execution of the identifier parsing part 710 can be performed by the aforesaid demultiplexing part 500 as well.
[837] The PCM decoding part 720 includes a grouped PCM decoding part 721 and a pilot based decoding part 722.
[838] The grouped PCM decoding part 721 generates spatial parameters by performing
PCM decoding on a transport bitstream. In some cases, the grouped PCM decoding part 721 generates spatial parameters of a group part by decoding a transport bitstream.
[839] The pilot based decoding part 722 generates spatial parameter values by performing pilot based decoding on an output of the Huffman decoding part 730. This corresponds to a case that a pilot value is included in an output of the Huffman decoding part 730. As a separate example, the pilot based decoding part 722 is able to include a pilot extracting part (not shown in the drawing) to directly extract a pilot value from a transport bitstream. So, spatial parameter values are generated using the pilot value extracted by the pilot extracting part and difference values that are the outputs of the Huffman decoding part 730.
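The pilot based decoding step can be sketched as below: a pilot value taken from the bitstream is combined with the difference values produced by Huffman decoding. The function name and example values are hypothetical.

```python
# Minimal sketch of pilot based decoding: pilot + Huffman-decoded differences.

def pilot_based_decode(pilot_value, huffman_decoded_diffs):
    """Rebuild spatial parameter values from a pilot and its difference values."""
    return [pilot_value + d for d in huffman_decoded_diffs]

# e.g. pilot extracted by the pilot extracting part, diffs from HUFF_1D/2D decoding
print(pilot_based_decode(7, [0, 1, -1, 0, 2]))   # -> [7, 8, 6, 7, 9]
```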
[840] The Huffman decoding part 730 performs Huffman decoding on a transport bitstream. The Huffman decoding part 730 includes a 1-Dimension Huffman decoding part (hereinafter abbreviated HUFF_1D decoding part) 731 outputting a data value one by one by performing 1-Dimension Huffman decoding on a transport bitstream and 2-Dimension Huffman decoding parts (hereinafter abbreviated HUFF_2D decoding parts) 732 and 733 outputting a pair of data values each by performing 2-Dimension Huffman decoding on a transport bitstream.
[841] The identifier parsing part 710 extracts an identifier (e.g., bsCodingScheme) indicating whether a Huffman decoding scheme indicates HUFF_1D or HUFF_2D from a transport bitstream and then recognizes the used Huffman coding scheme by parsing the extracted identifier. So, either HUFF_1D or HUFF_2D decoding corresponding to each case is decided as a Huffman decoding scheme.
[842] The HUFF_1D decoding part 731 performs HUFF_1D decoding and each of the HUFF_2D decoding parts 732 and 733 performs HUFF_2D decoding.
[843] In case that the Huffman coding scheme is HUFF_2D in a transport bitstream, the identifier parsing part 710 further extracts an identifier (e.g., bsPairing) indicating whether the HUFF_2D scheme is HUFF_2D_FREQ_PAIR or HUFF_2D_TIME_PAIR and then parses the extracted identifier. So, the identifier parsing part 710 is able to recognize whether two data configuring one pair are bound together based on frequency or time. And, one of frequency pair 2-Dimension Huffman decoding (hereinafter abbreviated HUFF_2D_FREQ_PAIR decoding) and time pair 2-Dimension Huffman decoding (hereinafter abbreviated HUFF_2D_TIME_PAIR decoding) corresponding to the respective cases is decided as the Huffman decoding scheme.
[844] In the HUFF_2D decoding parts 732 and 733, the HUFF_2D_FREQ_PAIR part 732 performs HUFF_2D_FREQ_PAIR decoding and the HUFF_2D_TIME_PAIR part 733 performs HUFF_2D_TIME_PAIR decoding.
[845] An output of the Huffman decoding part 730 is transferred to the pilot based decoding part 722 or the differential decoding part 740 based on an output of the identifier parsing part 710.
[846] The differential decoding part 740 generates spatial parameter values by performing differential decoding on an output of the Huffman decoding part 730.
[847] The identifier parsing part 710 extracts an identifier (e.g., bsDiffType) indicating whether a DIFF scheme is DIFF_FREQ or DIFF_TIME from a transport bitstream and then recognizes the used DIFF scheme by parsing the extracted identifier. So, one of DIFF_FREQ decoding and DIFF_TIME decoding corresponding to the respective cases is decided as a differential decoding scheme.
[848] The DIFF_FREQ decoding part 741 performs DIFF_FREQ decoding and each of the DIFF_TIME decoding parts 742 and 743 performs DIFF_TIME decoding.
[849] In case that the DIFF scheme is DIFF_TIME, the identifier parsing part 710 further extracts an identifier (e.g., bsDiffTimeDirection) indicating whether the DIFF_TIME is DIFF_TIME_FORWARD or DIFF_TIME_BACKWARD from a transport bitstream and then parses the extracted identifier.
[850] So, it is able to recognize whether an output of the Huffman decoding part 730 is a difference value between current data and former data or a difference value between the current data and next data. One of DIFF_TIME_FORWARD decoding and DIFF_TIME_BACKWARD decoding corresponding to the respective cases is decided as a DIFF_TIME scheme.
[851] In the DIFF_TIME decoding parts 742 and 743, the DIFF_TIME_FORWARD part
742 performs DIFF_TIME_FORWARD decoding and the DIFF_TIME_BACKWARD part 743 performs DIFF_TIME_BACKWARD decoding.
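The two DIFF_TIME decoding directions can be illustrated with the short sketch below. The boundary handling (here, the first or last value transmitted as-is) and the function names are assumptions for the example, not the normative rule.

```python
# Sketch of the two DIFF_TIME decoding directions.

def diff_time_forward_decode(diffs):
    """diffs[t] = x[t] - x[t-1]; the first entry is taken as the start value."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

def diff_time_backward_decode(diffs):
    """diffs[t] = x[t] - x[t+1]; the last entry is taken as the end value."""
    out = [diffs[-1]]
    for d in reversed(diffs[:-1]):
        out.insert(0, out[0] + d)
    return out

print(diff_time_forward_decode([10, 1, -2, 3]))   # -> [10, 11, 9, 12]
print(diff_time_backward_decode([1, -2, 3, 10]))  # -> [12, 11, 13, 10]
```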
[852] A procedure for deciding a Huffman decoding scheme and a data decoding scheme based on an output of the identifier parsing part 710 in the spatial information decoding part is explained as follows.
[853] For instance, the identifier parsing part 710 reads a first identifier (e.g., bsPCMCoding) indicating which one of PCM and DIFF is used in coding a spatial parameter.
[854] If the first identifier corresponds to a value indicating PCM, the identifier parsing part 710 further reads a second identifier (e.g., bsPilotCoding) indicating which one of PCM and PBC is used for coding of a spatial parameter.
[855] If the second identifier corresponds to a value indicating PBC, the spatial information decoding part performs decoding corresponding to the PBC.
[856] If the second identifier corresponds to a value indicating PCM, the spatial information decoding part performs decoding corresponding to the PCM.
[857] On the other hand, if the first identifier corresponds to a value indicating DIFF, the spatial information decoding part performs a decoding processing that corresponds to the DIFF.
Mode for the Invention
[858] Accordingly, various embodiments of the present invention are explained together with the aforesaid embodiments of the best mode.
Industrial Applicability
[859] It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. For example, the grouping, data coding, and entropy coding according to the present invention are applicable to various fields and products. It is also possible to provide a medium storing data to which at least one feature of the present invention is applied.

Claims
[1] A method for signal processing in a method for decoding at least one service component using a specific alternative coding scheme determined based on information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, comprising: obtaining data coding identification information from a signal; and data-decoding data in accordance with a data coding scheme indicated by the data coding identification information, wherein: the data coding scheme comprises at least a pilot coding scheme; the pilot coding scheme comprises decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
[2] The method of claim 1, wherein the at least one service component is included in a main service channel (MSC) of a broadcast stream, and the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme is included in a fast information channel (FIC) of the broadcast stream.
[3] The method of claim 1, wherein a broadcast stream including, in a fast information channel (FIC) of the broadcast stream, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, defines the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, in an "ASCTy" field of the broadcast stream.
[4] The method of any one of claims 1 to 3, wherein the alternative coding scheme comprises an alternative audio coding scheme.
[5] The method of claim 1, wherein the data coding scheme further comprises a differential coding scheme, the differential coding scheme is one of a frequency differential coding scheme and a time differential coding scheme and the time differential coding scheme is one of a forward time differential coding scheme and a backward time differential coding scheme.
[6] The method of claim 1, further comprising: obtaining entropy coding identification information; and entropy-decoding the data using an entropy coding scheme indicated by the entropy coding identification information.
[7] The method of claim 6, wherein the data decoding step comprises executing the data-decoding for the entropy-decoded data using the data coding scheme.
[8] The method of claim 6, wherein the entropy decoding scheme is one of a one- dimensional coding scheme or a multi-dimensional coding scheme, and the multi-dimensional coding scheme is one of a frequency pair coding scheme and a time pair coding scheme.
[9] The method of claim 1, further comprising: decoding an audio signal, using the data as a parameter.
[10] An apparatus for signal processing in an apparatus for decoding at least one service component using a specific alternative coding scheme determined based on information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, comprising: an identification information obtaining part for obtaining data coding identification information from a signal; and a decoding part for data-decoding data in accordance with a data coding scheme indicated by the data coding identification information, wherein: the data coding scheme comprises at least a pilot coding scheme; the pilot coding scheme comprises decoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value, wherein a broadcast stream including, in a fast information channel (FIC) of the broadcast stream, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, defines the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, in an "ASCTy" field of the broadcast stream.
[11] A method for transmitting a digital broadcast signal, comprising: inserting, into a broadcast stream, at least one service component compressed in accordance with an alternative coding scheme; inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme; and transmitting, to a broadcast receiver, the broadcast stream including the at least one service component and the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, wherein the specific alternative coding scheme comprises: data-encoding data in accordance with a data coding scheme; and generating and transferring data coding identification information indicating the data coding scheme, wherein the data coding scheme comprises at least a pilot coding scheme; the pilot coding scheme comprises encoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.
[12] The method of claim 11, wherein the alternative coding scheme comprises an alternative audio coding scheme.
[13] The method of claim 12, wherein the alternative audio coding scheme comprises at least one of an advanced audio coding (AAC) scheme and a bit sliced arithmetic coding (BSAC) scheme.
[14] The method of claim 13, wherein the alternative audio coding scheme comprises at least one of a spectral band replication (SBR) scheme and a moving picture experts group (MPEG) surround scheme.
[15] The method of claim 13, wherein the alternative coding scheme comprises an audio coding scheme having a higher compression rate than a masking pattern adapted universal sub-band integrated coding and multiplexing (MUSICAM) scheme.
[16] The method of claim 12, wherein the step of inserting, into a broadcast stream, at least one service component compressed in accordance with an alternative coding scheme comprises including the at least one service component in a main service channel (MSC) of the broadcast stream.
[17] The method of claim 11, wherein the step of inserting, into the broadcast stream, information indicating that the at least one service component has been compressed in accordance with a specific alternative coding scheme comprises including, in a fast information channel (FIC) of the broadcast stream, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
[18] The method of claim 17, wherein the inclusion of the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme in the FIC of the broadcast stream comprises defining, in an "ASCTy" field, the information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme.
[19] The method of any one of claims 16 to 18, wherein the alternative coding scheme comprises an alternative audio coding scheme.
[20] An apparatus for signal processing in an apparatus for decoding at least one service component using a specific alternative coding scheme determined based on information indicating that the at least one service component has been compressed in accordance with the specific alternative coding scheme, comprising: an encoding part for data-encoding data in accordance with a data coding scheme; and an outputting part for generating and transferring data coding identification information indicating the data coding scheme, wherein the data coding scheme comprises at least a pilot coding scheme; the pilot coding scheme comprises encoding the data using a pilot reference value corresponding to a plurality of data and a pilot difference value; and the pilot difference value is generated using the data and the pilot reference value.