
CN103873878A - Video decoding method and corresponding video decoding device thereof - Google Patents

Video decoding method and corresponding video decoding device thereof Download PDF

Info

Publication number
CN103873878A
CN103873878A (application CN201210545243.4A)
Authority
CN
China
Prior art keywords
data
module
decoding
video
code stream
Prior art date
Legal status
Pending
Application number
CN201210545243.4A
Other languages
Chinese (zh)
Inventor
王健铭
Current Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd, Beijing BOE Display Technology Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN201210545243.4A priority Critical patent/CN103873878A/en
Publication of CN103873878A publication Critical patent/CN103873878A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video decoding method. The method comprises the following steps: extracting different parameters according to the start codes that identify the different layers of a video bitstream; performing differential discrete cosine transform (DCT) decoding on the bitstream from which the parameters have been extracted, performing motion compensation calculation, and reconstructing the image until the current block is fully decoded. The invention further discloses a video decoding device designed on the basis of an FPGA (Field Programmable Gate Array). With the method and the device, the design can be optimized freely, so the data processing efficiency of the decoding process is improved.

Description

Video decoding method and corresponding video decoding device
Technical Field
The present invention relates to the field of video display technologies, and in particular to a video decoding method based on the Moving Picture Experts Group 2 (MPEG-2) protocol and a corresponding video decoding device.
Background
At present, with the advent of the era of digitization and global integration, the transmission and processing of various kinds of media information, including sound, graphics and video, have become important components of the related industries. Video generally contains a large amount of repeated image data, so among these media types video accounts for the largest data volume, which makes the relatively mature video compression technology of the MPEG-2 protocol particularly important. The MPEG-2 protocol is an international standard for video and audio compression coding and the corresponding data stream format. It defines the coding and decoding techniques and the transmission protocol of the data stream, and establishes a common standard among MPEG-2 decoders.
In the prior art there are various solutions for MPEG-2 video decoding, the most common being to implement the decoding function with Digital Signal Processing (DSP) technology. With DSP, however, the function can only be designed on top of the instruction set provided by the device itself, so the design details cannot be freely optimized, especially when the hardware structure needs to be adjusted. For example, in the pipeline optimization of the timing design, the total number of clock cycles taken by a DSP instruction cannot be controlled externally, which hinders the improvement of decoding efficiency.
Disclosure of Invention
In view of the above, the present invention provides a video decoding method and a corresponding video decoding apparatus with which the design can be freely optimized.
To achieve this purpose, the technical solution of the invention is realized as follows:
a method of video decoding, the method comprising:
respectively extracting different parameters according to start codes which represent different layers in a video code stream;
and carrying out differential Discrete Cosine Transform (DCT) decoding on the video code stream with the extracted parameters, carrying out motion compensation calculation, and reconstructing an image until the decoding of the current chunk is finished.
Specifically, the method further comprises: after all the macroblocks of the current chunk have been processed, jumping to a different layer according to the start codes and their priorities, until the IDLE state is entered.
Preferably, the extracted parameters include: control information, quantization matrices, motion vectors, and image difference data.
Preferably, the order of extracting the parameters of the video code streams at different levels is as follows: picture sequence layer → picture group layer → picture layer → macroblock group layer → macroblock layer.
Preferably, the decoding of the DCT comprises: inverse scanning, inverse quantization calculation and Inverse Discrete Cosine Transform (IDCT) calculation.
Preferably, the inverse scanning and inverse quantization calculation process includes: positioning the input one-dimensional differential DCT coefficients in a two-dimensional matrix after inverse scanning, according to the inverse-scan matrix selection parameter in the code stream; judging whether the current differential DCT data belongs to an intra macroblock and whether it is the first coefficient of an intra macroblock; performing the corresponding inverse quantization operation on the DC coefficient according to the three inverse quantization algorithms of the MPEG-2 protocol; and performing saturation calculation and mismatch control;
the IDCT calculation process includes: converting the 8×8 block data from the frequency domain to the time domain.
Preferably, the motion compensation calculation comprises:
and acquiring the corresponding data from the DDR according to the decoded motion compensation information, performing half-pel (half-precision) interpolation on the acquired data, and adding the time-domain difference obtained by the IDCT calculation to the interpolated reference data to form the final data.
The invention also provides a video decoding device, characterized in that the device is designed on the basis of a field programmable gate array (FPGA) and comprises: a data extraction module, an inverse quantization module, an IDCT module and a motion compensation module; wherein,
the data extraction module is used for respectively extracting different parameters according to the start codes which represent different layers in the video code stream; and is further used for determining that all macroblocks of the current chunk have been processed and then jumping to a different layer according to the start codes and their priorities, until the IDLE state is entered;
the inverse quantization module and the IDCT module are used for decoding the video code stream with the extracted parameters by differential DCT;
and the motion compensation module is used for performing motion compensation calculation on the video code stream with the extracted parameters and reconstructing an image until the decoding of the current chunk is finished.
Preferably, the data extraction module is specifically configured to parse the data of the corresponding layer according to the start codes representing different layers in the video code stream and store the data as values of control registers; to judge, according to the level supported by the decoding device and the flag bits in the video code stream, which data in the current code stream must be stored and which data can be ignored, and to perform the decoding operation on the variable-length codes; and to control the shifting of data from the peripheral module into the local module.
Preferably, the data extraction module further comprises: a variable length decoding module, a code stream analyzing module and a shift control module; wherein,
the variable length decoding module is used for executing decoding operation on the variable length code;
the code stream analyzing module is used for analyzing the data in the corresponding layer according to the start codes which represent different layers in the video code stream and storing the data as the value of the control register; judging data which must be stored and data which must be ignored in the current video code stream according to the level grade of the decoding device and the flag bit in the video code stream;
and the shift control module is used for controlling data to be shifted from the peripheral module to the local module.
The invention provides a video decoding method and a corresponding video decoding device that are designed on the basis of a field programmable gate array. The implementation comprises: respectively extracting different parameters according to the start codes which represent different layers in a video code stream; performing differential Discrete Cosine Transform (DCT) decoding on the video code stream with the extracted parameters, performing motion compensation calculation, and reconstructing an image until the decoding of the current chunk is finished; and further, after all macroblocks of the current chunk have been processed, jumping to a different layer according to the start codes and their priorities until the IDLE state is entered. Because the hardware design of the invention is based on a Field Programmable Gate Array (FPGA), the protocols and algorithms between different modules can be changed and adjusted. In addition, the motion compensation module of the invention is an independent peripheral module, i.e. whether the module is present can be chosen as required without affecting the implementation of the overall decoding algorithm, so the design can be freely optimized.
In addition, the motion compensation module runs in parallel with the inverse quantization module and the IDCT module in the time sequence, which benefits the pipelined operation of the whole structure. Therefore, the invention can shorten the data extraction and processing time and improve the data processing efficiency when processing large volumes of video information.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for implementing a video decoding method according to the present invention;
FIG. 2 is a block diagram of a video decoding apparatus according to the present invention.
Detailed Description
The invention is designed on the basis of a Field Programmable Gate Array (FPGA) and can solve the problem that the design cannot be freely optimized in the prior art. Its basic idea is as follows:
respectively extracting different parameters according to start codes which represent different layers in a video code stream; performing differential Discrete Cosine Transform (DCT) decoding on the video code stream with the extracted parameters, performing motion compensation calculation, and reconstructing an image until the decoding of the current chunk is finished; and, after all macroblocks of the current chunk have been processed, jumping to a different layer according to the start codes and their priorities until the IDLE state is entered.
The invention is described in further detail below with reference to the figures and the embodiments.
Fig. 1 is a schematic flow chart of the implementation of the video decoding method of the present invention. As shown in Fig. 1, the implementation flow is as follows:
step 101: respectively extracting different parameters according to start codes which represent different layers in a video code stream;
here, the parameters include: control information, quantization matrices, motion vectors, image difference data, and the like. The parameters of the different layers of the video code stream are extracted in the following order: image sequence layer (VSL) → picture group layer (GOPL) → image layer (PL) → macroblock group layer (SL, i.e. the MPEG-2 slice layer) → macroblock layer (ML). After entering the macroblock layer, the parameters of the macroblock, including the macroblock position, skipped macroblocks, macroblock mode, etc., are decoded first; then, according to the decoded control information and the specification of the MPEG-2 protocol, the forward or backward, frame or field, first or second motion vector is extracted.
In the practical application process, this step comprises: parsing the data of the corresponding layer according to the start codes which represent different layers in the video code stream and storing the data as values of control registers; judging, according to the level supported by the decoding device and the flag bits in the video code stream, which data in the current code stream must be stored and which data can be ignored, and performing the decoding operation on the variable-length codes; and controlling the shifting of data from the peripheral module into the local module.
Step 102: carrying out differential DCT decoding on the video code stream with the extracted parameters, carrying out motion compensation calculation, and reconstructing an image until the decoding of the current chunk is finished;
the method specifically comprises the following steps: the video code stream with the extracted parameters is decoded in two paths, namely: the differential DCT decoding of one path mainly comprises: and performing inverse scanning, inverse quantization calculation and Inverse Discrete Cosine Transform (IDCT) calculation, performing motion compensation on the other path, combining the two paths of data to obtain a final image value, and repeating the steps until the decoding of the current chunk is finished.
Wherein the inverse scanning and inverse quantization calculation process comprises: positioning a two-dimensional matrix after the input one-dimensional differential DCT is subjected to inverse scanning according to the selection parameters related to the inverse scanning matrix in the code stream; judging whether the current differential DCT is an internal macro block or not and whether the current differential DCT is a first coefficient of the internal macro block, namely a Direct Current (DC) coefficient; carrying out corresponding inverse quantization operation on the DC coefficient according to three inverse quantization algorithms in an MPEG2 protocol; saturation calculation and detuning control are performed. The IDCT calculation process is as follows: the conversion of the 8X8 block data from the frequency domain to the time domain is performed without regard to whether the current block is a luminance matrix or a color difference matrix.
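The inverse-scan and inverse-quantization stage described above can be illustrated with the following C sketch. It is a minimal behavioral model rather than the FPGA implementation: the scan table, weighting matrix and quantiser scale are assumed to have already been parsed from the code stream, the non-intra reconstruction formula follows the MPEG-2 definition, and all function and variable names are illustrative.

#include <stdint.h>

/* Inverse scan: map the 64 coefficients received in scan order back into
 * an 8x8 block. scan_table is one of the two MPEG-2 scan matrices,
 * selected by the alternate_scan flag parsed from the bitstream. */
static void inverse_scan(const int16_t in[64], int16_t out[8][8],
                         const uint8_t scan_table[64])
{
    for (int n = 0; n < 64; n++) {
        int pos = scan_table[n];          /* position 0..63 in the 8x8 block */
        out[pos >> 3][pos & 7] = in[n];
    }
}

/* Inverse quantization of one block: scale each coefficient by the
 * weighting matrix and quantiser scale, saturate to [-2048, 2047], then
 * apply the MPEG-2 mismatch control on the last coefficient.
 * intra_dc_shift stands in for the intra DC multiplier derived from
 * intra_dc_precision (an assumption of this sketch). */
static void inverse_quant(int16_t f[8][8], const uint8_t w[8][8],
                          int quantiser_scale, int intra, int intra_dc_shift)
{
    int sum = 0;
    for (int v = 0; v < 8; v++) {
        for (int u = 0; u < 8; u++) {
            int q = f[v][u];
            int r;
            if (intra && u == 0 && v == 0) {
                r = q << intra_dc_shift;                 /* DC of intra block */
            } else if (intra) {
                r = (q * w[v][u] * quantiser_scale) / 16;
            } else {
                r = ((2 * q + (q > 0) - (q < 0)) * w[v][u] * quantiser_scale) / 32;
            }
            if (r > 2047)  r = 2047;                     /* saturation */
            if (r < -2048) r = -2048;
            f[v][u] = (int16_t)r;
            sum += r;
        }
    }
    if ((sum & 1) == 0)                                  /* mismatch control */
        f[7][7] ^= 1;
}

Because the inverse scan only relocates coefficients, its output can feed the inverse quantization directly, which is the reason given in the device description for merging the two steps into a single pipelined module.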
The algorithm of the invention includes a motion compensation function. The existing Motion Estimation and Motion Compensation (MEMC) calculation method exploits the data correlation within or between images: one part of already decoded data values is used to predict another part of data values waiting to be decoded, and the pointer relationship between the two parts, the structure and size of the data area, and so on are the key indexes that distinguish the motion compensation performance of different protocols. The motion compensation described in the present invention does not require such MEMC calculation; it only comprises: acquiring the corresponding data from a double data rate synchronous dynamic random access memory (DDR) according to the decoded motion compensation information, performing half-pel (half-precision) interpolation on the acquired data, and adding the time-domain difference obtained by the IDCT calculation to the interpolated reference data to form the final data.
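The motion-compensation path can be sketched in C as follows, assuming the reference picture fetched from the DDR is available as a flat pixel array and the motion vector is given in half-pel units; the function and parameter names are illustrative, and the real module operates on data read back through the DDR interface.

#include <stdint.h>

static inline uint8_t clip8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* Form one 8x8 prediction block from a reference picture with half-pel
 * accuracy, then add the time-domain residual produced by the IDCT.
 *   ref      : reference picture data (fetched from DDR in the device)
 *   stride   : line length of the reference picture
 *   x, y     : top-left corner of the current block
 *   mvx, mvy : motion vector in half-pel units
 *   resid    : 8x8 time-domain difference from the IDCT stage
 *   dst      : reconstructed 8x8 block                               */
static void motion_compensate_block(const uint8_t *ref, int stride,
                                    int x, int y, int mvx, int mvy,
                                    const int16_t resid[8][8],
                                    uint8_t dst[8][8])
{
    int ix = x + (mvx >> 1), iy = y + (mvy >> 1);   /* integer part of the vector */
    int hx = mvx & 1,        hy = mvy & 1;          /* half-pel flags */

    for (int j = 0; j < 8; j++) {
        for (int i = 0; i < 8; i++) {
            const uint8_t *p = ref + (iy + j) * stride + (ix + i);
            int pred;
            if (!hx && !hy)      pred = p[0];
            else if (hx && !hy)  pred = (p[0] + p[1] + 1) >> 1;
            else if (!hx && hy)  pred = (p[0] + p[stride] + 1) >> 1;
            else                 pred = (p[0] + p[1] + p[stride] + p[stride + 1] + 2) >> 2;
            dst[j][i] = clip8(pred + resid[j][i]);  /* reference + IDCT residual */
        }
    }
}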
Further, the present invention also includes: storing the data of the reconstructed image.
Step 103: after it is determined that all macroblocks of the current chunk have been processed, jumping to a different layer according to the start codes and their priorities, until the IDLE state is entered;
specifically: after the decoding of the current chunk is finished, it is judged whether the next start code is a chunk start code; if so, jump to the macroblock group layer of step 101; otherwise, judge whether it is a picture start code, and if so, jump to the picture layer of step 101; otherwise, continue to judge whether it is a sequence start code, and if so, jump to the image sequence layer of step 101; otherwise, enter the IDLE state and wait for the next sequence start code.
The present invention further provides a video decoding apparatus, as shown in Fig. 2, comprising: a data extraction module, an inverse quantization module, an IDCT module and a motion compensation module; wherein,
the data extraction module is used for respectively extracting different parameters according to the start codes which represent different layers in the video code stream; and is further used for determining that all macroblocks of the current chunk have been processed and then jumping to a different layer according to the start codes and their priorities, until the IDLE state is entered;
specifically, in the actual application process, the data extraction module needs to have the following functions:
parsing the data of the corresponding layer according to the start codes which represent different layers in the video code stream and storing the data as values of control registers; judging, according to the level supported by the decoding device and the flag bits in the video code stream, which data in the current code stream must be stored and which data can be ignored, and performing the decoding operation on the variable-length codes; and controlling the shifting of data from the peripheral module into the local module.
According to the different functions above, the data extraction module can be divided, in the design process, into three sub-modules: a variable length decoding module, a code stream analyzing module and a shift control module; wherein,
the variable length decoding module is used for executing decoding operation on the variable length code;
the code stream analyzing module is used for analyzing the data in the corresponding layer according to the start codes which represent different layers in the video code stream and storing the data as the value of the control register; judging data which must be stored and data which must be ignored in the current video code stream according to the level grade of the decoding device and the flag bit in the video code stream;
and the shift control module is used for controlling data to shift from the peripheral module to the local module.
The inverse quantization module and the IDCT module are used for performing the differential DCT decoding on the video code stream with the extracted parameters, where the differential DCT decoding mainly comprises inverse scanning, inverse quantization calculation and IDCT calculation; correspondingly,
the inverse quantization module is used for performing the inverse scanning and the inverse quantization calculation. In particular,
it is used for positioning the input one-dimensional differential DCT coefficients in a two-dimensional matrix after inverse scanning, according to the inverse-scan matrix selection parameter in the code stream; judging whether the current differential DCT data belongs to an intra macroblock and whether it is the first coefficient of an intra macroblock, i.e. the DC coefficient; performing the corresponding inverse quantization operation on the DC coefficient according to the three inverse quantization algorithms of the MPEG-2 protocol; and performing saturation calculation and mismatch control.
In the prior art, the two calculation functions of inverse scanning and inverse quantization are performed separately in two modules. The invention integrates both calculation processes into a single module, that is, both are performed in the inverse quantization module, for two reasons. First, functionally the inverse scan is only a preprocessing stage of the inverse quantization: it rearranges one-dimensional data into a two-dimensional matrix, i.e. it only changes the relative positions and does not process the data themselves. Second, after passing through the inverse scan, the data can enter the inverse quantization stage directly in a pipelined manner, so there is no need to split the two calculation processes into two modules.
In the practical application process, the inverse quantization module can be divided into several sub-modules according to their functions, for example: a first module, used for positioning the input one-dimensional differential DCT coefficients in a two-dimensional matrix after inverse scanning, according to the inverse-scan matrix selection parameter in the code stream, and for performing saturation calculation and mismatch control; and a second module, used for judging whether the current differential DCT data belongs to an intra macroblock and whether it is the first coefficient of an intra macroblock, i.e. the DC coefficient, and for performing the corresponding inverse quantization operation on the DC coefficient according to the three inverse quantization algorithms of the MPEG-2 protocol. The second module can be implemented with a block of combinational logic, so that its two functions are completed within one clock cycle; this also serves as a timing optimization of the quantization process.
The IDCT module is used for performing the IDCT calculation; specifically, it converts the 8×8 block data from the frequency domain to the time domain.
The IDCT calculation is the bottleneck of the overall decoding speed, because almost all other modules can be pipelined with a pipeline element of one clock cycle, whereas the two-dimensional inverse discrete cosine transform can only proceed with effective calculation once all eight 12-bit values of a row or a column are stable on the internal data or address lines. Therefore, if the IDCT is pipelined, one pipeline element takes at least 8T, i.e. eight clock cycles; according to a throughput analysis of the pipeline, the pipeline efficiency of the IDCT can reach at most 1/8 of that of a design with a 1T pipeline element. The IDCT module therefore needs at least the following functions: first, signed multiplication of 12-bit numbers (which can be implemented with a multiplier, or by table lookup combined with shift-and-add); second, signed addition of 12-bit numbers with saturation control; and finally, transposition of the 8×8 matrix of the intermediate result.
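A behavioral C sketch of a separable 8×8 IDCT is given below: a one-dimensional IDCT applied to each row, the intermediate result stored transposed, the same transform applied again, a transpose back, and saturation of the output. Floating-point arithmetic is used here only for clarity; an FPGA design would use 12-bit fixed-point multipliers as described above, so this is an illustration of the structure (multiply, add with saturation, transpose), not the claimed implementation.

#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* One-dimensional 8-point IDCT (direct form, for clarity). */
static void idct_1d(const double in[8], double out[8])
{
    for (int x = 0; x < 8; x++) {
        double s = 0.0;
        for (int u = 0; u < 8; u++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            s += cu * in[u] * cos((2 * x + 1) * u * M_PI / 16.0);
        }
        out[x] = s / 2.0;
    }
}

/* Separable 8x8 IDCT: row pass, transpose, column pass, transpose back.
 * The transpose of the intermediate matrix is exactly the step the text
 * says the hardware module must provide. */
static void idct_8x8(const int16_t f[8][8], int16_t out[8][8])
{
    double tmp[8][8], tmp2[8][8];
    for (int r = 0; r < 8; r++) {                       /* row pass */
        double row[8], res[8];
        for (int c = 0; c < 8; c++) row[c] = f[r][c];
        idct_1d(row, res);
        for (int c = 0; c < 8; c++) tmp[c][r] = res[c]; /* store transposed */
    }
    for (int r = 0; r < 8; r++) {                       /* column pass on transposed data */
        double row[8], res[8];
        for (int c = 0; c < 8; c++) row[c] = tmp[r][c];
        idct_1d(row, res);
        for (int c = 0; c < 8; c++) tmp2[c][r] = res[c]; /* transpose back */
    }
    for (int r = 0; r < 8; r++)
        for (int c = 0; c < 8; c++) {
            int v = (int)lround(tmp2[r][c]);
            if (v > 2047)  v = 2047;                    /* saturation, as in the text */
            if (v < -2048) v = -2048;
            out[r][c] = (int16_t)v;
        }
}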
The motion compensation module is used for performing the motion compensation calculation on the video code stream with the extracted parameters and reconstructing the image until the decoding of the current chunk is finished; in particular,
it is used for acquiring the corresponding data from the DDR according to the decoded motion compensation information, performing half-pel (half-precision) interpolation on the acquired data, and adding the time-domain difference obtained by the IDCT calculation to the interpolated reference data to form the final data.
Here, the data extraction module has already transmitted the decoded data required for the motion compensation calculation, such as the macroblock position, the motion compensation mode and the corresponding motion vectors, to the motion compensation module; therefore, the calculation method performed by the motion compensation module of the present invention differs from the existing MEMC calculation method mentioned in step 102.
The amount of data involved in the motion compensation calculation is very large. Although it does not suffer as much data latency as the IDCT, the motion compensation module runs in parallel with the inverse quantization module and the IDCT module in the time sequence, which benefits the pipelined operation of the whole structure. Therefore, the invention can shorten the data extraction and processing time and improve the data processing efficiency when processing large volumes of video information.
In addition, the motion compensation module of the invention is an independent peripheral module, i.e. whether the module is present can be chosen as required without affecting the implementation of the overall algorithm.
Further, the apparatus further comprises: and the image data access module is used for storing the data of the image reconstructed by the motion compensation module.
Within the whole decoding device, the image data access module has a relatively single function and the simplest structure. It only needs to store the corresponding data according to the input DDR address, or to place the data arriving from the DDR interface onto its data output lines.
The foregoing implementation of the present invention is described below with reference to a specific embodiment, in which the decoding process is described in terms of two state machines: a decoding main state machine and a macroblock processing state machine. Specifically,
for the video decoding main state machine, the state transition process is basically divided into three parts:
First, the effective parameters or data of the different layers are obtained according to the different start codes; this part can also be called the data header extraction part. It mainly prepares for the subsequent macroblock processing part and mainly comprises: differential DCT data, quantization matrices, motion vectors, significant flag information, and so on.
Second, after the data extraction is finished, the macroblock processing stage is entered. The macroblock processing stage is the main part of the video decoding process; it performs the differential DCT decoding and the motion compensation and reconstructs the image. The macroblock processing state machine is described in detail later.
Finally, after all macroblocks of the current chunk have been processed, the state machine jumps to a different layer according to the start codes and their priorities, until it returns to the IDLE state of the main state machine.
Here the local decoding process of the differential DCT data needs to be pipelined by function within the state machine to improve performance over time. According to pipeline theory, time utilization is highest when the differential DCT decoding chain and the motion compensation pipeline take the same number of clock cycles, which maximizes the parallel efficiency.
For the macroblock processing state machine, when it starts to operate it first accepts the address jump value or address increment value of the relevant macroblock to determine where the current macroblock is located in the current image. For the luminance signal Y, the address is in units of 16; for the color difference signals Cr and Cb, whether the address is in units of 16 or 8 depends on the encoding format of the image being processed, i.e. whether the image is 4:4:4, 4:2:2 or 4:2:0. For 4:4:4, the horizontal and vertical coordinates are both in units of 16; for 4:2:2, the abscissa is in units of 8 and the ordinate in units of 16; for 4:2:0, the abscissa and ordinate are both in units of 8. This calculation provides the positioning when the reference point set is later searched by the motion vector. When the macroblock increment (macroblock_increment) value and the macroblock modes (macroblock_modes) are calculated, variable length decoding is performed at the first position of the decoding chain.
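The address positioning described above can be sketched in C as follows, assuming the macroblock address has already been converted into a (column, row) pair in macroblock units; the names are illustrative.

/* Chroma formats as described in the text. */
enum chroma_format { CF_444, CF_422, CF_420 };

/* Convert a macroblock address (column, row in macroblock units) into pixel
 * coordinates for the luminance block (always in units of 16) and for the
 * chrominance block, whose unit depends on the chroma format:
 * 4:4:4 -> 16 x 16, 4:2:2 -> 8 horizontally / 16 vertically, 4:2:0 -> 8 x 8. */
static void macroblock_origin(int mb_col, int mb_row, enum chroma_format cf,
                              int *luma_x, int *luma_y,
                              int *chroma_x, int *chroma_y)
{
    *luma_x = mb_col * 16;
    *luma_y = mb_row * 16;

    switch (cf) {
    case CF_444: *chroma_x = mb_col * 16; *chroma_y = mb_row * 16; break;
    case CF_422: *chroma_x = mb_col * 8;  *chroma_y = mb_row * 16; break;
    case CF_420: *chroma_x = mb_col * 8;  *chroma_y = mb_row * 8;  break;
    }
}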
Decoding the macroblock modes (macroblock_modes) yields a series of values relating to the motion compensation type, the macroblock characterization parameters and the quantization parameters. According to the parameters obtained by decoding macroblock_modes, and using the relevant variable length decoding tables, the state machine decodes the motion vector used by the current macroblock and determines whether it is a forward or backward motion vector, a frame or field prediction, a top or bottom field, a first or second vector, and so on. The macroblock processing also sets the non-encoded 8×8 blocks to all-zero values.
After the parameters required for motion compensation have been calculated, the motion compensation module starts to work under the trigger of a Start signal; its work includes confirming the data macroblock pointed to by the motion vector, half-pel interpolation, and so on. While the motion compensation is running, the blocks of luminance or chrominance data composed of the different differential DCT data also enter the local decoding chain, which comprises: inverse scanning, inverse quantization and IDCT calculation.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (10)

1. A method for video decoding, the method comprising:
respectively extracting different parameters according to start codes which represent different layers in a video code stream;
and carrying out differential Discrete Cosine Transform (DCT) decoding on the video code stream with the extracted parameters, carrying out motion compensation calculation, and reconstructing an image until the decoding of the current chunk is finished.
2. The video decoding method of claim 1, wherein the method further comprises: after all the macroblocks of the current chunk have been processed, jumping to a different layer according to the start codes and their priorities, until the IDLE state is entered.
3. The video decoding method of claim 1, wherein the extracted parameters comprise: control information, quantization matrices, motion vectors, and image difference data.
4. The video decoding method of claim 1, wherein the order of extracting the parameters of the different layers of video streams is: picture sequence layer → picture group layer → picture layer → macroblock group layer → macroblock layer.
5. The video decoding method of any of claims 1 to 4, wherein the decoding of the DCT comprises: inverse scanning, inverse quantization calculation and Inverse Discrete Cosine Transform (IDCT) calculation.
6. The video decoding method of claim 5, wherein the inverse scanning and inverse quantization calculation process comprises: positioning the input one-dimensional differential DCT coefficients in a two-dimensional matrix after inverse scanning, according to the inverse-scan matrix selection parameter in the code stream; judging whether the current differential DCT data belongs to an intra macroblock and whether it is the first coefficient of an intra macroblock; performing the corresponding inverse quantization operation on the DC coefficient according to the three inverse quantization algorithms of the MPEG-2 protocol; and performing saturation calculation and mismatch control;
the IDCT calculation process comprises: converting the 8×8 block data from the frequency domain to the time domain.
7. The video decoding method of any of claims 1 to 4, wherein the motion compensation calculation comprises:
and acquiring the corresponding data from the DDR according to the decoded motion compensation information, performing half-pel (half-precision) interpolation on the acquired data, and adding the time-domain difference obtained by the IDCT calculation to the interpolated reference data to form the final data.
8. A video decoding device, designed on the basis of a Field Programmable Gate Array (FPGA), comprising: a data extraction module, an inverse quantization module, an IDCT module and a motion compensation module; wherein,
the data extraction module is used for respectively extracting different parameters according to the start codes which represent different layers in the video code stream; and is further used for determining that all macroblocks of the current chunk have been processed and then jumping to a different layer according to the start codes and their priorities, until the IDLE state is entered;
the inverse quantization module and the IDCT module are used for decoding the video code stream with the extracted parameters by differential DCT;
and the motion compensation module is used for performing motion compensation calculation on the video code stream with the extracted parameters and reconstructing an image until the decoding of the current chunk is finished.
9. The video decoding apparatus according to claim 8, wherein the data extraction module is specifically configured to parse the data of the corresponding layer according to start codes representing different layers in a video code stream and store the parsed data as values of control registers; to judge, according to the level supported by the decoding device and the flag bits in the video code stream, which data in the current code stream must be stored and which data can be ignored, and to perform the decoding operation on the variable-length codes; and to control the shifting of data from the peripheral module into the local module.
10. The video decoding apparatus of claim 9, wherein the data extraction module further comprises: a variable length decoding module, a code stream analyzing module and a shift control module; wherein,
the variable length decoding module is used for executing decoding operation on the variable length code;
the code stream analyzing module is used for analyzing the data in the corresponding layer according to the start codes which represent different layers in the video code stream and storing the data as the value of the control register; judging data which must be stored and data which must be ignored in the current video code stream according to the level grade of the decoding device and the flag bit in the video code stream;
and the shift control module is used for controlling data to be shifted from the peripheral module to the local module.
CN201210545243.4A 2012-12-14 2012-12-14 Video decoding method and corresponding video decoding device thereof Pending CN103873878A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210545243.4A CN103873878A (en) 2012-12-14 2012-12-14 Video decoding method and corresponding video decoding device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210545243.4A CN103873878A (en) 2012-12-14 2012-12-14 Video decoding method and corresponding video decoding device thereof

Publications (1)

Publication Number Publication Date
CN103873878A true CN103873878A (en) 2014-06-18

Family

ID=50911952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210545243.4A Pending CN103873878A (en) 2012-12-14 2012-12-14 Video decoding method and corresponding video decoding device thereof

Country Status (1)

Country Link
CN (1) CN103873878A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090003451A1 (en) * 2004-12-10 2009-01-01 Micronas Usa, Inc. Shared pipeline architecture for motion vector prediction and residual decoding
CN201266990Y (en) * 2008-09-28 2009-07-01 西安飞鹰科技有限责任公司 Device for encoding MPEG-4 video based on FPGA
CN101848383A (en) * 2009-03-24 2010-09-29 虹软(上海)科技有限公司 Downsampling decoding method for MPEG2-format video
CN101568030A (en) * 2009-06-05 2009-10-28 湖南工程学院 Method and system for decoding self-adaptive multi-standard reconfigurable video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张云 (ZHANG Yun): "MPEG-2视频解码器的FPGA设计" [FPGA Design of an MPEG-2 Video Decoder], 《中国优秀硕士学位论文全文数据库 信息科技辑》 [China Master's Theses Full-text Database, Information Science and Technology Series] *


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140618