CN103716629B - Image processing method, device, coder and decoder
- Publication number: CN103716629B (application CN201210375019.5A)
- Authority: CN (China)
- Prior art keywords: block, image, sub, idx, information
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Disclosed is an image processing method which includes: when it is determined that the motion information of a first base-layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; determining, according to the motion information of the second target image sub-block, first reference information used for coding the first target image sub-block, wherein the first base-layer image sub-block is an image block in a base-layer image, the target image block is located in an enhancement-layer image, and the base-layer image corresponds to the enhancement-layer image; and coding the target image block so as to generate a target code stream and first indication information included in the target code stream.
Description
Technical field
The present invention relates to the field of video processing, and in particular to a method, an apparatus, an encoder and a decoder for image processing.
Background art
With the rapid development of the Internet and the increasingly rich material and cultural life of people, there are more and more application demands for video on the Internet, and in particular for high-definition (HD) video. The data volume of HD video is very large, so the first problem that must be solved before HD video can be transmitted over the bandwidth-limited Internet is the problem of HD video compression coding.

In a network environment (such as the Internet), because the network bandwidth is limited and the demands of terminal devices and users all differ, a single compressed code stream is, for a given specific application, neither ideal nor efficient, and for some specific users or devices it may even be meaningless. An effective way to solve this problem is to use scalable video coding (SVC) technology. Scalable coding is also referred to as layered coding. In SVC technology, an image is coded hierarchically according to quality parameters including spatial resolution, temporal resolution or signal-to-noise ratio. For example, in spatial scalable coding, the image may be down-sampled to obtain a low-resolution image, while the original image is referred to as the high-resolution image; the encoder then encodes this low-quality (for example, low-resolution) image and this high-quality (for example, high-resolution) image respectively, obtaining high-quality image coding information and low-quality image coding information. In this SVC technology, an image is divided into multiple image layers according to quality parameters including spatial resolution, temporal resolution or signal-to-noise ratio. The goal of SVC is to let the high-quality image layer make full use of the information of the low-quality image layer as far as possible, improving the efficiency of inter-layer prediction, so that the high-quality image can be coded more efficiently.

In order to improve inter-layer prediction efficiency, in the prior art, if at least one of the image blocks in the low-quality layer image that correspond to an image block in the high-quality layer image uses an inter prediction mode, the suitably scaled motion information of the image block of the low-quality layer image is used directly as the motion information of the corresponding image block in the high-quality layer image. However, when, for example, one or more corresponding sub-blocks in the low-quality layer image are coded in intra mode (that is, the motion information of such a corresponding sub-block is empty), the sub-block in the high-quality layer image block cannot obtain motion information from the corresponding sub-block in the low-quality layer image. In this case, the motion information of this sub-block can be constructed according to a given method. However, the motion information derived in this way is inaccurate, which affects the coding performance of this sub-block and further affects the coding efficiency of the whole high-quality layer image.

Accordingly, it is desirable to provide a method that can improve the coding performance for a sub-block of a target image block of the high-quality layer image that cannot obtain motion information from the corresponding sub-block in the low-quality layer image.
Summary of the invention
Embodiments of the present invention provide a method and an apparatus for image processing, which can improve the coding performance for a sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image.
In a first aspect, a method for image processing is provided, the method including: when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; determining, according to the motion information of the second target image sub-block, first reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and encoding the target image block to generate a target code stream and first indication information included in the target code stream.
In a possible implementation, the determining a second target image sub-block according to the size of the target image block, the size of the target image sub-blocks included in the target image block and the second indication information used for indicating the position of the first target image sub-block in the target image block includes: determining the second target image sub-block according to any one of the following formulas:

Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;

where Idx2 represents third indication information used for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.
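For illustration only, a minimal C++ sketch of these three index formulas is given below, assuming integer (truncating) division and modulo throughout; the variable names and the sample value N = 4 (a 16 × 16 target image block made of 4 × 4 sub-blocks) are assumptions made for the example, not values fixed by the text.

```cpp
#include <cstdio>

// Second-target-sub-block index from the second indication information Idx1.
// Integer division truncates, matching the formulas above.
int idx2FormulaA(int idx1, int n) {
    return idx1 / n * n
         + ((idx1 % n / (n / 2)) * 2 + (1 - idx1 % n / (n / 4) % 2)) * (n / 4);
}
int idx2FormulaB(int idx1, int n) {
    return idx1 / n * n
         + ((1 - idx1 % n / (n / 2)) * 2 + (idx1 % n / (n / 4) % 2)) * (n / 4);
}
int idx2FormulaC(int idx1, int n) {
    return idx1 / n * n
         + ((1 - idx1 % n / (n / 2)) * 2 + (1 - idx1 % n / (n / 4) % 2)) * (n / 4);
}

int main() {
    const int n = 4;  // assumed: 16x16 block of 4x4 sub-blocks, Idx1 in 0..15
    for (int idx1 = 0; idx1 < n; ++idx1) {
        std::printf("Idx1=%d -> A:%d B:%d C:%d\n", idx1,
                    idx2FormulaA(idx1, n), idx2FormulaB(idx1, n), idx2FormulaC(idx1, n));
    }
    return 0;
}
```

With these assumptions, the first formula maps each sub-block to its horizontal neighbour within the same pair (0↔1, 2↔3), while the other two variants select a sub-block in the other half of the row.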
In combination with the first aspect or the first possible implementation, in a second possible implementation, the determining, according to the motion information of the second target image sub-block, the first reference information used for encoding the first target image sub-block includes: if the motion information of the second target image sub-block is empty, determining that the first reference information is zero motion information.

In combination with the first aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the encoding the target image block includes: performing motion compensation processing on the first target image sub-block according to the first reference information.

In combination with the first aspect, the first possible implementation, the second possible implementation or the third possible implementation, in a fourth possible implementation, the encoding the target image block according to the reference information includes: performing deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the first aspect or any one of the first to fourth possible implementations, in a fifth possible implementation, the encoding the target image block according to the reference information includes: performing entropy encoding on the first indication information, so that the first indication information is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the first aspect or any one of the first to fifth possible implementations, in a sixth possible implementation, the encoding the target image block according to the reference information includes: determining a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and performing entropy encoding on the first indication information according to the context.
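A minimal sketch of this context selection is shown below. It assumes a binary context-adaptive entropy coder with two contexts for the first indication information; the number of contexts and the coder interface are illustrative assumptions, since the text does not fix either.

```cpp
// Hypothetical context derivation for the first indication information.
// 'neighborUsesReferenceInfo' reports whether the reference image block at the
// preset position in the enhancement layer image was encoded using reference
// information.
int selectContext(bool neighborUsesReferenceInfo) {
    return neighborUsesReferenceInfo ? 1 : 0;  // two contexts assumed
}

// Assumed usage with an assumed entropy-coder interface:
//   entropyEncoder.encodeBin(firstIndicationFlag, selectContext(neighborFlag));
```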
In a second aspect, a method for image processing is provided, the method including: obtaining first indication information from a target code stream; when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining, based on the first indication information, a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; determining, according to the motion information of the second target image sub-block, first reference information used for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and decoding the target code stream to obtain the target image block.
In a possible implementation, the determining a second target image sub-block according to the size of the target image block, the size of the target image sub-blocks included in the target image block and the second indication information used for indicating the position of the first target image sub-block in the target image block includes: determining the second target image sub-block according to any one of the following formulas:

Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;

where Idx2 represents third indication information used for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.

In combination with the second aspect or the first possible implementation, in a second possible implementation, the determining, according to the motion information of the second target image sub-block, the first reference information used for decoding the first target image sub-block includes: if the motion information of the second target image sub-block is empty, determining that the first reference information is zero motion information.

In combination with the second aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the decoding the target code stream includes: performing motion compensation processing on the first target image sub-block according to the first reference information.

In combination with the second aspect, the first possible implementation, the second possible implementation or the third possible implementation, in a fourth possible implementation, the decoding the target image block according to the reference information includes: performing deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the second aspect or any one of the first to fourth possible implementations, in a fifth possible implementation, the obtaining first indication information from a target code stream includes: obtaining, from the target code stream, first indication information that is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the second aspect or any one of the first to fifth possible implementations, in a sixth possible implementation, the obtaining first indication information from a target code stream includes: determining a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and performing entropy decoding according to the context to determine the first indication information.
In a third aspect, an apparatus for image processing is provided, the apparatus including: an obtaining unit, configured to: when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; and determine, according to the motion information of the second target image sub-block, first reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and an encoding unit, configured to encode the target image block to generate a target code stream and first indication information included in the target code stream.
In a possible implementation, the obtaining unit is specifically configured to determine the second target image sub-block according to any one of the following formulas:

Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;

where Idx2 represents third indication information used for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.

In combination with the third aspect or the first possible implementation, in a second possible implementation, the obtaining unit is specifically configured to: if the motion information of the second target image sub-block is empty, determine that the first reference information is zero motion information.

In combination with the third aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the encoding unit is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.

In combination with the third aspect, the first possible implementation, the second possible implementation or the third possible implementation, in a fourth possible implementation, the encoding unit is further configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the third aspect or any one of the first to fourth possible implementations, in a fifth possible implementation, the encoding unit is specifically configured to perform entropy encoding on the first indication information, so that the first indication information is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the third aspect or any one of the first to fifth possible implementations, in a sixth possible implementation, the encoding unit is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and perform entropy encoding on the first indication information according to the context.
In a fourth aspect, an apparatus for image processing is provided, the apparatus including: a decoding unit, configured to obtain first indication information from a target code stream; and an obtaining unit, configured to: when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the first indication information obtained by the decoding unit, a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; and determine, according to the motion information of the second target image sub-block, first reference information used for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; the decoding unit is further configured to decode the target code stream to obtain the target image block.
In a possible implementation, the obtaining unit is specifically configured to determine the second target image sub-block according to any one of the following formulas:

Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;

where Idx2 represents third indication information used for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.

In combination with the fourth aspect or the first possible implementation, in a second possible implementation, the obtaining unit is specifically configured to: if the motion information of the second target image sub-block is empty, determine that the first reference information is zero motion information.

In combination with the fourth aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the decoding unit is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.

In combination with the fourth aspect, the first possible implementation, the second possible implementation or the third possible implementation, in a fourth possible implementation, the decoding unit is further configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the fourth aspect or any one of the first to fourth possible implementations, in a fifth possible implementation, the decoding unit is specifically configured to obtain, from the target code stream, first indication information that is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the fourth aspect or any one of the first to fifth possible implementations, in a sixth possible implementation, the decoding unit is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and perform entropy decoding according to the context to determine the first indication information.
In a fifth aspect, an encoder for image processing is provided, the encoder including: a bus; a processor connected to the bus; and a memory connected to the bus; where the processor, by calling, through the bus, a program stored in the memory, is configured to: when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; determine, according to the motion information of the second target image sub-block, first reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and encode the target image block to generate a target code stream and first indication information included in the target code stream.
In a possible implementation, the processor is specifically configured to determine the second target image sub-block according to any one of the following formulas:

Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;

where Idx2 represents third indication information used for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.

In combination with the fifth aspect or the first possible implementation, in a second possible implementation, the processor is specifically configured to: if the motion information of the second target image sub-block is empty, determine that the first reference information is zero motion information.

In combination with the fifth aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the processor is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.

In combination with the fifth aspect, the first possible implementation, the second possible implementation or the third possible implementation, in a fourth possible implementation, the processor is specifically configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the fifth aspect or any one of the first to fourth possible implementations, in a fifth possible implementation, the processor is specifically configured to perform entropy encoding on the first indication information, so that the first indication information is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the fifth aspect or any one of the first to fifth possible implementations, in a sixth possible implementation, the processor is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and perform entropy encoding on the first indication information according to the context.
In a sixth aspect, a decoder for image processing is provided, the decoder including: a bus; a processor connected to the bus; and a memory connected to the bus; where the processor, by calling, through the bus, a program stored in the memory, is configured to: obtain first indication information from a target code stream; when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the first indication information, a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block; determine, according to the motion information of the second target image sub-block, first reference information used for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and decode the target code stream to obtain the target image block.
In a possible implementation, the processor is specifically configured to determine the second target image sub-block according to any one of the following formulas:

Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;

where Idx2 represents third indication information used for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.

In combination with the sixth aspect or the first possible implementation, in a second possible implementation, the processor is specifically configured to: if the motion information of the second target image sub-block is empty, determine that the first reference information is zero motion information.

In combination with the sixth aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the processor is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.

In combination with the sixth aspect, the first possible implementation, the second possible implementation or the third possible implementation, in a fourth possible implementation, the processor is specifically configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the sixth aspect or any one of the first to fourth possible implementations, in a fifth possible implementation, the processor is specifically configured to obtain, from the target code stream, first indication information that is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the sixth aspect or any one of the first to fifth possible implementations, in a sixth possible implementation, the processor is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and perform entropy decoding according to the context to determine the first indication information.
In a seventh aspect, a method for image processing is provided, the method including: when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining, according to the reconstructed pixels of the first base layer image sub-block, second reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and encoding the target image block to generate a target code stream and fourth indication information included in the target code stream.
In a possible implementation, the encoding the target image block includes: performing deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the seventh aspect or the first possible implementation, in a second possible implementation, the encoding the target image block includes: performing entropy encoding on the fourth indication information, so that the fourth indication information is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the seventh aspect, the first possible implementation or the second possible implementation, the encoding the target image block includes: determining a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and performing entropy encoding on the fourth indication information according to the context.
In an eighth aspect, a method for image processing is provided, the method including: obtaining fourth indication information from a target code stream; when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining, based on the fourth indication information and according to the reconstructed pixels of the first base layer image sub-block, second reference information used for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and decoding the target code stream to obtain the target image block.
In a possible implementation, the decoding the target code stream includes: performing deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the eighth aspect or the first possible implementation, in a second possible implementation, the obtaining fourth indication information from a target code stream includes: obtaining, from the target code stream, fourth indication information that is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the eighth aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the obtaining fourth indication information from a target code stream includes: determining a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and performing entropy decoding according to the context to determine the fourth indication information.
In a ninth aspect, an apparatus for image processing is provided, the apparatus including: an obtaining unit, configured to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, according to the reconstructed pixels of the first base layer image sub-block, second reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and an encoding unit, configured to encode the target image block to generate a target code stream and fourth indication information included in the target code stream.
In a possible implementation, the encoding unit is specifically configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the ninth aspect or the first possible implementation, in a second possible implementation, the encoding unit is specifically configured to perform entropy encoding on the fourth indication information, so that the fourth indication information is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the ninth aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the encoding unit is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and perform entropy encoding on the fourth indication information according to the context.
In a tenth aspect, an apparatus for image processing is provided, the apparatus including: a decoding unit, configured to obtain fourth indication information from a target code stream; and an obtaining unit, configured to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the fourth indication information obtained by the decoding unit and according to the reconstructed pixels of the first base layer image sub-block, second reference information used for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; the decoding unit is further configured to decode the target code stream to obtain the target image block.
In a possible implementation, the decoding unit is specifically configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the tenth aspect or the first possible implementation, in a second possible implementation, the decoding unit is specifically configured to obtain, from the target code stream, fourth indication information that is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the tenth aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the decoding unit is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and perform entropy decoding according to the context to determine the fourth indication information.
In an eleventh aspect, an encoder for image processing is provided, the encoder including: a bus; a processor connected to the bus; and a memory connected to the bus; where the processor, by calling, through the bus, a program stored in the memory, is configured to: when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, according to the reconstructed pixels of the first base layer image sub-block, second reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and encode the target image block to generate a target code stream and fourth indication information included in the target code stream.
In a possible implementation, the processor is specifically configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the eleventh aspect or the first possible implementation, in a second possible implementation, the processor is specifically configured to perform entropy encoding on the fourth indication information, so that the fourth indication information is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the eleventh aspect, the first possible implementation or the second possible implementation, in a third possible implementation, the processor is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and perform entropy encoding on the fourth indication information according to the context.
In a twelfth aspect, a decoder for image processing is provided, the decoder including: a bus; a processor connected to the bus; and a memory connected to the bus; where the processor, by calling, through the bus, a program stored in the memory, is configured to: obtain fourth indication information from a target code stream; when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the fourth indication information and according to the reconstructed pixels of the first base layer image sub-block, second reference information used for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and decode the target code stream to obtain the target image block.
In a possible implementation, the processor is specifically configured to perform deblocking filtering processing on pixels located near the boundaries between the target image sub-blocks.

In combination with the twelfth aspect or the first possible implementation, in a second possible implementation, the processor is specifically configured to obtain, from the target code stream, fourth indication information that is adjacent to skip mode flag information or merge (MERGE) mode flag information in the target code stream.

In combination with the twelfth aspect, the processor is specifically configured to: determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and perform entropy decoding according to the context to determine the fourth indication information.
According to the method and apparatus for image processing of the embodiments of the present invention, for a first target image sub-block in a target image block of an enhancement layer image that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, reference information for the first target image sub-block is determined according to the motion information of the second target image sub-block or according to the reconstructed pixels of the first base layer image sub-block corresponding to the first target image sub-block in spatial position, and encoding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments of the present invention are briefly described below. Obviously, the accompanying drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic flowchart of a method for image processing according to an embodiment of the invention.
Fig. 2 is a schematic diagram of block partitioning and sub-block indexing according to an embodiment of the invention.
Fig. 3 is a schematic flowchart of a method for image processing according to another embodiment of the present invention.
Fig. 4 is a schematic block diagram of an apparatus for image processing according to an embodiment of the invention.
Fig. 5 is a schematic block diagram of an apparatus for image processing according to another embodiment of the present invention.
Fig. 6 is a schematic block diagram of an encoder for image processing according to an embodiment of the invention.
Fig. 7 is a schematic block diagram of a decoder for image processing according to another embodiment of the present invention.
Fig. 8 is a schematic flowchart of a method for image processing according to yet another embodiment of the invention.
Fig. 9 is a schematic flowchart of a method for image processing according to yet another embodiment of the invention.
Fig. 10 is a schematic block diagram of an apparatus for image processing according to yet another embodiment of the invention.
Fig. 11 is a schematic block diagram of an apparatus for image processing according to yet another embodiment of the invention.
Fig. 12 is a schematic block diagram of an encoder for image processing according to yet another embodiment of the invention.
Fig. 13 is a schematic block diagram of a decoder for image processing according to yet another embodiment of the invention.
Description of embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 shows a schematic flowchart, described from the encoder side, of a method 100 for image processing according to an embodiment of the present invention. As shown in Fig. 1, the method 100 includes:

S110. When it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used for indicating the position of the first target image sub-block in the target image block.

S120. Determine, according to the motion information of the second target image sub-block, first reference information used for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image.

S130. Encode the target image block to generate a target code stream and first indication information included in the target code stream.
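As a reading aid only, the sketch below outlines the handling of one target image sub-block in steps S110 and S120; the type and function names are assumptions made for illustration, the index formula used is the first of the three variants given above, and the fallback shown is the zero-motion case of the second possible implementation rather than a complete encoder.

```cpp
#include <vector>

// Hypothetical types; real motion information also carries the prediction
// direction and reference image indexes described later in this text.
struct MotionInfo {
    bool isEmpty = true;   // true when the sub-block has no motion information
    int  mvX = 0, mvY = 0; // motion vector components (zero by default)
};

// S110..S120 for an enhancement-layer sub-block whose co-located base-layer
// sub-block has empty motion information.
MotionInfo deriveFirstReferenceInfo(int idx1, int n,
                                    const std::vector<MotionInfo>& subBlockMotion) {
    // S110: locate the second target image sub-block (first formula variant).
    int idx2 = idx1 / n * n
             + ((idx1 % n / (n / 2)) * 2 + (1 - idx1 % n / (n / 4) % 2)) * (n / 4);
    // S120: use its motion information, or fall back to zero motion
    // information if it is also empty.
    if (!subBlockMotion[idx2].isEmpty) {
        return subBlockMotion[idx2];
    }
    return MotionInfo{false, 0, 0};  // zero motion information
    // S130 (not shown): motion compensation, deblocking filtering near
    // sub-block boundaries and entropy coding of the first indication
    // information follow in the encoder proper.
}
```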
Specifically, when an image is coded hierarchically, for example in spatial scalable coding, the image may be down-sampled to obtain a low-resolution image, while the original image is referred to as the high-resolution image, and the encoder encodes the low-resolution image and the high-resolution image separately. For ease of description, the image to be coded whose quality is higher is referred to herein as the enhancement layer image, and the corresponding image to be coded whose quality is lower (for example, the low-resolution image) is referred to as the base layer image.
In the embodiments of the present invention, the target image is an image processed using a layered coding technique. The base layer refers to the layer in the layered coding whose quality (including parameters such as frame rate, spatial resolution, temporal resolution, signal-to-noise ratio or quality level) is lower, and the enhancement layer refers to the layer in the layered coding whose quality (including parameters such as frame rate, spatial resolution, temporal resolution, signal-to-noise ratio or quality level) is higher. It should be noted that, in the embodiments of the present invention, for a given enhancement layer, the base layer corresponding to it may be any layer whose quality is lower than that of the enhancement layer. For example, if there are currently five layers whose coding quality increases successively (that is, the first layer has the lowest quality and the fifth layer has the highest quality), and the enhancement layer is the fourth layer, then the base layer may be the first layer, the second layer or the third layer. Similarly, for a given base layer, the enhancement layer corresponding to it may be any layer whose quality is higher than that of the base layer.

The enhancement layer image is the image in the enhancement layer currently being processed, and the base layer image is the image in the base layer at the same moment as the enhancement layer image.

In summary, in the embodiments of the present invention, the quality of the base layer image is lower than the quality of the enhancement layer image.
The target image block is the image block being processed in the enhancement layer image.

The base layer image block is the image block in the base layer image that has a corresponding relationship in spatial position with the target image block.
In embodiments of the present invention, the corresponding relation of the image block in Primary layer and the image block in enhancement layer can basis
Basic resolution proportionate relationship between tomographic image and enhancement layer image is calculated.For example, x direction and y direction are being included
In system, if the resolution in x direction and y direction for the enhancement layer image is 2 times of basic tomographic image respectively, for enhancement layer
The pixel coordinate in the middle upper left corner is (2x, 2y) and the image block for (2m) × (2n) for the size, the corresponding blocks in its basic tomographic image
Can be the pixel coordinate in the upper left corner be the image block that (x, y) and size are m × n.
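As a non-limiting illustration of this correspondence, the following sketch (the function name and the assumption of an integer resolution ratio are only for illustration and are not part of the described embodiment) maps the top-left coordinate and size of an enhancement layer block to its base layer corresponding block; the 2× example above corresponds to ratio_x = ratio_y = 2.
```python
def base_layer_corresponding_block(ex, ey, width, height, ratio_x=2, ratio_y=2):
    """Map an enhancement-layer block (top-left (ex, ey), size width x height)
    to its base-layer corresponding block, assuming an integer resolution ratio.
    For the 2x example in the text, (2x, 2y) with size (2m) x (2n) maps to
    (x, y) with size m x n."""
    bx, by = ex // ratio_x, ey // ratio_y
    bw, bh = width // ratio_x, height // ratio_y
    return bx, by, bw, bh

# The 2x example: an enhancement block at (32, 16) of size 16 x 8
print(base_layer_corresponding_block(32, 16, 16, 8))  # -> (16, 8, 8, 4)
```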
In the embodiments of the present invention, a sub-block mentioned hereinafter refers to a sub-block of the target image block (in the enhancement layer image block), and a corresponding sub-block refers to the sub-block in the base layer image block that corresponds to that sub-block.
In the embodiments of the present invention, the motion information may include one or more of a prediction direction, a reference picture index and a motion vector. The prediction direction may be uni-directional or bi-directional, and uni-directional prediction may in turn be forward prediction or backward prediction. Forward prediction means that the prediction signal is produced using the forward reference picture list, that is, a reference picture in list 0; backward prediction means that the prediction signal is produced using the backward reference picture list, that is, a reference picture in list 1; bi-directional prediction means that the prediction signal is produced using reference pictures in both list 0 and list 1. For uni-directional prediction, one reference picture index is needed to indicate the selected reference picture in list 0 or list 1; for bi-directional prediction, two reference picture indices are needed to indicate the selected reference pictures in list 0 and list 1 respectively. Each motion vector includes a horizontal component x and a vertical component y and can be written as (x, y). For uni-directional prediction, one motion vector is needed to indicate the displacement of the prediction signal in the selected list 0 or list 1 reference picture; for bi-directional prediction, two motion vectors are needed to indicate, respectively, the displacements of the forward prediction signal in the selected list 0 reference picture and of the backward prediction signal in the selected list 1 reference picture.
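As a non-limiting illustration, the motion information just described could be represented by a structure such as the following sketch; the field names are assumptions for illustration only and are not part of the described embodiment.
```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionInfo:
    # 'forward' (list 0), 'backward' (list 1) or 'bi' (both lists)
    pred_direction: str
    ref_idx_l0: Optional[int] = None          # reference picture index in list 0
    ref_idx_l1: Optional[int] = None          # reference picture index in list 1
    mv_l0: Optional[Tuple[int, int]] = None   # (x, y) displacement in the list 0 picture
    mv_l1: Optional[Tuple[int, int]] = None   # (x, y) displacement in the list 1 picture

# Uni-directional forward prediction: one index and one vector (list 0)
fwd = MotionInfo('forward', ref_idx_l0=0, mv_l0=(3, -1))
# Bi-directional prediction: two indices and two vectors
bi = MotionInfo('bi', ref_idx_l0=0, ref_idx_l1=1, mv_l0=(2, 0), mv_l1=(-1, 1))
```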
In the embodiments of the present invention, the target image block is regarded as being composed of at least two sub-blocks (that is, target image sub-blocks), and the size of a sub-block can be determined according to a preset value. For convenience of description, a sub-block size of 4 × 4 is taken as an example below. For example, if the size of the target image block is 16 × 16, it can be determined that the target image block includes 16 sub-blocks (each of size 4 × 4). Thus, in the embodiments of the present invention, the corresponding sub-block in the base layer (belonging to the corresponding image block) of each sub-block of the target image block can be determined, and the motion information of that corresponding sub-block can be determined.
In the embodiments of the present invention, according to the coordinate of a certain pixel in the sub-block (denoted "(Ex, Ey)"), the coordinate of the corresponding position of this pixel in the base layer image (denoted "(Bx, By)") can be determined, and the image block in the base layer that contains the corresponding position coordinate is taken as the corresponding sub-block. In the embodiments of the present invention, (Bx, By) can be calculated according to the following formula 1 and formula 2:
wherein Round( ) denotes the operation of truncating the fractional part, and Rx and Ry denote offsets; Rx can be calculated according to the following formula 3 and Ry according to the following formula 4:
Rx = 2^(S-5)    (3)
Ry = 2^(S-5)    (4)
wherein S is a precision control factor (for example, in the embodiments of the present invention it can be set to 16); Dx can be calculated according to the following formula 5 and Dy according to the following formula 6:
wherein BaseWidth denotes the width of the base layer image, BaseHeight denotes the height of the base layer image, ScaledBaseWidth denotes the width of the enhancement layer image, and ScaledBaseHeight denotes the height of the enhancement layer image.
Thus, the corresponding sub-block can be determined, and, when this corresponding sub-block includes motion information, the prediction direction and reference picture index in that motion information can be used directly as the prediction direction and reference picture index of this sub-block (the first target image sub-block). The motion vector (BMVx, BMVy) of this corresponding sub-block can be scaled according to the following formulas 7 to 10, and the scaled motion vector is used as the motion vector (EMVx, EMVy) of this sub-block (the first target image sub-block):
EMVx = (BMVx × ScaledBaseWidth + RBW) / BaseWidth    (7)
EMVy = (BMVy × ScaledBaseHeight + RBH) / BaseHeight    (8)
RBW = sgn(BMVx) × BaseWidth / 2    (9)
RBH = sgn(BMVy) × BaseHeight / 2    (10)
wherein sgn(x) is the sign function, which yields the sign of x.
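As a non-limiting illustration, formulas 7 to 10 can be written directly as code, as in the following sketch; the assumption that the division in formulas 7 and 8 truncates toward zero is made here for illustration only.
```python
def sgn(x):
    # sign function used in formulas 9 and 10
    return (x > 0) - (x < 0)

def trunc_div(a, b):
    # division truncating toward zero (an assumed reading of the '/' in formulas 7 and 8)
    return sgn(a) * sgn(b) * (abs(a) // abs(b))

def scale_motion_vector(bmv_x, bmv_y,
                        base_width, base_height,
                        scaled_base_width, scaled_base_height):
    """Scale a base-layer motion vector (BMVx, BMVy) to the enhancement layer
    following formulas 7 to 10."""
    rbw = sgn(bmv_x) * base_width // 2            # formula 9
    rbh = sgn(bmv_y) * base_height // 2           # formula 10
    emv_x = trunc_div(bmv_x * scaled_base_width + rbw, base_width)     # formula 7
    emv_y = trunc_div(bmv_y * scaled_base_height + rbh, base_height)   # formula 8
    return emv_x, emv_y

# 2x spatial scalability: a base-layer vector (3, -2) maps to (6, -4)
print(scale_motion_vector(3, -2, 960, 540, 1920, 1080))
```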
Here, it should be noted that if the base layer image and the enhancement layer image have the same resolution, the above scaling operation is not needed, and the motion information of the corresponding sub-block can be used directly as the motion information of the sub-block.
By the above method, the sub-blocks of the target image block that can obtain motion information from their corresponding sub-blocks in the base layer image can be determined.
Thus, for a sub-block that can obtain motion information from its corresponding sub-block in the base layer image, its motion information can be taken directly from the motion information of that corresponding sub-block.
Optionally, in the embodiments of the present invention, the method further includes:
according to the coding mode of the base layer image, determining whether the first base layer image sub-block corresponding to the first target image sub-block includes motion information.
Specifically, in the embodiments of the present invention, whether the corresponding sub-block includes motion information can be determined according to the coding mode of the base layer image (of the corresponding image block). For example, if the base layer image uses an intra prediction coding mode, it can be determined that the corresponding sub-block does not contain motion information (that is, the motion information of the first base layer image sub-block is null).
Thus, when the corresponding sub-block includes motion information, the corresponding sub-block can be determined and its motion information obtained by the process described above; when the corresponding sub-block does not include motion information, the above flow can be skipped.
For a sub-block that cannot obtain motion information from its corresponding sub-block in the base layer image (that is, the first target image sub-block), in S110 the second target image sub-block can be determined by the following method 1, and in S120 the first reference information is obtained according to the motion information of this second target image sub-block.
Method 1
Motion information can be filled for this target image sub-block, so that the filled motion information can be used as the reference information.
The method of filling motion information in the embodiments of the present invention is described in detail below.
Specifically, and without loss of generality, suppose the size of the target image block is 16 × 16 and the size of a sub-block is 4 × 4. In the embodiments of the present invention, the index assignment of the sub-blocks can be the same as in the prior art and is not described here; Fig. 2 shows the division and indices of the sub-blocks.
In the embodiments of the present invention, the processing levels can be determined according to the size of the target image block and the size of the sub-block, and processing is performed recursively level by level. For example, it can be specified that each processing unit (denoted as a first processing unit) in the lowest level (denoted as the first level) includes four sub-blocks, and each processing unit (denoted as a second processing unit) in the level above the first level (denoted as the second level) includes four first processing units, and so on; to avoid redundancy, the recursion is not described further. Thus, as a non-limiting example, the target image block shown in Fig. 2 includes two levels. In the first level, sub-blocks 0 to 3 constitute first processing unit 0, sub-blocks 4 to 7 constitute first processing unit 1, sub-blocks 8 to 11 constitute first processing unit 2, and sub-blocks 12 to 15 constitute first processing unit 3. In the second level, first processing units 0 to 3 constitute second processing unit 0. It should be understood that the level division method listed above is merely illustrative, and the present invention is not limited thereto.
In the embodiments of the present invention, for each first processing unit, whether the motion information of each sub-block is null can be judged in turn according to the sub-block index (for example, from small to large). If the motion information of a sub-block is null, its motion information can be determined based on the motion information of a sub-block adjacent to it within the same first processing unit (an example of the second target image sub-block). For example, if the motion information of the sub-block with index 0 (that is, sub-block 0, which belongs to first processing unit 0) is null, the motion information of another sub-block within the same processing unit (first processing unit 0) can be obtained and used as the motion information of sub-block 0. The acquisition order can be, for example: first obtain the motion information of the sub-block with index 1 (sub-block 1, an example of the second target image sub-block, horizontally adjacent to sub-block 0); if the motion information of sub-block 1 is null, then obtain the motion information of the sub-block with index 2 (sub-block 2, another example of the second target image sub-block, vertically adjacent to sub-block 0); if the motion information of sub-block 2 is also null, then obtain the motion information of the sub-block with index 3 (sub-block 3, another example of the second target image sub-block, diagonally adjacent to sub-block 0). In the same manner, for every sub-block whose motion information is null, its motion information can be filled by the above method. It should be understood that the motion information filling method listed above for sub-blocks whose motion information is null is merely illustrative, and the present invention is not limited thereto; for example, with respect to the above acquisition order, the motion information of the specified sub-block (here, an adjacent sub-block) in the vertical direction may be obtained first, then the motion information of the specified sub-block (here, an adjacent sub-block) in the horizontal direction, and then the motion information of the specified sub-block (here, an adjacent sub-block) in the diagonal direction. That is, the acquisition order can be changed arbitrarily. A sketch of this first-level filling is given below.
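As a non-limiting illustration of the first-level filling just described, the following sketch assumes the z-order sub-block numbering of Fig. 2 (within a 2 × 2 first processing unit, index offsets 0, 1, 2 and 3 are the top-left, horizontally adjacent, vertically adjacent and diagonally adjacent positions) and a hypothetical list named motion that maps a sub-block index to its motion information, with None meaning null.
```python
def fill_first_level(motion, unit_base):
    """Fill motion information for the four sub-blocks of one first processing
    unit (indices unit_base .. unit_base + 3, assumed to lie in a 2x2 z-order
    pattern). 'motion' is a list indexed by sub-block index; None means null."""
    for local in range(4):
        idx = unit_base + local
        if motion[idx] is not None:
            continue
        # XOR with 1, 2, 3 gives the horizontal, vertical and diagonal
        # neighbor inside the unit under z-order numbering (tried in that order)
        for neighbor_local in (local ^ 1, local ^ 2, local ^ 3):
            candidate = motion[unit_base + neighbor_local]
            if candidate is not None:
                motion[idx] = candidate
                break

# Example from the text: sub-block 0 is null, sub-block 1 has motion information.
motion = [None, ('fwd', 0, (3, -1)), None, None] + [None] * 12
fill_first_level(motion, unit_base=0)
print(motion[0])  # filled from sub-block 1
```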
Thus, after the above first-level processing has been carried out for each sub-block in each first processing unit, as long as at least one of the four sub-blocks in a first processing unit has motion information that is not null, motion information can be filled (in other words, obtained) for every sub-block in that first processing unit whose motion information is null.
It should be noted that, for a sub-block whose motion information has been filled by the above method, when the motion information of this sub-block is needed in subsequent processing, the motion information filled for this sub-block can be used directly. That is, the motion information of the second target image sub-block may refer to the motion information of the base layer corresponding sub-block of this second target image sub-block, or to the motion information filled for this second target image sub-block (from other enhancement layer sub-blocks) by the motion information filling method of the embodiments of the present invention.
Therefore, when it is determined that the corresponding sub-block of a certain sub-block (for example, sub-block 0) does not include motion information, motion information can be obtained, in the first level, from the other sub-blocks (for example, sub-blocks 1 to 3) that are in the same first processing unit (for example, first processing unit 0) as this sub-block 0. When the motion information of the other sub-blocks in the same first processing unit as this sub-block (the specified sub-blocks in the first level, for example, sub-blocks 1 to 3 for sub-block 0 in first processing unit 0) is null, the motion information of the specified sub-block (another example of the second target image sub-block) in a specified first processing unit (for example, first processing unit 1 to first processing unit 3) within the second processing unit can be obtained and used as the motion information of this sub-block (for example, sub-block 0).
That is, if the corresponding sub-blocks of all sub-blocks in a first processing unit (for example, first processing unit 0) are null, the motion information of the specified sub-block in another first processing unit (for example, first processing unit 1 to first processing unit 3) within the second processing unit can be obtained (for convenience of description, the sub-block in the upper-left corner of each first processing unit is taken as an example in the present invention), and this motion information is used as the motion information of each sub-block in this first processing unit (first processing unit 0). The acquisition order can be, for example: first obtain the motion information of the upper-left sub-block (sub-block 4, an example of the second target image sub-block) of the first processing unit with index 1 (first processing unit 1, which is horizontally adjacent to first processing unit 0); if the motion information of sub-block 4 is null, it can be considered that the motion information of the other sub-blocks in first processing unit 1 is also null, so the motion information of the upper-left sub-block (sub-block 8, another example of the second target image sub-block) of the first processing unit with index 2 (first processing unit 2, which is vertically adjacent to first processing unit 0) can be obtained; if the motion information of sub-block 8 is null, it can likewise be considered that the motion information of the other sub-blocks in first processing unit 2 is also null, so the motion information of the upper-left sub-block (sub-block 12, another example of the second target image sub-block) of the first processing unit with index 3 (first processing unit 3, which is diagonally adjacent to first processing unit 0) can be obtained. In the same manner, for every first processing unit whose motion information is null, its motion information can be filled by the above method. It should be understood that the motion information filling method listed above for sub-blocks whose motion information is null is merely illustrative, and the present invention is not limited thereto; for example, with respect to the above acquisition order, the motion information of the specified sub-block of the specified first processing unit (here, an adjacent first processing unit) in the vertical direction may be obtained first, then that of the specified first processing unit (here, an adjacent first processing unit) in the horizontal direction, and then that of the specified first processing unit (here, an adjacent first processing unit) in the diagonal direction. That is, the acquisition order can be changed arbitrarily. Moreover, the above "specified sub-block" is not limited to the sub-block in the upper-left corner of a first processing unit and may be a sub-block at any position within the same first processing unit. A sketch of this second-level filling is given below.
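Continuing the same assumptions (z-order numbering, four sub-blocks per first processing unit, the upper-left sub-block taken as the specified sub-block), the following sketch illustrates the second-level filling just described: when every sub-block of a first processing unit is still null, the unit borrows the motion information of the upper-left sub-block of the horizontally, vertically or diagonally adjacent unit, tried in that order. All names are assumptions for illustration only.
```python
def fill_second_level(motion, num_units=4, subblocks_per_unit=4):
    """Second-level filling: if all sub-blocks of a first processing unit are
    null, copy the motion information of the upper-left sub-block of the
    horizontally, vertically or diagonally adjacent unit (tried in that order)."""
    for unit in range(num_units):
        base = unit * subblocks_per_unit
        if any(motion[base + i] is not None for i in range(subblocks_per_unit)):
            continue  # the first-level pass already filled this unit
        for neighbor_unit in (unit ^ 1, unit ^ 2, unit ^ 3):
            candidate = motion[neighbor_unit * subblocks_per_unit]  # upper-left sub-block
            if candidate is not None:
                for i in range(subblocks_per_unit):
                    motion[base + i] = candidate
                break
```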
It should be noted that, because the size of the target image block enumerated above is 16 × 16 and the size of a sub-block is 4 × 4, the target image block includes only two levels and the above recursive procedure ends there. If the size of the target image block is larger, for example 32 × 32, while the size of a sub-block is 4 × 4, the target image block includes three levels, and the recursive operation can continue according to the same method as described above until all sub-blocks of the target image block have obtained motion information.
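As a non-limiting illustration consistent with the two examples above (a 16 × 16 block with 4 × 4 sub-blocks gives two levels, a 32 × 32 block gives three), the number of processing levels can be derived as in the following sketch; the assumption that each level groups exactly four units of the level below is taken from the description above.
```python
def num_processing_levels(block_size, sub_block_size=4):
    """Number of recursive processing levels, assuming square blocks and that
    each processing unit groups four units (or sub-blocks) of the level below."""
    sub_blocks = (block_size // sub_block_size) ** 2
    levels = 0
    while sub_blocks > 1:
        sub_blocks //= 4
        levels += 1
    return levels

print(num_processing_levels(16))  # 2 levels (16 sub-blocks)
print(num_processing_levels(32))  # 3 levels (64 sub-blocks)
```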
Optionally, determining the second target image sub-block according to the size of the target image block, the size of the target image sub-blocks included in this target image block, and the second indication information used for indicating the position of the first target image sub-block in this target image block includes:
determining this second target image sub-block according to any one of the following formulas,
Idx2 = (Idx1/N)×N + ((Idx1%N/(N/2))×2 + (1 - (Idx1%N/(N/4))%2)) × N/4;
Idx2 = (Idx1/N)×N + ((1 - Idx1%N/(N/2))×2 + ((Idx1%N/(N/4))%2)) × N/4;
Idx2 = (Idx1/N)×N + ((1 - Idx1%N/(N/2))×2 + (1 - (Idx1%N/(N/4))%2)) × N/4;
wherein Idx2 denotes third indication information used for indicating the position of this second target image sub-block in this target image block, Idx1 denotes this second indication information, and N is determined according to the size of this target image block and the size of this target image sub-block.
Alternatively, wherein Idx2 denotes third index information used for indicating the position of this second target image sub-block in this target image block, Idx1 denotes this second index information, / denotes integer division, % denotes the modulo (remainder) operation, and N denotes the number of sub-blocks included in this target image block.
Specifically, according to each of the above formulas, the second target image sub-block in the level currently being processed can be determined from the index of the sub-block currently being processed, where N corresponds to the level currently being processed and is determined according to the size of the target image block and the size of the sub-block. For example, if the size of the target image block is 16 × 16 and the size of a sub-block is 4 × 4, the target image block includes two levels as described above; when the first level is processed, N is the number of sub-blocks included in each processing unit (first processing unit) of this level, here 4; when the second level is processed, N is the number of sub-blocks included in each processing unit (second processing unit) of this level, here 16.
The above lists the formulas used when the "specified sub-block" is the upper-left sub-block of a processing unit. However, the present invention is not limited thereto; the above formulas can also be modified according to the position of the "specified sub-block" within the processing unit.
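As a non-limiting illustration, the three formulas can be evaluated as in the following sketch; the divisions are taken as integer divisions, and the reading of the three results as the horizontally, vertically and diagonally adjacent positions under the z-order numbering of Fig. 2 is an interpretation for illustration only.
```python
def second_target_candidates(idx1, n):
    """The three candidate indices Idx2 given by the formulas above, evaluated
    with integer (floor) division; n is the number of sub-blocks per processing
    unit at the current level (4 at the first level, 16 at the second)."""
    base = idx1 // n * n
    local = idx1 % n
    horizontal = base + ((local // (n // 2)) * 2 + (1 - local // (n // 4) % 2)) * (n // 4)
    vertical   = base + ((1 - local // (n // 2)) * 2 + (local // (n // 4) % 2)) * (n // 4)
    diagonal   = base + ((1 - local // (n // 2)) * 2 + (1 - local // (n // 4) % 2)) * (n // 4)
    return horizontal, vertical, diagonal

print(second_target_candidates(0, 4))   # (1, 2, 3): sub-blocks 1, 2, 3 for sub-block 0
print(second_target_candidates(0, 16))  # (4, 8, 12): upper-left sub-blocks of units 1, 2, 3
```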
Optionally, determining the first reference information of this first target image sub-block according to the motion information of this second target image sub-block includes:
if the motion information of this second target image sub-block is null, determining that this first reference information is zero motion information.
Specifically, if motion information cannot be filled for this sub-block after the above processing, zero motion information is used as the motion information of this sub-block. In the embodiments of the present invention, zero motion information can be constructed as follows. For example, in a predictive-coded image frame (P frame), the prediction direction of the zero motion information is uni-directional, the reference picture index is 0, and the motion vector is (0, 0). In a bi-directionally predictive-coded image frame (B frame), the prediction direction of the zero motion information is bi-directional, both reference picture indices are 0, and both motion vectors are (0, 0).
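As a non-limiting illustration, the zero motion information just described could be constructed as in the following sketch; the field names are assumptions for illustration only.
```python
def zero_motion_info(frame_type):
    """Zero motion information as described in the text: uni-directional
    prediction, reference index 0 and motion vector (0, 0) for a P frame;
    bi-directional prediction, both reference indices 0 and both motion
    vectors (0, 0) for a B frame."""
    if frame_type == 'P':
        return {'pred_direction': 'forward', 'ref_idx': [0], 'mv': [(0, 0)]}
    if frame_type == 'B':
        return {'pred_direction': 'bi', 'ref_idx': [0, 0], 'mv': [(0, 0), (0, 0)]}
    raise ValueError('zero motion information is defined here only for P and B frames')

print(zero_motion_info('P'))
```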
It should be noted that, when the target image block includes multiple processing levels, the above use of zero motion information as the motion information of a sub-block can be carried out after the last level has been processed, or after any other level has been processed; the present invention is not particularly limited in this respect. It should be understood that the methods of obtaining the motion information of a sub-block listed above are merely illustrative of the present invention, and the present invention is not limited thereto. For example, in the present invention it is also possible, as described above, to determine whether the corresponding sub-block includes motion information according to the coding mode of the base layer image (of the corresponding image block); for example, if the base layer image uses an intra prediction coding mode, it can be determined that the corresponding sub-block does not contain motion information (that is, the motion information of the first base layer image sub-block is null). If it is determined that only one sub-block among all the sub-blocks of the target image block (more precisely, its corresponding sub-block) has motion information, the motion information of this sub-block can be used as the motion information of the other sub-blocks.
Thus, by the above method, the first reference information of the first target image sub-block can be determined.
In S130, the first reference information of the first target image sub-blocks (whose corresponding sub-blocks in the base layer do not include motion information) is obtained by the above method 1.
Optionally, encoding this target image block includes:
performing motion compensation processing on this first target image sub-block according to this first reference information.
Specifically, the first target image sub-block can also be encoded (more precisely, subjected to motion compensation processing) according to the reference information (specifically, the motion information) of the first target image sub-block. In particular, independent motion compensation processing can be performed on this first target image sub-block according to the motion information obtained or filled for this first target image sub-block as described above.
In the embodiments of the present invention, the motion information of the third target image sub-blocks (whose corresponding sub-blocks in the base layer include motion information) can also be obtained; the method of obtaining the motion information of the third target image sub-blocks can be the same as in the prior art and its description is omitted here.
Thus, after motion compensation processing has been performed on all sub-blocks of the target image block and the prediction signal of the target image block has been obtained, predictive coding can be performed on the target image block and the rate-distortion cost calculated. After the rate-distortion cost of the target image block has been calculated, if this rate-distortion cost is the smallest, a flag (the first indication information) can be determined to indicate to the decoding end that the reference information of the first target image sub-blocks is to be obtained by the above method 1 (method 2 at the decoding end) and that the motion information of the third target image sub-blocks (whose corresponding sub-blocks in the base layer include motion information) is to be obtained, and entropy coding is performed on this first indication information.
Optionally, encoding this first target image sub-block according to this reference information includes:
performing entropy coding on this first indication information, so that this first indication information is adjacent, in this target code stream, to the skip mode flag bit or to the merge (MERGE) mode flag bit information.
Specifically, in the embodiments of the present invention, the first indication information can be placed, in the target code stream, at a position adjacent to the skip mode flag bit information. For example, the first indication information can be placed before the skip mode flag bit information, as the first piece of information of the target image block in the target code stream, or it can be placed after the skip mode flag bit information, as the second piece of information of the target image block in the target code stream. The above skip mode can be the same mode as in the prior art, and its determination method and placement can be the same as in the prior art; to avoid repetition, they are not described here.
As another example, the first indication information can be placed at a position adjacent to the MERGE mode flag bit information: it can be placed before the MERGE mode flag bit information, or after the MERGE mode flag bit information. The MERGE mode can be the same mode as in the prior art, and its determination method and placement can be the same as in the prior art; to avoid repetition, they are not described here.
In the embodiments of the present invention, this first indication information can be a binary flag bit. Therefore, when entropy coding is performed on the first indication information, binarization of the first indication information is not needed.
Thereafter, the context used when entropy coding the first indication information can be selected. Optionally, encoding this first target image sub-block according to this reference information includes:
determining a context according to whether a reference image block located at a preset position in this enhancement layer image is encoded using reference information;
performing entropy coding on this first indication information according to this context.
Specifically, as shown in Table 1 below, three contexts, indexed 0, 1 and 2, can be used. In the present embodiment, which context is used is determined according to whether the image blocks to the left of and above the target image block use the base layer mode, that is, according to whether each of them uses its respective first indication information. For example, if neither the image block to the left of the target image block nor the image block above it uses the first indication information, the context model with index 0 is selected; if one of the image block to the left and the image block above uses the first indication information, the context model with index 1 is selected; and if both the image block to the left and the image block above use the first indication information, the context with index 2 is selected.
Table 1
  Neighboring blocks (left, above) using the first indication information    Context index
  neither                                                                    0
  one of the two                                                             1
  both                                                                       2
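As a non-limiting illustration of Table 1, the context index is simply the number of neighboring blocks (to the left and above) that themselves use the first indication information; the boolean arguments in the following sketch are assumptions for illustration only.
```python
def select_context(left_uses_indication, above_uses_indication):
    """Context index for entropy coding the first indication information:
    0 if neither neighboring block uses it, 1 if exactly one does, 2 if both do."""
    return int(bool(left_uses_indication)) + int(bool(above_uses_indication))

print(select_context(False, False))  # 0
print(select_context(True, False))   # 1
print(select_context(True, True))    # 2
```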
Thereafter, binary arithmetic coding can be performed on this first indication information according to the context selected as described above, and the used context model is updated. In the embodiments of the present invention, this process can be the same as in the prior art; to avoid repetition, it is not described here.
In the embodiments of the present invention, each first target image sub-block can be encoded according to the reference information obtained, thereby completing the coding of the target image block, and the information of the generated target image block together with the first indication information after the above entropy coding is added to the code stream (the target code stream).
Here, it should be noted that this target code stream can include the information of the encoded target image (including the base layer image and the enhancement layer image), and this processing procedure can be the same as in the prior art; to avoid repetition, its description is omitted here.
At the decoding end, the target image information can be obtained from the code stream, the target image (more precisely, the target image block) can be determined, and entropy decoding can be performed on the obtained first indication information (the information after entropy coding). In this processing procedure, the context selection and context update processes are the same as or similar to those at the encoding end described above, and they are not described again here.
The binary character string (bin string) representing the first indication information can be parsed from the code stream according to the selected context, where this binary arithmetic decoding corresponds to the binary arithmetic coding at the encoding end.
In the embodiments of the present invention, it can be specified, for example, that when the first indication information is 1, the decoding end needs to obtain the first reference information of the first target image sub-block using the same method as the encoding end.
It should be understood that the indication manner of the first indication information listed above is merely illustrative, and the present invention is not limited thereto.
It should be noted that, in the embodiments of the present invention, when the reference information of the first target image sub-block is obtained using method 1, the first target image sub-block can be decoded (more precisely, subjected to motion compensation processing) according to the reference information (specifically, the motion information) of the first target image sub-block. In particular, independent motion compensation processing can be performed on this first target image sub-block according to the motion information filled for this first target image sub-block as described above.
Optionally, encoding this target image block includes:
performing deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
Specifically, the pixels near the boundaries between the sub-blocks of the target image block can also be filtered.
According to the method for image processing of the embodiments of the present invention, for a first target image sub-block of a target image block of the enhancement layer image that cannot obtain motion information from its corresponding sub-block in the base layer image, a second target image sub-block is determined according to the position of this first target image sub-block, the first reference information for this first target image sub-block is determined according to the motion information of this second target image sub-block, and encoding is performed according to this first reference information, so that the coding efficiency of this first target image sub-block can be improved.
Fig. 3 shows a schematic flowchart of a method 200 for image processing according to an embodiment of the present invention, described from the perspective of the decoding end. As shown in Fig. 3, the method 200 includes:
S210, obtain first indication information from a target code stream;
S220, when the motion information of the first base layer image sub-block corresponding to a first target image sub-block of a target image block is null, based on this first indication information, determine a second target image sub-block according to the size of this target image block, the size of each target image sub-block included in this target image block, and second indication information used for indicating the position of the first target image sub-block in this target image block;
S230, according to the motion information of this second target image sub-block, determine first reference information used for decoding this first target image sub-block, wherein the first base layer image sub-block is an image sub-block in the base layer image, the target image block is located in the enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
S240, decode this target code stream to obtain this target image block.
Specifically, in S210, the decoding end can obtain the target image information from the code stream, determine the target image (more precisely, the target image block), and obtain the first indication information (the information after entropy coding).
Optionally, obtaining the first indication information from the target code stream includes:
obtaining the first indication information from the target code stream, wherein this first indication information is adjacent, in this target code stream, to the skip mode flag bit or to the merge (MERGE) mode flag bit information.
Specifically, in the embodiments of the present invention, the first indication information can be placed, in the target code stream, at a position adjacent to the skip mode flag bit information. For example, the first indication information can be placed before the skip mode flag bit information, as the first piece of information of the target image block in the target code stream, or it can be placed after the skip mode flag bit information, as the second piece of information of the target image block in the target code stream. The above skip mode can be the same mode as in the prior art, and its determination method and placement can be the same as in the prior art; to avoid repetition, they are not described here.
As another example, the first indication information can be placed at a position adjacent to the MERGE mode flag bit information: it can be placed before the MERGE mode flag bit information, or after the MERGE mode flag bit information. This MERGE mode can be the same mode as in the prior art, and its determination method and placement can be the same as in the prior art; to avoid repetition, they are not described here.
Thereafter, the decoding end can perform entropy decoding on the obtained first indication information.
In the embodiments of the present invention, this first indication information can be a binary flag bit. Therefore, when entropy decoding is performed on the first indication information, binarization of the first indication information is not needed.
Thereafter, the context used when entropy coding the first indication information can be selected. Optionally, obtaining the first indication information from the target code stream includes:
determining a context according to whether a reference image block located at a preset position in this enhancement layer image is decoded using reference information;
performing entropy decoding according to this context, to determine this first indication information.
Specifically, as shown in Table 1 above, three contexts, indexed 0, 1 and 2, can be used. In the present embodiment, which context is used is determined according to whether the image blocks to the left of and above the target image block use the base layer mode, that is, according to whether each of them uses its respective first indication information. For example, if neither the image block to the left of the target image block nor the image block above it uses the first indication information, the context model with index 0 is selected; if one of the image block to the left and the image block above uses the first indication information, the context model with index 1 is selected; and if both the image block to the left and the image block above use the first indication information, the context with index 2 is selected.
Thereafter, binary arithmetic decoding can be performed on this first indication information according to the context selected as above, and the used context model is updated. In the embodiments of the present invention, this process can be the same as in the prior art; to avoid repetition, it is not described here.
In the embodiments of the present invention, it can be specified, for example, that when the first indication information is 1, the decoding end needs to obtain the first reference information of the first target image sub-block using the same method as the encoding end.
It should be understood that the indication manner of the first indication information listed above is merely illustrative, and the present invention is not limited thereto.
Therefore, the decoding end can determine, according to this first indication information, whether the first reference information of the first target image sub-block needs to be obtained. The case in which the decoding end needs to obtain the first reference information of the first target image sub-block is described below.
When hierarchical coding is performed on an image, for example spatially scalable coding, the image may first be down-sampled to obtain a low-resolution image, and by comparison the original image is called the high-resolution image; the encoder then encodes the low-resolution image and the high-resolution image separately. For convenience of description, the higher-quality image to be encoded is referred to herein as the enhancement layer image, and the corresponding lower-quality image to be encoded (for example, this low-resolution image) is referred to as the base layer image.
In the embodiments of the present invention, the target image is an image processed using a layered coding technique. The base layer refers to the layer whose quality (including parameters such as frame rate, spatial resolution, temporal resolution, signal-to-noise ratio or quality grade) is lower in the layered coding, and the enhancement layer refers to the layer whose quality (including the same parameters) is higher. It should be noted that, in the embodiments of the present invention, for a given enhancement layer, the base layer corresponding to it may be any layer whose quality is lower than that of the enhancement layer. For example, if there are currently five layers whose coding quality increases in turn (that is, the first layer has the lowest quality and the fifth layer the highest), and the enhancement layer is the fourth layer, then the base layer may be the first layer, the second layer or the third layer. Similarly, for a given base layer, the enhancement layer corresponding to it may be any layer whose quality is higher than that of the base layer.
The enhancement layer image is the image in the enhancement layer currently being processed, and the base layer image is the image in the base layer at the same moment as the enhancement layer image.
In summary, in the embodiments of the present invention, the quality of the base layer image is lower than the quality of the enhancement layer image.
The target image block is the image block being processed in this enhancement layer image.
The base layer image block is the image block in the base layer image that corresponds to the target image block in spatial position.
In the embodiments of the present invention, the correspondence between an image block in the base layer and an image block in the enhancement layer can be calculated from the resolution ratio between the base layer image and the enhancement layer image. For example, in a coordinate system with an x direction and a y direction, if the resolution of the enhancement layer image in the x and y directions is twice that of the base layer image in each direction, then for an image block in the enhancement layer whose top-left pixel coordinate is (2x, 2y) and whose size is (2m) × (2n), the corresponding block in the base layer image may be the image block whose top-left pixel coordinate is (x, y) and whose size is m × n.
In the embodiments of the present invention, a sub-block mentioned hereinafter refers to a sub-block of the target image block (in the enhancement layer image block), and a corresponding sub-block refers to the sub-block in the base layer image block that corresponds to that sub-block.
In the embodiments of the present invention, the motion information may include one or more of a prediction direction, a reference picture index and a motion vector. The prediction direction may be uni-directional or bi-directional, and uni-directional prediction may in turn be forward prediction or backward prediction. Forward prediction means that the prediction signal is produced using the forward reference picture list, that is, a reference picture in list 0; backward prediction means that the prediction signal is produced using the backward reference picture list, that is, a reference picture in list 1; bi-directional prediction means that the prediction signal is produced using reference pictures in both list 0 and list 1. For uni-directional prediction, one reference picture index is needed to indicate the selected reference picture in list 0 or list 1; for bi-directional prediction, two reference picture indices are needed to indicate the selected reference pictures in list 0 and list 1 respectively. Each motion vector includes a horizontal component x and a vertical component y and can be written as (x, y). For uni-directional prediction, one motion vector is needed to indicate the displacement of the prediction signal in the selected list 0 or list 1 reference picture; for bi-directional prediction, two motion vectors are needed to indicate, respectively, the displacements of the forward prediction signal in the selected list 0 reference picture and of the backward prediction signal in the selected list 1 reference picture.
In the embodiments of the present invention, the target image block is regarded as being composed of at least two sub-blocks (that is, target image sub-blocks), and the size of a sub-block can be determined according to a preset value. For convenience of description, a sub-block size of 4 × 4 is taken as an example below. For example, if the size of the target image block is 16 × 16, it can be determined that the target image block includes 16 sub-blocks (each of size 4 × 4). Thus, in the embodiments of the present invention, the corresponding sub-block in the base layer (the first base layer image sub-block) of each sub-block of the target image block (each first target image sub-block) can be determined, and the motion information of this corresponding sub-block can be determined.
In the embodiments of the present invention, according to the coordinate of a certain pixel in the sub-block (the first target image sub-block) (denoted "(Ex, Ey)"), the coordinate of the corresponding position of this pixel in the base layer image (denoted "(Bx, By)") can be determined, and the image block in the base layer that contains the corresponding position coordinate is taken as the corresponding sub-block (the first base layer image sub-block). In the embodiments of the present invention, (Bx, By) can be calculated according to the above formula 1 and formula 2 (together with formulas 3 to 6).
Thus, the first base layer image sub-block corresponding to this first target image sub-block can be determined, and, when this first base layer image sub-block includes motion information, the prediction direction and reference picture index in that motion information can be used directly as the prediction direction and reference picture index of this sub-block (the first target image sub-block). The motion vector (BMVx, BMVy) of this first base layer image sub-block can be scaled according to the above formulas 7 to 10, and the scaled motion vector is used as the motion vector (EMVx, EMVy) of this sub-block (the first target image sub-block).
Here, it should be noted that if the base layer image and the enhancement layer image have the same resolution, the above scaling operation is not needed, and the motion information of the corresponding sub-block can be used directly as the motion information of the sub-block.
By the above method, the sub-blocks of the target image block that can obtain motion information from their corresponding sub-blocks in the base layer image can be determined.
Thus, for a sub-block that can obtain motion information from its corresponding sub-block in the base layer image, its motion information can be taken directly from the motion information of that corresponding sub-block.
Optionally, in the embodiments of the present invention, the method further includes:
according to the coding mode of the base layer image, determining whether the first base layer image sub-block corresponding to the first target image sub-block includes motion information.
Specifically, in the embodiments of the present invention, whether the corresponding sub-block includes motion information can be determined according to the coding mode of the base layer image (of the corresponding image block). For example, if the base layer image uses an intra prediction coding mode, it can be determined that the corresponding sub-block does not contain motion information (that is, the motion information of the first base layer image sub-block is null).
Thus, when the corresponding sub-block includes motion information, the corresponding sub-block can be determined and its motion information obtained by the process described above; when the corresponding sub-block does not include motion information, the above flow can be skipped.
For a sub-block that cannot obtain motion information from its corresponding sub-block in the base layer image (that is, the first target image sub-block), in S220 the second target image sub-block can be determined by the following method 2, and in S230 the first reference information is obtained according to the motion information of this second target image sub-block.
Method 2
Motion information can be filled for this target image sub-block, so that the filled motion information can be used as the reference information.
The method of filling motion information in the embodiments of the present invention is described in detail below.
Specifically, and without loss of generality, suppose the size of the target image block is 16 × 16 and the size of a sub-block is 4 × 4. In the embodiments of the present invention, the index assignment of the sub-blocks can be the same as in the prior art and is not described here; Fig. 2 shows the division and indices of the sub-blocks.
In the embodiments of the present invention, the processing levels can be determined according to the size of the target image block and the size of the sub-block, and processing is performed recursively level by level. For example, it can be specified that each processing unit (denoted as a first processing unit) in the lowest level (denoted as the first level) includes four sub-blocks, and each processing unit (denoted as a second processing unit) in the level above the first level (denoted as the second level) includes four first processing units, and so on; to avoid redundancy, the recursion is not described further. Thus, as a non-limiting example, the target image block shown in Fig. 2 includes two levels. In the first level, sub-blocks 0 to 3 constitute first processing unit 0, sub-blocks 4 to 7 constitute first processing unit 1, sub-blocks 8 to 11 constitute first processing unit 2, and sub-blocks 12 to 15 constitute first processing unit 3. In the second level, first processing units 0 to 3 constitute second processing unit 0. It should be understood that the level division method listed above is merely illustrative, and the present invention is not limited thereto.
In the embodiments of the present invention, for each first processing unit, whether the motion information of each sub-block is null can be judged in turn according to the sub-block index (for example, from small to large). If the motion information of a sub-block is null, its motion information can be determined based on the motion information of a sub-block adjacent to it within the same first processing unit (an example of the second target image sub-block). For example, if the motion information of the sub-block with index 0 (that is, sub-block 0, which belongs to first processing unit 0) is null, the motion information of another sub-block within the same processing unit (first processing unit 0) can be obtained and used as the motion information of sub-block 0. The acquisition order can be, for example: first obtain the motion information of the sub-block with index 1 (sub-block 1, an example of the second target image sub-block, horizontally adjacent to sub-block 0); if the motion information of sub-block 1 is null, then obtain the motion information of the sub-block with index 2 (sub-block 2, another example of the second target image sub-block, vertically adjacent to sub-block 0); if the motion information of sub-block 2 is also null, then obtain the motion information of the sub-block with index 3 (sub-block 3, another example of the second target image sub-block, diagonally adjacent to sub-block 0). In the same manner, for every sub-block whose motion information is null, its motion information can be filled by the above method. It should be understood that the motion information filling method listed above for sub-blocks whose motion information is null is merely illustrative, and the present invention is not limited thereto; for example, with respect to the above acquisition order, the motion information of the specified sub-block (here, an adjacent sub-block) in the vertical direction may be obtained first, then the motion information of the specified sub-block (here, an adjacent sub-block) in the horizontal direction, and then the motion information of the specified sub-block (here, an adjacent sub-block) in the diagonal direction. That is, the acquisition order can be changed arbitrarily.
Thus, after the above first-level processing has been carried out for each sub-block in each first processing unit, as long as at least one of the four sub-blocks in a first processing unit has motion information that is not null, motion information can be filled (in other words, obtained) for every sub-block in that first processing unit whose motion information is null.
It should be noted that, for a sub-block whose motion information has been filled by the above method, when the motion information of this sub-block is needed in subsequent processing, the motion information filled for this sub-block can be used directly. That is, the motion information of the second target image sub-block may refer to the motion information of the base layer corresponding sub-block of this second target image sub-block, or to the motion information filled for this second target image sub-block (from other enhancement layer sub-blocks) by the motion information filling method of the embodiments of the present invention.
Therefore, when it is determined that the corresponding sub-block of a certain sub-block (for example, sub-block 0) does not include motion information, motion information can be obtained, in the first level, from the other sub-blocks (for example, sub-blocks 1 to 3) that are in the same first processing unit (for example, first processing unit 0) as this sub-block 0. When the motion information of the other sub-blocks in the same first processing unit as this sub-block (the specified sub-blocks in the first level, for example, sub-blocks 1 to 3 for sub-block 0 in first processing unit 0) is null, the motion information of the specified sub-block (another example of the second target image sub-block) in a specified first processing unit (for example, first processing unit 1 to first processing unit 3) within the second processing unit can be obtained and used as the motion information of this sub-block (for example, sub-block 0).
That is, if the corresponding sub-blocks of all sub-blocks in a first processing unit (for example, first processing unit 0) are null, the motion information of the specified sub-block in another first processing unit (for example, first processing unit 1 to first processing unit 3) within the second processing unit can be obtained (for convenience of description, the sub-block in the upper-left corner of each first processing unit is taken as an example in the present invention), and this motion information is used as the motion information of each sub-block in this first processing unit (first processing unit 0). The acquisition order can be, for example: first obtain the motion information of the upper-left sub-block (sub-block 4, an example of the second target image sub-block) of the first processing unit with index 1 (first processing unit 1, which is horizontally adjacent to first processing unit 0); if the motion information of sub-block 4 is null, it can be considered that the motion information of the other sub-blocks in first processing unit 1 is also null, so the motion information of the upper-left sub-block (sub-block 8, another example of the second target image sub-block) of the first processing unit with index 2 (first processing unit 2, which is vertically adjacent to first processing unit 0) can be obtained; if the motion information of sub-block 8 is null, it can likewise be considered that the motion information of the other sub-blocks in first processing unit 2 is also null, so the motion information of the upper-left sub-block (sub-block 12, another example of the second target image sub-block) of the first processing unit with index 3 (first processing unit 3, which is diagonally adjacent to first processing unit 0) can be obtained. In the same manner, for every first processing unit whose motion information is null, its motion information can be filled by the above method. It should be understood that the motion information filling method listed above for sub-blocks whose motion information is null is merely illustrative, and the present invention is not limited thereto; for example, with respect to the above acquisition order, the motion information of the specified sub-block of the specified first processing unit (here, an adjacent first processing unit) in the vertical direction may be obtained first, then that of the specified first processing unit (here, an adjacent first processing unit) in the horizontal direction, and then that of the specified first processing unit (here, an adjacent first processing unit) in the diagonal direction. That is, the acquisition order can be changed arbitrarily. Moreover, the above "specified sub-block" is not limited to the sub-block in the upper-left corner of a first processing unit and may be a sub-block at any position within the same first processing unit.
It should be noted that, because the size of the target image block enumerated above is 16 × 16 and the size of a sub-block is 4 × 4, the target image block includes only two levels and the above recursive procedure ends there. If the size of the target image block is larger, for example 32 × 32, while the size of a sub-block is 4 × 4, the target image block includes three levels, and the recursive operation can continue according to the same method as described above until all sub-blocks of the target image block have obtained motion information.
Optionally, the determining of the second target image sub-block according to the size of the target image block, the size of the target image sub-blocks included in the target image block and the second indication information for indicating the position of the first target image sub-block in the target image block includes:
determining the second target image sub-block according to any one of the following formulas,
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
where Idx2 represents third indication information (a third index) for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, "%" represents the modulo (remainder) operation, and N is determined according to the size of the target image block and the size of the target image sub-block, namely the number of sub-blocks included in the processing unit of the level currently being processed, as explained below.
Specifically, according to any of the above formulas, the second target image sub-block of the level currently being processed can be determined from the index of the sub-block currently being processed, where N corresponds to the level currently being processed and is determined according to the size of the target image block and the size of the sub-block. For example, if the size of the target image block is 16 × 16 and the size of a sub-block is 4 × 4, the target image block includes two layers as described above; when the first layer is processed, N is the number of sub-blocks included in each processing unit of that layer (a first processing unit), namely 4, and when the second layer is processed, N is the number of sub-blocks included in each processing unit of that layer (a second processing unit), namely 16.
The above formulas are those used when the "specified sub-block" is the upper-left sub-block of the processing unit. The present invention is not limited thereto, however, and the formulas may be modified according to the position of the "specified sub-block" within the processing unit.
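Purely as an illustrative check of the above formulas, the following sketch evaluates the three expressions with integer arithmetic. The function name, the interpretation of "/" as integer division and the association of each formula with the horizontal, vertical and diagonal neighbour are assumptions inferred from the example above.

```python
def second_subblock_indices(idx1, n):
    """Evaluate the three candidate formulas for Idx2 with integer arithmetic."""
    base = idx1 // n * n                    # start index of the current group
    half = (idx1 % n) // (n // 2)           # 0/1: which half of the unit
    quarter = (idx1 % n) // (n // 4) % 2    # 0/1: which quarter inside that half

    horizontal = base + (half * 2 + (1 - quarter)) * (n // 4)
    vertical = base + ((1 - half) * 2 + quarter) * (n // 4)
    diagonal = base + ((1 - half) * 2 + (1 - quarter)) * (n // 4)
    return horizontal, vertical, diagonal


# Example: the upper-left sub-block (Idx1 = 0) of a 16x16 block at the second
# level (N = 16) points to sub-blocks 4, 8 and 12, i.e. the upper-left
# sub-blocks of first processing units 1, 2 and 3 described above.
assert second_subblock_indices(0, 16) == (4, 8, 12)
```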
Optionally, the determining of the first reference information of the first target image sub-block according to the motion information of the second target image sub-block includes:
if the motion information of the second target image sub-block is empty, determining that the first reference information is zero motion information.
Specifically, if no motion information can be filled for this sub-block after the above processing, zero motion information is used as the motion information of the sub-block. In the embodiments of the present invention, zero motion information can be constructed as follows. For example, in a predictive-coded picture (P frame), the prediction direction of zero motion information is unidirectional prediction, the reference picture index is 0, and the motion vector is (0, 0); in a bidirectionally predictive-coded picture (B frame), the prediction direction of zero motion information is bidirectional prediction, both reference picture indices are 0, and both motion vectors are (0, 0).
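For illustration, a minimal sketch of such zero motion information follows; the record type MotionInfo and its field names are not defined by the description and are chosen only for this example.

```python
from dataclasses import dataclass


@dataclass
class MotionInfo:
    prediction_direction: str   # 'uni' or 'bi'
    reference_indices: tuple    # one index per used reference picture list
    motion_vectors: tuple       # one (x, y) vector per used list


def zero_motion_info(is_b_frame):
    """Zero motion information: uni-directional with reference index 0 and vector
    (0, 0) for P frames; bi-directional with two zero indices/vectors for B frames."""
    if is_b_frame:
        return MotionInfo('bi', (0, 0), ((0, 0), (0, 0)))
    return MotionInfo('uni', (0,), ((0, 0),))
```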
It should be noted that, when the target image block includes multiple processing levels, the above use of zero motion information as the motion information of a sub-block may be carried out after the last level has been processed, or after any other level has been processed; the present invention does not particularly limit this. It should be understood that the methods of obtaining the motion information of a sub-block listed above are merely exemplary illustrations of the present invention, and the present invention is not limited thereto. For example, as described above, whether a corresponding sub-block includes motion information may also be determined according to the coding mode of the base layer image (the corresponding image block); for instance, if the base layer image uses an intra prediction coding mode, it can be determined that the corresponding sub-block does not contain motion information (that is, the motion information of the first base layer image sub-block is empty). If it is determined that only one sub-block among all sub-blocks of the target image block (more precisely, its corresponding sub-block) has motion information, the motion information of this sub-block can be used as the motion information of the other sub-blocks.
Thus, by the above method, the first reference information of the first target image sub-block can be determined.
In S240, the first target image sub-block can be decoded according to the obtained first reference information. For example, the first target image sub-block can be decoded (specifically, motion compensated) according to its first reference information (specifically, motion information); that is, motion compensation can be performed on the first target image sub-block according to the motion information filled for it as described above.
In the embodiments of the present invention, for a third target image sub-block in the target image block (whose corresponding sub-block in the base layer includes motion information), its motion information can be obtained and it can be decoded by a method the same as in the prior art; this processing can be the same as in the prior art, and its description is omitted here to avoid repetition.
Optionally, the decoding of the target image block according to the reference information includes:
performing deblocking filtering on the pixels located near the boundaries between the target image sub-blocks.
According to the method for image processing of the embodiments of the present invention, for a first target image sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information of the first target image sub-block is determined according to the motion information of the second target image sub-block, and coding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Above, the method for image processing according to the embodiments of the present invention has been described in detail with reference to Fig. 1 to Fig. 3. Below, the device for image processing according to the embodiments of the present invention will be described in detail with reference to Fig. 4 and Fig. 5.
Fig. 4 shows a schematic block diagram of a device 300 for image processing according to an embodiment of the present invention. As shown in Fig. 4, the device 300 includes:
an acquiring unit 310, configured to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block and second indication information for indicating the position of the first target image sub-block in the target image block;
and configured to determine, according to the motion information of the second target image sub-block, first reference information for coding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
a coding unit 320, configured to code the target image block, so as to generate a target bitstream and include first indication information in the target bitstream.
Optionally, the acquiring unit 310 is specifically configured to determine the second target image sub-block according to any one of the following formulas,
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
where Idx2 represents third indication information for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.
Optionally, the acquiring unit 310 is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
Optionally, the coding unit 320 is specifically configured to perform motion compensation on the first target image sub-block according to the first reference information.
Optionally, the coding unit 320 is further configured to perform deblocking filtering on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the coding unit 320 is specifically configured to entropy-code the first indication information so that the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the coding unit 320 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is coded using reference information;
and to entropy-code the first indication information according to the context.
The device 300 for image processing according to the embodiments of the present invention may correspond to the coding side in the methods of the embodiments of the present invention, and the above and other operations and/or functions of the respective modules of the device 300 are respectively intended to implement the corresponding flow of the method 100 in Fig. 1; for brevity, the details are not repeated here.
According to the device for image processing of the embodiments of the present invention, for a first target image sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the motion information of the second target image sub-block, and coding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Fig. 5 shows a schematic block diagram of a device 400 for image processing according to an embodiment of the present invention. As shown in Fig. 5, the device 400 includes:
a decoding unit 410, configured to obtain first indication information from a target bitstream;
an acquiring unit 420, configured to, when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block based on the first indication information obtained by the decoding unit, according to the size of the target image block, the size of each target image sub-block included in the target image block and second indication information for indicating the position of the first target image sub-block in the target image block;
and configured to determine, according to the motion information of the second target image sub-block, first reference information for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
the decoding unit 410 is further configured to decode the target bitstream to obtain the target image block.
Optionally, the acquiring unit 420 is specifically configured to determine the second target image sub-block according to any one of the following formulas,
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
where Idx2 represents third indication information for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.
Optionally, the acquiring unit 420 is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
Optionally, the decoding unit 410 is specifically configured to perform motion compensation on the first target image sub-block according to the first reference information.
Optionally, the decoding unit 410 is further configured to perform deblocking filtering on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the decoding unit 410 is specifically configured to obtain the first indication information from the target bitstream, where the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the decoding unit 410 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information;
and to perform entropy decoding according to the context to determine the first indication information.
The device 400 for image processing according to the embodiments of the present invention may correspond to the decoding side in the methods of the embodiments of the present invention, and the above and other operations and/or functions of the respective modules of the device 400 are respectively intended to implement the corresponding flow of the method 200 in Fig. 3; for brevity, the details are not repeated here.
According to the device for image processing of the embodiments of the present invention, for a first target image sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the motion information of the second target image sub-block, and coding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Above, the method and device for image processing according to the embodiments of the present invention have been described in detail with reference to Fig. 1 to Fig. 5. Below, the encoder and decoder for image processing according to the embodiments of the present invention will be described in detail with reference to Fig. 6 and Fig. 7.
Fig. 6 shows a schematic block diagram of an encoder 500 for image processing according to an embodiment of the present invention. As shown in Fig. 6, the encoder 500 may include:
a bus 510;
a processor 520 connected to the bus;
a memory 530 connected to the bus;
where the processor 520 calls, through the bus 510, a program stored in the memory 530, so as to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block and second indication information for indicating the position of the first target image sub-block in the target image block;
determine, according to the motion information of the second target image sub-block, first reference information for coding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
and code the target image block, so as to generate a target bitstream and include first indication information in the target bitstream.
Optionally, the processor 520 is specifically configured to determine the second target image sub-block according to any one of the following formulas,
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
where Idx2 represents third indication information for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.
Optionally, the processor 520 is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
Optionally, the processor 520 is specifically configured to perform motion compensation on the first target image sub-block according to the first reference information.
Optionally, the processor 520 is specifically configured to perform deblocking filtering on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the processor 520 is specifically configured to entropy-code the first indication information so that the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the processor 520 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is coded using reference information;
and to entropy-code the first indication information according to the context.
The encoder 500 for image processing according to the embodiments of the present invention may correspond to the coding side in the methods of the embodiments of the present invention, and the above and other operations and/or functions of the respective units of the encoder 500 are respectively intended to implement the corresponding flow of the method 100 in Fig. 1; for brevity, the details are not repeated here.
According to the encoder for image processing of the embodiments of the present invention, for a first target image sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the motion information of the second target image sub-block, and coding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Fig. 7 shows a schematic block diagram of a decoder 600 for image processing according to an embodiment of the present invention. As shown in Fig. 7, the decoder 600 may include:
a bus 610;
a processor 620 connected to the bus;
a memory 630 connected to the bus;
where the processor 620 calls, through the bus 610, a program stored in the memory 630, so as to obtain first indication information from a target bitstream;
when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block based on the first indication information, according to the size of the target image block, the size of each target image sub-block included in the target image block and second indication information for indicating the position of the first target image sub-block in the target image block;
determine, according to the motion information of the second target image sub-block, first reference information for decoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
and decode the target bitstream to obtain the target image block.
Optionally, the processor 620 is specifically configured to determine the second target image sub-block according to any one of the following formulas,
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 − Idx1%N/(N/2)) × 2 + (1 − Idx1%N/(N/4)%2)) × N/4;
where Idx2 represents third indication information for indicating the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-block.
Optionally, the processor 620 is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
Optionally, the processor 620 is specifically configured to perform motion compensation on the first target image sub-block according to the first reference information.
Optionally, the processor 620 is specifically configured to perform deblocking filtering on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the processor 620 is specifically configured to obtain the first indication information from the target bitstream, where the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the processor 620 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information;
and to perform entropy decoding according to the context to determine the first indication information.
The decoder 600 for image processing according to the embodiments of the present invention may correspond to the decoding side in the methods of the embodiments of the present invention, and the above and other operations and/or functions of the respective units of the decoder 600 are respectively intended to implement the corresponding flow of the method 200 in Fig. 3; for brevity, the details are not repeated here.
According to the decoder for image processing of the embodiments of the present invention, for a first target image sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the motion information of the second target image sub-block, and coding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Fig. 8 shows a schematic flowchart of a method 700 for image processing according to an embodiment of the present invention, described from the perspective of the coding side. As shown in Fig. 8, the method 700 includes:
S710, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining, according to the reconstructed pixels of the first base layer image sub-block, second reference information for coding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
S720, coding the target image block, so as to generate a target bitstream and include fourth indication information in the target bitstream.
Specifically, when an image is coded hierarchically, for example with spatial scalable coding, the image may be subjected to resolution reduction to obtain a low-resolution image, the original image being referred to by comparison as the high-resolution image, and the encoder codes the low-resolution image and the high-resolution image respectively. For convenience of description, the high-quality image to be coded is referred to herein as the enhancement layer image, and the corresponding low-quality image to be coded (for example, the low-resolution image) is referred to as the base layer image.
In the embodiments of the present invention, the target image is an image processed with a layered coding technique. The base layer refers to the layer of lower quality in the layered coding (quality here covering parameters such as frame rate, spatial resolution, temporal resolution, signal-to-noise ratio or quality level), and the enhancement layer refers to the layer of higher quality in the layered coding (covering the same parameters). It should be noted that, in the embodiments of the present invention, for a given enhancement layer, the base layer corresponding to it may be any layer whose quality is lower than that of the enhancement layer. For example, if there are currently five layers whose coding quality increases in turn (that is, the first layer has the lowest quality and the fifth layer the highest), and the enhancement layer is the fourth layer, then the base layer may be the first layer, the second layer or the third layer. Similarly, for a given base layer, the enhancement layer corresponding to it may be any layer whose quality is higher than that of the base layer.
The enhancement layer image is the image in the enhancement layer currently being processed, and the base layer image is the image in the base layer at the same moment as the enhancement layer image.
In summary, in the embodiments of the present invention, the quality of the base layer image is lower than the quality of the enhancement layer image.
The target image block is the image block being processed in the enhancement layer image.
The base layer image block is the image block in the base layer image that has a corresponding relationship in spatial position with the target image block.
In the embodiments of the present invention, the correspondence between an image block in the base layer and an image block in the enhancement layer can be calculated according to the resolution ratio between the base layer image and the enhancement layer image. For example, in a system with an x direction and a y direction, if the resolution of the enhancement layer image in the x direction and in the y direction is respectively twice that of the base layer image, then for an image block in the enhancement layer whose upper-left pixel coordinates are (2x, 2y) and whose size is (2m) × (2n), the corresponding block in the base layer image can be the image block whose upper-left pixel coordinates are (x, y) and whose size is m × n.
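The following sketch illustrates this mapping under the stated 2:1 resolution assumption; the function name and the generalization to an arbitrary integer ratio are assumptions of this example.

```python
def base_layer_corresponding_block(ex, ey, ew, eh, ratio_x=2, ratio_y=2):
    """Map an enhancement layer block (top-left pixel (ex, ey), size ew x eh) to
    its corresponding base layer block when the enhancement layer resolution is
    ratio_x / ratio_y times the base layer resolution in x / y."""
    bx, by = ex // ratio_x, ey // ratio_y
    bw, bh = ew // ratio_x, eh // ratio_y
    return bx, by, bw, bh


# Example from the description: a (2m) x (2n) block at (2x, 2y) in the enhancement
# layer corresponds to an m x n block at (x, y) in the base layer.
assert base_layer_corresponding_block(2 * 6, 2 * 4, 2 * 8, 2 * 8) == (6, 4, 8, 8)
```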
In the embodiments of the present invention, a sub-block mentioned hereinafter refers to a sub-block of the target image block (in the enhancement layer image block), and a corresponding sub-block mentioned hereinafter refers to the sub-block corresponding to that sub-block in the base layer image block.
In the embodiments of the present invention, motion information can include one or more of a prediction direction, a reference picture index and a motion vector. The prediction direction can be divided into unidirectional prediction and bidirectional prediction, and unidirectional prediction can further be divided into forward prediction and backward prediction: forward prediction means generating the prediction signal using the forward reference picture list, that is, the reference pictures in list 0; backward prediction means generating the prediction signal using the backward reference picture list, that is, the reference pictures in list 1; bidirectional prediction means generating the prediction signal using the reference pictures in list 0 and list 1 simultaneously. For unidirectional prediction, one reference picture index is needed to indicate the reference picture selected in list 0 or list 1; for bidirectional prediction, two reference picture indices are needed to indicate the reference pictures selected in list 0 and list 1 respectively. Each motion vector includes a horizontal component x and a vertical component y and can be denoted as (x, y); for unidirectional prediction, one motion vector is needed to indicate the displacement of the prediction signal in the selected list 0 or list 1 reference picture, and for bidirectional prediction, two motion vectors are needed to indicate the displacements of the forward prediction signal and the backward prediction signal in the selected list 0 reference picture and list 1 reference picture respectively.
In the embodiments of the present invention, the target image block is regarded as being composed of at least two sub-blocks (that is, target image sub-blocks), where the size of a sub-block can be determined according to a preset value; for convenience of description, the following takes a sub-block size of 4 × 4 as an example. For example, if the size of the target image block is 16 × 16, it can be determined that the target image block includes 16 sub-blocks (each of size 4 × 4). Thus, in the embodiments of the present invention, the corresponding sub-block in the base layer (belonging to the corresponding image block) of each sub-block of the target image block can be determined, and the motion information of this corresponding sub-block can be determined.
In the embodiments of the present invention, according to the coordinates of a certain pixel in the sub-block, denoted as "(Ex, Ey)", the coordinates of the corresponding position of this pixel in the base layer image, denoted as "(Bx, By)", can be determined, and the image block in the base layer that contains this corresponding position is taken as the corresponding sub-block. In the embodiments of the present invention, the motion vector (BMVx, BMVy) of the first base layer image sub-block can be scaled according to the above formula 1 to formula 10, and the scaled motion vector is used as the motion vector (EMVx, EMVy) of this sub-block (the first target image sub-block).
It should be noted here that if the base layer image and the enhancement layer image have the same resolution, the above scaling operations are not needed, and the motion information of the corresponding sub-block can be used directly as the motion information of the sub-block.
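As a simplified illustration of scaling a motion vector by the resolution ratio (formula 1 to formula 10 of the description are not reproduced here, so this stand-in only conveys the idea and is an assumption of this sketch):

```python
def scale_motion_vector(bmv, base_size, enh_size):
    """Scale one motion vector component from the base layer to the enhancement
    layer by the resolution ratio; a simplified stand-in for formulas 1 to 10."""
    if base_size == enh_size:
        return bmv          # same resolution: the vector is reused directly
    return round(bmv * enh_size / base_size)


# Example: with a 2:1 spatial ratio, a base layer vector component of 3 becomes 6.
assert scale_motion_vector(3, 960, 1920) == 6
```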
By the above method, the sub-blocks of the target image block that can obtain motion information from the corresponding sub-blocks included in the base layer image can be determined.
Thus, for a sub-block that can obtain motion information from the corresponding sub-block included in the base layer image, the motion information of its corresponding sub-block can be used as its motion information.
Optionally, in the embodiments of the present invention, the method further includes:
determining, according to the coding mode of the base layer image, whether the first base layer image sub-block corresponding to the first target image sub-block includes motion information.
Specifically, in the embodiments of the present invention, whether the corresponding sub-block includes motion information can be determined according to the coding mode of the base layer image (the corresponding image block). For example, if the base layer image uses an intra prediction coding mode, it can be determined that the corresponding sub-block does not contain motion information (that is, the motion information of the first base layer image sub-block is empty).
Thus, when the corresponding sub-block includes motion information, the corresponding sub-block can be determined and its motion information obtained as described above; when the corresponding sub-block does not include motion information, the above flow can be skipped.
For a sub-block that cannot obtain motion information from the corresponding sub-block included in the base layer image (that is, the first target image sub-block), its second reference information can be obtained by the following method 3.
Method 3
Specifically, the reconstructed pixels of the sub-block in the base layer image corresponding to the first target image sub-block can be obtained and up-sampled, and the prediction signal of the first target image sub-block is generated as the reference information.
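Purely as an illustration of this step, the following sketch builds the prediction block with nearest-neighbour up-sampling; the description does not prescribe a particular up-sampling filter, so the 2× nearest-neighbour filter and all names here are assumptions.

```python
def upsampled_prediction(base_reco, ratio=2):
    """Build an enhancement layer prediction block by up-sampling the reconstructed
    pixels of the corresponding base layer sub-block (nearest-neighbour here; a
    real codec would typically use an interpolation filter)."""
    return [
        [base_reco[y // ratio][x // ratio]
         for x in range(len(base_reco[0]) * ratio)]
        for y in range(len(base_reco) * ratio)
    ]


# A 2x2 reconstructed base layer patch becomes a 4x4 prediction block.
assert upsampled_prediction([[10, 20], [30, 40]])[0] == [10, 10, 20, 20]
```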
Optionally, the coding of the target image block according to the reference information includes:
performing motion compensation on the first target image sub-block according to the reference information.
Specifically, the first target image sub-block can be coded (specifically, predictively coded) according to its reference information (specifically, this prediction signal). In particular, the reconstructed pixels, after suitable up-sampling, serve as the prediction signal of the current sub-block. After the prediction signal of the current block is obtained, the current block can be predictively coded, and the rate-distortion cost can then be calculated.
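For orientation only, a rate-distortion cost is commonly computed as J = D + λ·R; the following one-line sketch uses generic symbol names that are not taken from the description.

```python
def rd_cost(distortion, rate_bits, lagrange_multiplier):
    """Generic rate-distortion cost J = D + lambda * R used to compare coding choices."""
    return distortion + lagrange_multiplier * rate_bits
```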
Optionally, the coding of the target image block according to the reference information includes:
performing deblocking filtering on the pixels located near the boundaries between the target image sub-blocks.
In S230, the second reference information of the first target image block (whose corresponding sub-block in the base layer does not include motion information) is obtained by the above method 3, and the motion information of the third target image block (whose corresponding sub-block in the base layer includes motion information) is obtained; the method of obtaining the motion information of the third target image block may be the same as in the prior art, and its description is omitted here. Thus, after the rate-distortion cost of the target image block is calculated, if this rate-distortion cost is the smallest, a flag (the fourth indication information) can be determined so as to indicate to the decoding side that the reference information of the first target image block is to be obtained by the above method 1 and method 2, and that the motion information of the third target image block (whose corresponding sub-block in the base layer includes motion information) is to be obtained. The fourth indication information is then entropy-coded.
Optionally, the coding of the first target image sub-block according to the reference information includes:
entropy-coding the first indication information so that the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Specifically, in the embodiments of the present invention, the first indication information can be placed, in the target bitstream, at a position adjacent to the skip mode flag information. For example, the first indication information can be placed before the skip mode flag information as the first piece of information of the target image block in the target bitstream, or placed after the skip mode flag information as the second piece of information of the target image block in the target bitstream. The above skip mode can be the same mode as in the prior art, and its determination method and placement position can be the same as in the prior art; their description is omitted here to avoid repetition.
As another example, the first indication information can be placed at a position adjacent to the MERGE mode flag information; specifically, it can be placed before the MERGE mode flag information or after the MERGE mode flag information. The MERGE mode can be the same mode as in the prior art, and its determination method and placement position can be the same as in the prior art; their description is omitted here to avoid repetition.
In the embodiments of the present invention, the first indication information can be a binary flag, so when the first indication information is entropy-coded, binarization of the first indication information is not needed.
Thereafter, the context to be used when entropy-coding the (binary) first indication information can be selected.
Optionally, the coding of the first target image sub-block according to the reference information includes:
determining a context according to whether a reference image block located at a preset position in the enhancement layer image is coded using reference information;
and entropy-coding the first indication information according to the context.
Specifically, as shown in Table 1 above, the context may comprise three contexts, 0, 1 and 2. In this embodiment, which context is used is determined according to whether the image blocks on the left and above use the base layer mode. For example, which context to use can be determined according to whether the image blocks on the left of and above the target image block use their respective first indication information: if neither the image block on the left of nor the one above the target image block uses the first indication information, the context model with index 0 is selected; if one of them uses the first indication information, the context model with index 1 is selected; and if both of them use the first indication information, the context model with index 2 is selected.
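A minimal sketch of this selection rule follows; the function name and the boolean inputs are assumptions of this example.

```python
def select_context(left_uses_flag, above_uses_flag):
    """Select one of the three context models (0, 1, 2) for the first indication
    information, based on whether the left and above image blocks use their
    respective first indication information."""
    return int(bool(left_uses_flag)) + int(bool(above_uses_flag))


assert select_context(False, False) == 0  # neither neighbour uses the flag
assert select_context(True, False) == 1   # exactly one neighbour uses it
assert select_context(True, True) == 2    # both neighbours use it
```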
Thereafter, binary arithmetic coding can be performed on the first indication information according to the context selected as described above, and the used context model is updated. In the embodiments of the present invention, this process can be the same as in the prior art, and its description is omitted here to avoid repetition.
In the embodiments of the present invention, each first target image sub-block can be coded according to the obtained reference information, thereby completing the coding of the target image block, and the generated information of the target image block and the entropy-coded first indication information are added to the bitstream (the target bitstream).
It should be noted here that the target bitstream can include the information of the coded target image (including the base layer image and the enhancement layer image), and this processing can be the same as in the prior art; its description is omitted here to avoid repetition.
In the embodiments of the present invention, it can be stipulated that, for example, when the fourth indication information is 1, the decoding side needs to obtain the second reference information of the first target image sub-block by the same method as the coding side.
It should be understood that the indication manner of the fourth indication information listed above is merely exemplary, and the present invention is not limited thereto.
Therefore, the decoding side can determine, according to the fourth indication information, whether the second reference information of the first target image sub-block needs to be obtained; below, the case where the second reference information of the first target image sub-block needs to be obtained is described.
At the decoding side, the target image information can be obtained from the bitstream and the target image (specifically, the target image block) determined, and entropy decoding is performed on the obtained fourth indication information (the information after entropy coding); in this process, the context selection and the context updating are the same as or similar to the processing at the coding side described above, and their description is omitted here. The binary symbol string (bin string) representing the fourth indication information can be parsed from the bitstream according to the selected context, where this binary arithmetic decoding process corresponds to the binary arithmetic coding process at the coding side.
It should be noted that, in the embodiments of the present invention, when method 3 is used to obtain the second reference information of the first target image sub-block, the reconstructed pixels of the sub-block in the base layer image corresponding to the first target image sub-block can be obtained and up-sampled, and the prediction signal of the first target image sub-block is generated as the reference information. The first target image sub-block can then be coded (specifically, predictively coded) according to its reference information (specifically, this prediction signal). In particular, the reconstructed pixels, after up-sampling, serve as the prediction signal of the current sub-block. After the prediction signal of the current block is obtained, motion compensation can be performed on the current block, and the residual signal obtained by decoding can additionally be superimposed to obtain the reconstructed signal.
Moreover, the pixels near the boundaries between the sub-blocks of the target image block can also be filtered.
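A minimal sketch of this superposition, assuming row-major lists of samples and an 8-bit sample range (both assumptions of this example):

```python
def reconstruct_block(prediction, residual, bit_depth=8):
    """Superimpose the decoded residual on the prediction signal and clip to the
    valid sample range to obtain the reconstructed block (illustrative only)."""
    max_val = (1 << bit_depth) - 1
    return [
        [min(max(p + r, 0), max_val) for p, r in zip(pred_row, res_row)]
        for pred_row, res_row in zip(prediction, residual)
    ]
```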
According to the method for image processing of the embodiments of the present invention, for a first target image sub-block, in a target image block of an enhancement layer image, that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the reconstructed pixels of the first base layer image sub-block corresponding to it in spatial position, and coding is performed according to this reference information, so that the coding performance of the first target image sub-block can be improved.
Fig. 9 shows a schematic flowchart of a method 800 for image processing according to an embodiment of the present invention, described from the perspective of the decoding side. As shown in Fig. 9, the method 800 includes:
S210, obtaining fourth indication information from a target bitstream;
S220, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining, based on the fourth indication information and according to the reconstructed pixels of the first base layer image sub-block, second reference information for coding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
S230, decoding the target bitstream to obtain the target image block.
Specifically, in S210, the decoding side can obtain the target image information from the bitstream, determine the target image (specifically, the target image block), and obtain the first indication information (the information after entropy coding).
Optionally, the obtaining of the first indication information from the target bitstream includes:
obtaining the first indication information from the target bitstream, where the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Specifically, in the embodiments of the present invention, the first indication information can be placed, in the target bitstream, at a position adjacent to the skip mode flag information. For example, the first indication information can be placed before the skip mode flag information as the first piece of information of the target image block in the target bitstream, or placed after the skip mode flag information as the second piece of information of the target image block in the target bitstream. The above skip mode can be the same mode as in the prior art, and its determination method and placement position can be the same as in the prior art; their description is omitted here to avoid repetition.
As another example, the first indication information can be placed at a position adjacent to the MERGE mode flag information; specifically, it can be placed before the MERGE mode flag information or after the MERGE mode flag information. The MERGE mode can be the same mode as in the prior art, and its determination method and placement position can be the same as in the prior art; their description is omitted here to avoid repetition.
Thereafter, the decoding side can perform entropy decoding on the obtained first indication information.
In the embodiments of the present invention, the first indication information can be a binary flag, so when the first indication information is entropy-decoded, it does not need to be binarized.
Thereafter, the context to be used when entropy decoding the first indication information can be selected.
Optionally, the obtaining of the first indication information from the target bitstream includes:
determining a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information;
and performing entropy decoding according to the context to determine the first indication information.
Specifically, as shown in Table 1 above, the context may comprise three contexts, 0, 1 and 2. In this embodiment, which context is used is determined according to whether the image blocks on the left and above use the base layer mode. For example, which context to use can be determined according to whether the image blocks on the left of and above the target image block use their respective first indication information: if neither the image block on the left of nor the one above the target image block uses the first indication information, the context model with index 0 is selected; if one of them uses the first indication information, the context model with index 1 is selected; and if both of them use the first indication information, the context model with index 2 is selected.
Thereafter, binary arithmetic decoding can be performed on the first indication information according to the context selected as described above, and the used context model is updated. In the embodiments of the present invention, this process can be the same as in the prior art, and its description is omitted here to avoid repetition.
In the embodiments of the present invention, it can be stipulated that, for example, when the first indication information is 1, the decoding side needs to obtain the first reference information of the first target image sub-block by the same method as the coding side.
It should be understood that the indication manner of the first indication information listed above is merely exemplary, and the present invention is not limited thereto.
Therefore, the decoding side can determine, according to the first indication information, whether the first reference information of the first target image sub-block needs to be obtained; below, the case where the first reference information of the first target image sub-block needs to be obtained is described.
In the embodiments of the present invention, when an image is coded hierarchically, for example with spatial scalable coding, the image may be subjected to resolution reduction to obtain a low-resolution image, the original image being referred to by comparison as the high-resolution image, and the encoder codes the low-resolution image and the high-resolution image respectively. For convenience of description, the high-quality image to be coded is referred to herein as the enhancement layer image, and the corresponding low-quality image to be coded (for example, the low-resolution image) is referred to as the base layer image.
In the embodiments of the present invention, the target image is an image processed with a layered coding technique. The base layer refers to the layer of lower quality in the layered coding (quality here covering parameters such as frame rate, spatial resolution, temporal resolution, signal-to-noise ratio or quality level), and the enhancement layer refers to the layer of higher quality in the layered coding (covering the same parameters). It should be noted that, in the embodiments of the present invention, for a given enhancement layer, the base layer corresponding to it may be any layer whose quality is lower than that of the enhancement layer. For example, if there are currently five layers whose coding quality increases in turn (that is, the first layer has the lowest quality and the fifth layer the highest), and the enhancement layer is the fourth layer, then the base layer may be the first layer, the second layer or the third layer. Similarly, for a given base layer, the enhancement layer corresponding to it may be any layer whose quality is higher than that of the base layer.
The enhancement layer image is the image in the enhancement layer currently being processed, and the base layer image is the image in the base layer at the same moment as the enhancement layer image.
In summary, in the embodiments of the present invention, the quality of the base layer image is lower than the quality of the enhancement layer image.
The target image block is the image block being processed in the enhancement layer image.
The base layer image block is the image block in the base layer image that has a corresponding relationship in spatial position with the target image block.
In the embodiments of the present invention, the correspondence between an image block in the base layer and an image block in the enhancement layer can be calculated according to the resolution ratio between the base layer image and the enhancement layer image. For example, in a system with an x direction and a y direction, if the resolution of the enhancement layer image in the x direction and in the y direction is respectively twice that of the base layer image, then for an image block in the enhancement layer whose upper-left pixel coordinates are (2x, 2y) and whose size is (2m) × (2n), the corresponding block in the base layer image can be the image block whose upper-left pixel coordinates are (x, y) and whose size is m × n.
In the embodiments of the present invention, a sub-block mentioned hereinafter refers to a sub-block of the target image block (in the enhancement layer image block), and a corresponding sub-block mentioned hereinafter refers to the sub-block corresponding to that sub-block in the base layer image block.
In the embodiments of the present invention, motion information can include one or more of a prediction direction, a reference picture index and a motion vector. The prediction direction can be divided into unidirectional prediction and bidirectional prediction, and unidirectional prediction can further be divided into forward prediction and backward prediction: forward prediction means generating the prediction signal using the forward reference picture list, that is, the reference pictures in list 0; backward prediction means generating the prediction signal using the backward reference picture list, that is, the reference pictures in list 1; bidirectional prediction means generating the prediction signal using the reference pictures in list 0 and list 1 simultaneously. For unidirectional prediction, one reference picture index is needed to indicate the reference picture selected in list 0 or list 1; for bidirectional prediction, two reference picture indices are needed to indicate the reference pictures selected in list 0 and list 1 respectively. Each motion vector includes a horizontal component x and a vertical component y and can be denoted as (x, y); for unidirectional prediction, one motion vector is needed to indicate the displacement of the prediction signal in the selected list 0 or list 1 reference picture, and for bidirectional prediction, two motion vectors are needed to indicate the displacements of the forward prediction signal and the backward prediction signal in the selected list 0 reference picture and list 1 reference picture respectively.
In the embodiments of the present invention, the target image block is regarded as being composed of at least two sub-blocks (that is, target image sub-blocks), where the size of a sub-block can be determined according to a preset value. For ease of description, the following takes a sub-block size of 4 × 4 as an example. For instance, if the size of the target image block is 16 × 16, it can be determined that the target image block includes 16 sub-blocks (each of size 4 × 4). Thus, in the embodiments of the present invention, the corresponding sub-block (first base layer image sub-block) in the base layer can be determined for each sub-block of the target image block (first target image sub-block), and the motion information of that corresponding sub-block can be determined.
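A short sketch of this partitioning, assuming the 4 × 4 sub-block size used in the example (the function name is hypothetical):

```python
def sub_block_origins(block_w, block_h, sub=4):
    """Enumerate the top-left offsets of the sub-blocks inside a target image block."""
    return [(x, y) for y in range(0, block_h, sub)
                   for x in range(0, block_w, sub)]

origins = sub_block_origins(16, 16)   # a 16x16 block split into 4x4 sub-blocks
print(len(origins))                   # -> 16 sub-blocks, as in the example above
```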
In the embodiments of the present invention, according to the coordinate of a certain pixel in a sub-block (first target image sub-block), denoted (Ex, Ey), the coordinate of the corresponding position of that pixel in the base layer image, denoted (Bx, By), can be determined, and the image block in the base layer that contains the corresponding position coordinate is taken as the corresponding sub-block (first base layer image sub-block).
In the embodiments of the present invention, the motion vector (EMVx, EMVy) of the first target image sub-block can be calculated according to Equations 1 to 10.
It should be noted here that if the base layer image and the enhancement layer image have the same resolution, the above scaling operation is not needed, and the motion information of the corresponding sub-block can be used directly as the motion information of the sub-block.
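Equations 1 to 10 are not reproduced in this passage, so the sketch below only illustrates the general kind of position mapping and motion-vector scaling involved for a 2x spatial ratio; the function names, the 4x4 base-layer motion grid, and the simple scaling rule are assumptions made for illustration, not the patent's formulas.

```python
def map_pixel_to_base(ex, ey, scale_x, scale_y):
    # (Ex, Ey) in the enhancement layer -> (Bx, By) in the base layer
    return ex // scale_x, ey // scale_y

def scale_mv_to_enhancement(bmv, scale_x, scale_y):
    # Generic scaling of a base-layer motion vector to enhancement-layer units.
    bmv_x, bmv_y = bmv
    return bmv_x * scale_x, bmv_y * scale_y

def derive_sub_block_mv(ex, ey, base_mv_field, scale_x=2, scale_y=2):
    """Derive (EMVx, EMVy) for the sub-block containing (Ex, Ey).
    base_mv_field maps a base-layer 4x4 grid position to a motion vector, or None."""
    bx, by = map_pixel_to_base(ex, ey, scale_x, scale_y)
    bmv = base_mv_field.get((bx // 4, by // 4))
    if bmv is None:
        return None                   # corresponding sub-block has no motion information
    if scale_x == 1 and scale_y == 1:
        return bmv                    # identical resolution: reuse the motion vector directly
    return scale_mv_to_enhancement(bmv, scale_x, scale_y)
```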
By the above method, the sub-blocks in the target image block whose corresponding sub-blocks in the base layer image can provide motion information can be determined.
Thus, for a sub-block that can obtain motion information from the corresponding sub-block included in the base layer image, the motion information of its corresponding sub-block can be used as the motion information of that sub-block.
Optionally, in the embodiments of the present invention, the method further includes:
determining, according to the coding mode of the base layer image, whether the first base layer image sub-block corresponding to the first target image sub-block includes motion information.
Specifically, in the embodiments of the present invention, whether the corresponding sub-block includes motion information can be determined according to the coding mode of the base layer image (or of the corresponding image block). For example, if the base layer image uses an intra prediction coding mode, it can be determined that the corresponding sub-block does not contain motion information (that is, the motion information of the first base layer image sub-block is empty).
Thus, when the corresponding sub-block includes motion information, the corresponding sub-block can be determined and its motion information obtained as described above; when the corresponding sub-block does not include motion information, the second reference information of the first target image sub-block can be obtained by Method 4 below.
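A compact sketch of this decision, with the mode label 'INTRA' and the function name chosen only for illustration:

```python
def reference_info_source(base_sub_block_mode):
    """Return which method supplies reference information for a target sub-block,
    given the coding mode of its corresponding base-layer sub-block."""
    if base_sub_block_mode == 'INTRA':
        # Intra coding in the base layer means the motion information is empty,
        # so the reconstructed-pixel path (Method 4 below) is used.
        return 'method_4'
    return 'method_3'   # motion information is available and can be reused
```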
Method 4
Specifically, the reconstructed pixels of the sub-block in the base layer image that corresponds to the first target image sub-block can be obtained, upsampling processing can be performed on the reconstructed pixels, and the prediction signal of the first target image sub-block generated in this way is used as the reference information.
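A minimal sketch of this step, assuming a 2x spatial ratio and simple pixel replication; the actual upsampling filter is not specified in this passage, so the interpolation used here is only illustrative.

```python
def upsample_nearest(block, scale=2):
    """Replicate each reconstructed base-layer pixel scale x scale times to form a
    prediction signal at enhancement-layer resolution (illustrative filter only)."""
    return [[row[x // scale] for x in range(len(row) * scale)]
            for row in block for _ in range(scale)]

base_recon = [[10, 20],
              [30, 40]]               # reconstructed pixels of the corresponding sub-block
prediction = upsample_nearest(base_recon)
# prediction ->
# [[10, 10, 20, 20],
#  [10, 10, 20, 20],
#  [30, 30, 40, 40],
#  [30, 30, 40, 40]]
```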
Optionally, the decoding of the target image block according to the reference information includes:
performing motion compensation processing on the first target image sub-block according to the reference information.
In S230, the first target image sub-block can be decoded according to the obtained reference information. When the reference information is obtained by Method 3, the first target image sub-block can be decoded (specifically, motion compensation processing can be performed) according to the reference information of the first target image sub-block (specifically, the motion information). In particular, motion compensation processing can be performed on the first target image sub-block according to the motion information filled in for the first target image sub-block as described above.
When the reference information is obtained by Method 4, the reconstructed pixels of the sub-block in the base layer image that corresponds to the first target image sub-block can be obtained, upsampling processing can be performed on the reconstructed pixels, and the prediction signal of the first target image sub-block generated in this way is used as the reference information. The first target image sub-block can then be processed (specifically, prediction processing is performed) according to its reference information (specifically, this prediction signal). In particular, the upsampled reconstructed pixels serve as the prediction signal of the current sub-block. After the prediction signal of the current block is obtained, motion compensation processing can be performed on the current block, and the separately decoded residual signal can also be superimposed to obtain the reconstructed signal.
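The final step of superimposing the separately decoded residual on the prediction signal can be sketched as follows; the 8-bit clipping range is an assumption, not something stated in this passage.

```python
def reconstruct(prediction, residual, bit_depth=8):
    """Add the decoded residual to the prediction signal and clip to the sample range."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val)
             for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(prediction, residual)]
```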
In the embodiments of the present invention, for a third target image sub-block in the target image block (a sub-block whose corresponding sub-block in the base layer includes motion information), its motion information can be obtained by the same method as in the prior art and it can be decoded accordingly; this processing can be the same as in the prior art and, to avoid repetition, is not described here.
Optionally, the decoding of the target image block according to the reference information includes:
performing deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
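For illustration only, the following toy sketch smooths the two samples on either side of a vertical sub-block boundary; it is a simplified stand-in for deblocking filtering, not the filter mandated by any codec standard.

```python
def deblock_vertical_boundary(frame, boundary_x):
    """Lightly smooth pixels adjacent to a vertical sub-block boundary (toy filter)."""
    for row in frame:
        p, q = row[boundary_x - 1], row[boundary_x]   # samples left/right of the boundary
        delta = (q - p) // 4                          # small correction toward each other
        row[boundary_x - 1] = p + delta
        row[boundary_x] = q - delta
    return frame
```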
According to the method for image processing of the embodiments of the present invention, for a first target image sub-block in the target image block of the enhancement layer image that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the reconstructed pixels of the first base layer image sub-block that spatially corresponds to the first target image sub-block, and coding processing is performed according to this reference information, so that the coding efficiency of the first target image sub-block can be improved.
Above, the method for image processing according to the embodiments of the present invention has been described in detail with reference to Fig. 8 and Fig. 9. Below, the device for image processing according to the embodiments of the present invention is described in detail with reference to Figure 10 and Figure 11.
Figure 10 shows a schematic block diagram of a device 900 for image processing according to an embodiment of the present invention. As shown in Figure 10, the device 900 includes:
an acquiring unit 910, configured to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, according to the reconstructed pixels of the first base layer image sub-block, second reference information for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
a coding unit 920, configured to encode the target image block, so as to generate a target bitstream and fourth indication information contained in the target bitstream.
Optionally, the coding unit 920 is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the coding unit 920 is specifically configured to perform entropy encoding on the fourth indication information, so that the fourth indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the coding unit 920 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and
to perform entropy encoding on the fourth indication information according to the context.
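One way such a context could be derived is sketched below; the choice of the left and above neighbours as the preset positions is an assumption made only for illustration.

```python
def indication_context(left_uses_reference, above_uses_reference):
    """Derive a context index for entropy coding the indication information from whether
    neighbouring enhancement-layer blocks at preset positions used reference information."""
    return int(bool(left_uses_reference)) + int(bool(above_uses_reference))  # 0, 1 or 2
```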
The device 900 for image processing according to the embodiments of the present invention may correspond to the encoder side in the method of the embodiments of the present invention, and the above-mentioned units or modules and the other operations and/or functions of the device 900 for image processing are respectively intended to implement the corresponding procedures of the method 700 in Fig. 8; for brevity, they are not described here again.
According to the device for image processing of the embodiments of the present invention, for a first target image sub-block in the target image block of the enhancement layer image that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the reconstructed pixels of the first base layer image sub-block that spatially corresponds to the first target image sub-block, and coding processing is performed according to this reference information, so that the coding efficiency of the first target image sub-block can be improved.
Figure 11 shows a schematic block diagram of a device 1000 for image processing according to an embodiment of the present invention. As shown in Figure 11, the device 1000 includes:
a decoding unit 1010, configured to obtain fourth indication information from a target bitstream; and
an acquiring unit 1020, configured to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the fourth indication information obtained by the decoding unit and according to the reconstructed pixels of the first base layer image sub-block, second reference information for coding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image.
The decoding unit 1010 is further configured to decode the target bitstream to obtain the target image block.
Optionally, the decoding unit 1010 is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the decoding unit 1010 is specifically configured to obtain the fourth indication information from the target bitstream, where the fourth indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the decoding unit 1010 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and
to perform entropy decoding according to the context, so as to determine the fourth indication information.
The device 1000 for image processing according to the embodiments of the present invention may correspond to the decoder side in the method of the embodiments of the present invention, and the units or modules and the other operations and/or functions of the device 1000 for image processing are respectively intended to implement the corresponding procedures of the method 800 in Fig. 9; for brevity, they are not described here again.
According to the device for image processing of the embodiments of the present invention, for a first target image sub-block in the target image block of the enhancement layer image that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the reconstructed pixels of the first base layer image sub-block that spatially corresponds to the first target image sub-block, and coding processing is performed according to this reference information, so that the coding efficiency of the first target image sub-block can be improved.
Above, the method and device for image processing according to the embodiments of the present invention have been described in detail with reference to Fig. 8 to Figure 11. Below, the encoder and decoder for image processing according to the embodiments of the present invention are described in detail with reference to Figure 12 and Figure 13.
Figure 12 shows a schematic block diagram of an encoder 1100 for image processing according to an embodiment of the present invention. As shown in Figure 12, the encoder 1100 may include:
a bus 1110;
a processor 1120 connected to the bus;
a memory 1130 connected to the bus;
where the processor 1120 calls, through the bus 1110, a program stored in the memory 1130, so as to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, according to the reconstructed pixels of the first base layer image sub-block, second reference information for encoding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
to encode the target image block, so as to generate a target bitstream and fourth indication information contained in the target bitstream.
Optionally, the processor 1120 is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the processor 1120 is specifically configured to perform entropy encoding on the fourth indication information, so that the fourth indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the processor 1120 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and
to perform entropy encoding on the fourth indication information according to the context.
The encoder 1100 for image processing according to the embodiments of the present invention may correspond to the encoder side in the method of the embodiments of the present invention, and the units or modules and the other operations and/or functions of the encoder 1100 for image processing are respectively intended to implement the corresponding procedures of the method 700 in Fig. 8; for brevity, they are not described here again.
According to the encoder for image processing of the embodiments of the present invention, for a first target image sub-block in the target image block of the enhancement layer image that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the reconstructed pixels of the first base layer image sub-block that spatially corresponds to the first target image sub-block, and coding processing is performed according to this reference information, so that the coding efficiency of the first target image sub-block can be improved.
Figure 13 shows a schematic block diagram of a decoder 1200 for image processing according to an embodiment of the present invention. As shown in Figure 13, the decoder 1200 may include:
a bus 1210;
a processor 1220 connected to the bus;
a memory 1230 connected to the bus;
where the processor 1220 calls, through the bus 1210, a program stored in the memory 1230, so as to obtain fourth indication information from a target bitstream;
to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the fourth indication information and according to the reconstructed pixels of the first base layer image sub-block, second reference information for coding the first target image sub-block, where the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
to decode the target bitstream to obtain the target image block.
Optionally, the processor 1220 is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
Optionally, the processor 1220 is specifically configured to obtain the fourth indication information from the target bitstream, where the fourth indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
Optionally, the processor 1220 is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and
to perform entropy decoding according to the context, so as to determine the fourth indication information.
The decoder 1200 for image processing according to the embodiments of the present invention may correspond to the decoder side in the method of the embodiments of the present invention, and the units or modules and the other operations and/or functions of the decoder 1200 for image processing are respectively intended to implement the corresponding procedures of the method 800 in Fig. 9; for brevity, they are not described here again.
According to the decoder for image processing of the embodiments of the present invention, for a first target image sub-block in the target image block of the enhancement layer image that cannot obtain motion information from the corresponding sub-block included in the base layer image, a second target image sub-block is determined according to the position of the first target image sub-block, the reference information for the first target image sub-block is determined according to the reconstructed pixels of the first base layer image sub-block that spatially corresponds to the first target image sub-block, and coding processing is performed according to this reference information, so that the coding efficiency of the first target image sub-block can be improved.
It should be noted that, in order for the reference information obtained at the encoder side and the decoder side to be consistent, the methods used by the encoder side and the decoder side for obtaining the reference information must be consistent; that is, if the encoder side obtains the reference information (the first reference information) using Method 1, the decoder side must obtain the reference information (the first reference information) using Method 2 (which corresponds to Method 1). In other words, the decoder-side processing method can be determined correspondingly from the encoder-side processing method, or the encoder-side processing method can be determined correspondingly from the decoder-side processing method.
It should be understood that the term "and/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that, in the various embodiments of the present invention, the magnitudes of the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described here again.
It should be understood that the systems, devices, and methods disclosed in the several embodiments provided in this application may be implemented in other ways. For example, the device embodiments described above are merely schematic; for instance, the division of the units is only a division of logical functions, and there may be other ways of dividing them in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (42)
1. A method for image processing, characterized in that the method comprises:
when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block;
determining, according to the motion information of the second target image sub-block, first reference information for encoding the first target image sub-block, wherein the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
encoding the target image block, so as to generate a target bitstream and first indication information contained in the target bitstream.
2. The method according to claim 1, characterized in that the determining a second target image sub-block according to the size of the target image block, the size of the target image sub-blocks included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block comprises:
determining the second target image sub-block according to any one of the following formulas:
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
wherein Idx2 represents third indication information used to indicate the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-blocks.
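As a non-limiting illustration, the first of the above formulas can be evaluated with integer (truncating) division and left-to-right precedence of % and /; the helper name and the example value N = 4 (for example a 16 × 16 block of 4 × 4 sub-blocks) are assumptions made only to make the integer arithmetic explicit.

```python
def idx2_first_formula(idx1, n):
    # Evaluated with integer division; "x N/4" is taken as multiplication by N/4.
    return (idx1 // n) * n + ((idx1 % n // (n // 2)) * 2
                              + (1 - idx1 % n // (n // 4) % 2)) * (n // 4)

# Example with N = 4: sub-block index 5 (row 1, column 1) maps to index 4,
# and index 6 (row 1, column 2) maps to index 7.
print(idx2_first_formula(5, 4))   # -> 4
print(idx2_first_formula(6, 4))   # -> 7
```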
3. The method according to claim 1, characterized in that the determining, according to the motion information of the second target image sub-block, first reference information for encoding the first target image sub-block comprises:
if the motion information of the second target image sub-block is empty, determining that the first reference information is zero motion information.
4. The method according to claim 1, characterized in that the encoding of the target image block comprises:
performing motion compensation processing on the first target image sub-block according to the first reference information.
5. The method according to claim 1, characterized in that the encoding of the target image block comprises:
performing deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
6. The method according to claim 1, characterized in that the encoding of the target image block comprises:
performing entropy encoding on the first indication information, so that the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
7. The method according to claim 1, characterized in that the encoding of the target image block comprises:
determining a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and
performing entropy encoding on the first indication information according to the context.
8. A method for image processing, characterized in that the method comprises:
obtaining first indication information from a target bitstream;
when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determining, based on the first indication information, a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block;
determining, according to the motion information of the second target image sub-block, first reference information for decoding the first target image sub-block, wherein the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
decoding the target bitstream to obtain the target image block.
9. The method according to claim 8, characterized in that the determining a second target image sub-block according to the size of the target image block, the size of the target image sub-blocks included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block comprises:
determining the second target image sub-block according to any one of the following formulas:
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
wherein Idx2 represents third indication information used to indicate the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-blocks.
10. The method according to claim 8, characterized in that the determining, according to the motion information of the second target image sub-block, first reference information for encoding the first target image sub-block comprises:
if the motion information of the second target image sub-block is empty, determining that the first reference information is zero motion information.
11. The method according to claim 8, characterized in that the decoding of the target bitstream comprises:
performing motion compensation processing on the first target image sub-block according to the first reference information.
12. The method according to claim 8, characterized in that the decoding of the target bitstream comprises:
performing deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
13. The method according to claim 8, characterized in that the obtaining first indication information from a target bitstream comprises:
obtaining the first indication information from the target bitstream, wherein the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
14. The method according to claim 8, characterized in that the obtaining first indication information from a target bitstream comprises:
determining a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and
performing entropy decoding according to the context, so as to determine the first indication information.
15. A device for image processing, characterized in that the device comprises:
an acquiring unit, configured to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block; and
configured to determine, according to the motion information of the second target image sub-block, first reference information for encoding the first target image sub-block, wherein the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
a coding unit, configured to encode the target image block, so as to generate a target bitstream and first indication information contained in the target bitstream.
16. The device according to claim 15, characterized in that the acquiring unit is specifically configured to determine the second target image sub-block according to any one of the following formulas:
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
wherein Idx2 represents third indication information used to indicate the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-blocks.
17. The device according to claim 15, characterized in that the acquiring unit is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
18. The device according to claim 15, characterized in that the coding unit is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.
19. The device according to claim 15, characterized in that the coding unit is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
20. The device according to claim 15, characterized in that the coding unit is specifically configured to perform entropy encoding on the first indication information, so that the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
21. The device according to claim 15, characterized in that the coding unit is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and
configured to perform entropy encoding on the first indication information according to the context.
22. A device for image processing, characterized in that the device comprises:
a decoding unit, configured to obtain first indication information from a target bitstream;
an acquiring unit, configured to, when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the first indication information obtained by the decoding unit, a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block; and
configured to determine, according to the motion information of the second target image sub-block, first reference information for decoding the first target image sub-block, wherein the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image;
wherein the decoding unit is further configured to decode the target bitstream to obtain the target image block.
23. The device according to claim 22, characterized in that the acquiring unit is specifically configured to determine the second target image sub-block according to any one of the following formulas:
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
wherein Idx2 represents third indication information used to indicate the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-blocks.
24. The device according to claim 22, characterized in that the acquiring unit is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
25. The device according to claim 22, characterized in that the decoding unit is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.
26. The device according to claim 22, characterized in that the decoding unit is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
27. The device according to claim 22, characterized in that the decoding unit is specifically configured to obtain the first indication information from the target bitstream, wherein the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
28. The device according to claim 22, characterized in that the decoding unit is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and
configured to perform entropy decoding according to the context, so as to determine the first indication information.
29. An encoder for image processing, characterized in that the encoder comprises:
a bus;
a processor connected to the bus; and
a memory connected to the bus;
wherein the processor calls, through the bus, a program stored in the memory, so as to, when it is determined that the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block;
to determine, according to the motion information of the second target image sub-block, first reference information for encoding the first target image sub-block, wherein the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
to encode the target image block, so as to generate a target bitstream and first indication information contained in the target bitstream.
30. The encoder according to claim 29, characterized in that the processor is specifically configured to determine the second target image sub-block according to any one of the following formulas:
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
wherein Idx2 represents third indication information used to indicate the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-blocks.
31. The encoder according to claim 29, characterized in that the processor is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
32. The encoder according to claim 29, characterized in that the processor is specifically configured to perform motion compensation processing on the first target image sub-block according to the reference information.
33. The encoder according to claim 29, characterized in that the processor is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
34. The encoder according to claim 29, characterized in that the processor is specifically configured to perform entropy encoding on the first indication information, so that the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
35. The encoder according to claim 29, characterized in that the processor is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is encoded using reference information; and
configured to perform entropy encoding on the first indication information according to the context.
36. A decoder for image processing, characterized in that the decoder comprises:
a bus;
a processor connected to the bus; and
a memory connected to the bus;
wherein the processor calls, through the bus, a program stored in the memory, so as to obtain first indication information from a target bitstream;
to, when the motion information of a first base layer image sub-block corresponding to a first target image sub-block of a target image block is empty, determine, based on the first indication information, a second target image sub-block according to the size of the target image block, the size of each target image sub-block included in the target image block, and second indication information used to indicate the position of the first target image sub-block in the target image block;
to determine, according to the motion information of the second target image sub-block, first reference information for decoding the first target image sub-block, wherein the first base layer image sub-block is an image block in a base layer image, the target image block is located in an enhancement layer image, the base layer image corresponds to the enhancement layer image, and the spatial position of the first base layer image sub-block in the base layer image corresponds to the spatial position of the first target image sub-block in the enhancement layer image; and
to decode the target bitstream to obtain the target image block.
37. The decoder according to claim 36, characterized in that the processor is specifically configured to determine the second target image sub-block according to any one of the following formulas:
Idx2 = Idx1/N × N + ((Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (Idx1%N/(N/4)%2)) × N/4;
Idx2 = Idx1/N × N + ((1 - Idx1%N/(N/2)) × 2 + (1 - Idx1%N/(N/4)%2)) × N/4;
wherein Idx2 represents third indication information used to indicate the position of the second target image sub-block in the target image block, Idx1 represents the second indication information, and N is determined according to the size of the target image block and the size of the target image sub-blocks.
38. The decoder according to claim 36, characterized in that the processor is specifically configured to determine that the first reference information is zero motion information if the motion information of the second target image sub-block is empty.
39. The decoder according to claim 36, characterized in that the processor is specifically configured to perform motion compensation processing on the first target image sub-block according to the first reference information.
40. The decoder according to claim 36, characterized in that the processor is specifically configured to perform deblocking filtering processing on the pixels located near the boundaries between the target image sub-blocks.
41. The decoder according to claim 36, characterized in that the processor is specifically configured to obtain the first indication information from the target bitstream, wherein the first indication information is adjacent to the skip mode flag or the merge (MERGE) mode flag information in the target bitstream.
42. The decoder according to claim 36, characterized in that the processor is specifically configured to determine a context according to whether a reference image block located at a preset position in the enhancement layer image is decoded using reference information; and
configured to perform entropy decoding according to the context, so as to determine the first indication information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210375019.5A CN103716629B (en) | 2012-09-29 | 2012-09-29 | Image processing method, device, coder and decoder |
PCT/CN2013/084504 WO2014048372A1 (en) | 2012-09-29 | 2013-09-27 | Method and device for image processing, coder and decoder |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210375019.5A CN103716629B (en) | 2012-09-29 | 2012-09-29 | Image processing method, device, coder and decoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103716629A CN103716629A (en) | 2014-04-09 |
CN103716629B true CN103716629B (en) | 2017-02-22 |
Family
ID=50387015
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210375019.5A Active CN103716629B (en) | 2012-09-29 | 2012-09-29 | Image processing method, device, coder and decoder |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103716629B (en) |
WO (1) | WO2014048372A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2815734C1 (en) * | 2023-03-31 | 2024-03-21 | Хуавэй Текнолоджиз Ко., Лтд. | Method and device for motion information storage |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10567755B2 (en) | 2014-07-06 | 2020-02-18 | Lg Electronics Inc. | Method for processing video signal, and apparatus therefor |
CN109996075B (en) * | 2017-12-29 | 2022-07-12 | 华为技术有限公司 | Image decoding method and decoder |
CN112040243B (en) * | 2018-06-04 | 2021-06-29 | 华为技术有限公司 | Method and device for obtaining motion vector |
CN117714717A (en) * | 2018-09-10 | 2024-03-15 | 华为技术有限公司 | Video decoding method and video decoder |
HUE064061T2 (en) | 2019-08-26 | 2024-02-28 | Huawei Tech Co Ltd | Method and apparatus for motion information storage |
CN112598572B (en) * | 2019-10-01 | 2022-04-15 | 浙江大学 | Method and device for screening subblock images and processing units |
CN114339262B (en) * | 2020-09-30 | 2023-02-14 | 华为技术有限公司 | Entropy encoding/decoding method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1764280A (en) * | 2004-10-21 | 2006-04-26 | 三星电子株式会社 | Method and apparatus based on multilayer effective compressing motion vector in video encoder |
CN101198064A (en) * | 2007-12-10 | 2008-06-11 | 武汉大学 | Movement vector prediction method in resolution demixing technology |
CN101755458A (en) * | 2006-07-11 | 2010-06-23 | 诺基亚公司 | Scalable video coding |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100704626B1 (en) * | 2005-02-07 | 2007-04-09 | 삼성전자주식회사 | Method and apparatus for compressing multi-layered motion vectors |
US8315308B2 (en) * | 2006-01-11 | 2012-11-20 | Qualcomm Incorporated | Video coding with fine granularity spatial scalability |
-
2012
- 2012-09-29 CN CN201210375019.5A patent/CN103716629B/en active Active
-
2013
- 2013-09-27 WO PCT/CN2013/084504 patent/WO2014048372A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1764280A (en) * | 2004-10-21 | 2006-04-26 | 三星电子株式会社 | Method and apparatus based on multilayer effective compressing motion vector in video encoder |
CN101755458A (en) * | 2006-07-11 | 2010-06-23 | 诺基亚公司 | Scalable video coding |
CN101198064A (en) * | 2007-12-10 | 2008-06-11 | 武汉大学 | Movement vector prediction method in resolution demixing technology |
Also Published As
Publication number | Publication date |
---|---|
CN103716629A (en) | 2014-04-09 |
WO2014048372A1 (en) | 2014-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103716629B (en) | Image processing method, device, coder and decoder | |
CN104796709B (en) | The method and apparatus that the coding unit of image boundary is encoded and decodes | |
CN105072442B (en) | The equipment that video data is decoded | |
CN105532001B (en) | For using the difference vector based on depth to carry out interlayer coding method and coding/decoding method and equipment to video | |
CN104811703B (en) | The coding method of video and the coding/decoding method and device of device and video | |
CN101313578B (en) | Method and apparatus for defining and reconstructing regions of interest in scalable video coding | |
CN103959790B (en) | Scanning of prediction residuals in high efficiency video coding | |
CN104935942B (en) | The method that intra prediction mode is decoded | |
CN103119945B (en) | The method and apparatus that image is coded and decoded by intra prediction | |
CN104780380B (en) | Inter-frame prediction method | |
CN106797467A (en) | Video coding and decoding method and apparatus for image completion region | |
CN105141955A (en) | Apparatus for encoding and decoding image by skip encoding and method for same | |
CN106464889A (en) | Inter-layer video decoding method and apparatus therefor performing sub-block-based prediction, and inter-layer video encoding method and apparatus therefor performing sub-block-based prediction | |
CN103067704B (en) | A kind of method for video coding of skipping in advance based on coding unit level and system | |
CN103875245A (en) | Layered signal decoding and signal reconstruction | |
CN103716631B (en) | For the method for image procossing, device, encoder | |
CN106031175A (en) | Interlayer video encoding method using brightness compensation and device thereof, and video decoding method and device thereof | |
CN102497545B (en) | Content adaptive and art directable scalable video coding | |
CN1589028B (en) | Predicting device and method based on pixel flowing frame | |
CN106105208A (en) | Scalable video/coding/decoding method and equipment | |
CN107005710A (en) | Multi-view image coding/decoding method and device | |
CN101584220B (en) | Method and system for encoding a video signal, encoded video signal, method and system for decoding a video signal | |
CN110495178A (en) | The device and method of 3D Video coding | |
CN107005705A (en) | The method and apparatus for being encoded or being decoded to multi-layer image using inter-layer prediction | |
WO2013160460A1 (en) | Scalable encoding and decoding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |