
WO2022206199A1 - Method, apparatus and system for performing image processing in a video decoding device - Google Patents

Method, apparatus and system for performing image processing in a video decoding device

Info

Publication number
WO2022206199A1
WO2022206199A1 (PCT/CN2022/076367)
Authority
WO
WIPO (PCT)
Prior art keywords
decoded
block
memory
power consumption
preset
Prior art date
Application number
PCT/CN2022/076367
Other languages
English (en)
French (fr)
Inventor
赵娟萍
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2022206199A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H04N19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements

Definitions

  • the present application belongs to the technical field of electronic devices, and in particular, relates to a method, device, storage medium, electronic device and system for performing image processing in a video decoding device.
  • the video decoding apparatus may decode video images.
  • data of multiple frames of decoded video images are usually referenced.
  • Embodiments of the present application provide a method, an apparatus, a storage medium, and an electronic device for performing image processing in a video decoding apparatus, which can reduce the power consumption of the video decoding apparatus.
  • an embodiment of the present application provides a method for performing image processing in a video decoding device, the method comprising: acquiring a video code stream; determining one or more reference positions from the video code stream; determining the reference times of the one or more reference positions; determining, from the one or more reference positions according to a preset power consumption threshold and the reference times of the one or more reference positions, a reference position that needs to be stored in a preset memory, and storing it in the preset memory, wherein the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold; and decoding the object to be decoded according to the reference position.
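  • The following Python sketch (not from the application; the data structure, names and the per-byte energy figure are illustrative assumptions) summarizes this frame-level flow: a reference position is selected for the preset memory only if storing and repeatedly reading it there stays at or below the preset power consumption threshold.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReferencePosition:
    """A reference image frame, reference slice or reference area found in the code stream."""
    name: str
    reference_times: int   # how often the object to be decoded refers to it
    data_bytes: int        # amount of image data at this reference position

def select_for_preset_memory(positions: List[ReferencePosition],
                             energy_per_byte_preset: float,
                             preset_power_threshold: float) -> List[ReferencePosition]:
    """Keep only the reference positions whose write plus repeated reads in the
    low-power preset memory stay at or below the preset power consumption threshold."""
    selected = []
    for pos in positions:
        # one store into the preset memory plus `reference_times` reads from it
        energy = (1 + pos.reference_times) * pos.data_bytes * energy_per_byte_preset
        if energy <= preset_power_threshold:
            selected.append(pos)
    return selected
```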
  • an embodiment of the present application provides a method for performing image processing in a video decoding device, the method comprising: acquiring a video code stream; obtaining one or more reference motion vectors (MV) according to the video code stream; obtaining one or more corresponding reference blocks from one or more image frames of the video code stream according to the one or more reference motion vectors; determining the reference times of the one or more reference blocks; determining, from the one or more reference blocks according to a preset power consumption threshold and the reference times of the one or more reference blocks, one or more reference blocks that need to be stored in the preset memory, and storing them in the preset memory, wherein the power consumption generated by storing the reference block in the preset memory and reading the reference block from the preset memory is less than or equal to the preset power consumption threshold; and decoding the block to be decoded or a sub-block of the block to be decoded according to the reference block.
  • an embodiment of the present application provides a device for performing image processing in a video decoding device, the device comprising:
  • an acquisition module, configured to acquire the video code stream;
  • a first determining module, configured to determine one or more reference positions from the video code stream;
  • a second determining module, configured to determine the reference times of the one or more reference positions;
  • a third determining module, configured to determine, from the one or more reference positions according to the preset power consumption threshold and the reference times of the one or more reference positions, the reference position that needs to be stored in the preset memory, and to store it in the preset memory, wherein the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold; and
  • a decoding module, configured to decode the object to be decoded according to the reference position.
  • an embodiment of the present application provides an image processing apparatus, and the apparatus includes:
  • the first acquisition module is used to acquire the video stream
  • a second obtaining module configured to obtain one or more reference motion vectors according to the video code stream
  • a third obtaining module configured to obtain one or more corresponding reference blocks from one or more image frames of the video code stream according to the one or more reference motion vectors
  • a first determining module configured to determine the reference times of the one or more reference blocks
  • a third determining module, configured to determine, from the one or more reference blocks according to the preset power consumption threshold and the reference times of the one or more reference blocks, one or more reference blocks that need to be stored in the preset memory, and to store the reference block in the preset memory, wherein the power consumption generated by storing the reference block in the preset memory and reading the reference block from the preset memory is less than or equal to the preset power consumption threshold; and
  • a decoding module configured to decode a block to be decoded or a sub-block of the block to be decoded according to the reference block.
  • an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program is executed on a computer, it causes the computer to execute the method for performing image processing in a video decoding apparatus provided by the embodiments of the present application.
  • the embodiments of the present application further provide an electronic device, including a memory, a processor, and a video decoding apparatus, where the processor is configured to execute, by invoking the computer program stored in the memory, the method for performing image processing in a video decoding apparatus provided in the embodiments of the present application.
  • an embodiment of the present application further provides an image processing system, including a video decoding device, a first memory, and a second memory, wherein the power consumption of the first memory is greater than the power consumption of the second memory; the first memory stores reference positions whose reference times is one, or stores reference positions whose reference times are one as well as reference positions whose reference times are multiple, and the second memory stores reference positions whose reference times are multiple.
  • FIG. 1 is a first schematic flowchart of a method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a video decoding system in the related art.
  • FIG. 3 is a schematic diagram of data storage in a video decoding apparatus in the related art.
  • FIG. 4 is a schematic diagram of increasing the number of channels of a dynamic random access memory (DRAM) for data access in the related art.
  • FIG. 5 is a schematic diagram of a power consumption curve when reading and writing data from a multi-channel DRAM during video decoding in the related art.
  • FIG. 6 is a scene schematic diagram of a reference relationship between image frames in an image group in a current video code stream provided by an embodiment of the present application.
  • FIG. 7 is a second schematic flowchart of a method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram illustrating a comparison of the energy consumed by a static random access memory (Static Random-Access Memory, SRAM) and a dynamic random access memory when reading data provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a scenario in which image data that is referenced multiple times is stored in a system cache (system cache, Sys$) or a system buffer memory (System Buffer, SysBuf) provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a scene for roughly analyzing the reference relationship of multiple image frames in a video code stream according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a video decoding system using a system cache provided by an embodiment of the present application.
  • FIG. 12 is another schematic structural diagram of a video decoding system using a system cache provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a video decoding system using a system buffer memory provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a power consumption curve when reading and writing data from Sys$ or SysBuf according to an embodiment of the present application.
  • FIG. 15 is a third schematic flowchart of a method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a scenario of a reference relationship between blocks in each image frame in an image group in a video code stream provided by an embodiment of the present application.
  • FIG. 17 is a fourth schematic flowchart of a method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a scene for finely analyzing the reference relationship of blocks in multiple image frames in a video code stream provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a scene decoded by a video decoding apparatus provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 21 is another schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 22 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 23 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 25 is another schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 1 is a first schematic flowchart of a method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • the method for performing image processing in a video decoding apparatus can be applied to a video decoding apparatus.
  • the flow of the method for performing image processing in a video decoding device may include:
  • the video decoding apparatus may decode video images.
  • data of multiple frames of decoded video images are usually referenced.
  • the power consumption of the video decoding apparatus is relatively large.
  • FIG. 2 is a schematic structural diagram of a video decoding system in the related art.
  • the video decoding system includes a central processing unit (Central Processing Unit/Processor, CPU), a video decoding device, a display processor (Display Processing Unit, DISP) and a dynamic random access memory controller (Dynamic Random Access Memory Controller, DRAMC).
  • Video decoding devices attach great importance to cost, so DRAM is usually used as the main storage space.
  • FIG. 3 is a schematic diagram of data storage in a video decoding apparatus in the related art.
  • the bit stream (Bitstreams), the image frame (Image frame) to be buffered and the temporary data (Temporary data) are all stored in the DRAM in the video decoding device.
  • the temporary data may be a Temporal Motion Vector (TMV) and other data.
  • the video decoding device can adopt its internal cache strategy for various video bitstreams.
  • the video bitstream may conform to the first or second video/audio lossy compression standard organized by the Moving Picture Experts Group (MPEG-1, MPEG-2), the fourth such standard (MPEG-4), Essential Video Coding (MPEG-5/EVC), the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) low bit rate video coding standard H.263, Advanced Video Coding (H.264/AVC), High Efficiency Video Coding (H.265/HEVC), Versatile Video Coding (H.266/VVC), Video Predictor 8 (VP8), Video Predictor 9 (VP9), Open Media Alliance Video Coding 1 (Alliance for Open Media Video 1, AV1), and so on.
  • FIG. 4 is a schematic diagram of data access by increasing the number of DRAM channels in the related art.
  • the bandwidth and frequency can be increased to increase the data throughput speed of the DRAM, but it will cause greater power consumption.
  • the bandwidth of the system DRAM consumes a large amount of energy. Whether the video decoding device operates in real time or not, it is important to maintain the highest efficiency; however, with the method in the related art, having the video decoding device complete decoding within the expected time causes huge DRAM power consumption.
  • FIG. 5 is a schematic diagram of a power consumption curve when reading and writing data from a multi-channel DRAM during video decoding in the related art.
  • the abscissa is the position of the reference image frame, for example, the top position, middle position, and bottom position of the reference image frame
  • the ordinate is the power consumption of reading and writing data during video decoding.
  • a video code stream is obtained, and the video code stream may include one or more groups of pictures (Group of pictures, GOP), and one image group includes multiple image frames.
  • the video code stream includes an image group as an example for description.
  • the acquired video code stream is an encoded video code stream, and the image frames in the video code stream may not yet be decoded, or some image frames may have been decoded, and another part of the image frames may be waiting to be decoded. It should be noted that the decoded image frame can be used as a reference image frame for subsequent decoding of other image frames.
  • the reference position may include a reference image frame, a reference slice (slice) or a reference area.
  • the to-be-decoded image frame is decoded with reference to the reference image frame, reference slice or reference area, that is, the to-be-decoded image frame needs to refer to an already decoded image frame, slice or area.
  • the image frame to be decoded is an image frame that has not yet been decoded. It should be noted that one slice contains part or all of the data of one image frame; in other words, one image frame can be encoded as one or more slices.
  • a slice contains at least one block and at most the data of an entire image frame.
  • the number of slices into which the image of an image frame is divided is not necessarily the same for every frame.
  • the purpose of designing slices in H.264 is mainly to prevent the spread of errors.
  • the decoding operations of different slices are independent; the data referenced when decoding a certain slice cannot cross the boundary of that slice.
  • the video code stream can be analyzed by the software or hardware of the video decoding device, and one or more reference positions can be roughly determined from the video code stream; for example, one or more reference image frames, reference slices or reference areas are determined from the video code stream, and the determined reference image frames, reference slices or reference areas form a reference queue, so that these reference image frames, reference slices or reference areas can be referred to when decoding the to-be-decoded image frame.
  • the reference area may be an area in an image frame, and the area needs to be referenced by the image to be decoded.
  • the reference area may be an area in a slice, and the area needs to be referenced by the image to be decoded, and so on.
  • the reference times of the one or more reference positions may be determined.
  • FIG. 6 is a scene schematic diagram of a reference relationship between image frames in an image group in a video code stream provided by an embodiment of the present application.
  • FIG. 6 is taken as an example that one image group includes 9 image frames. In other embodiments, the number of image frames included in the image group can be adjusted according to specific requirements.
  • the display order of the picture frames may or may not be the same as the decoding order. The display order of the picture frames in the picture group as shown in FIG. 6 is different from the decoding order.
  • each image frame is referenced by other image frames in the image group, that is, the number of times each image frame is referenced by the image frame pointed by the arrow can be determined according to the direction of the arrow.
  • the image frame can be used as a reference image frame.
  • for example, the I frame in FIG. 6 is referenced four times, the B frame with display order 1 is referenced twice, the B frame with display order 3 is referenced once, the P frame with display order 4 is referenced five times, the B frame with display order 6 is referenced twice, and so on.
  • in FIG. 6, only a rough frame-level analysis is performed through the relevant hardware or software of the video decoding apparatus; the reference relationship of each block within a reference image frame cannot be obtained by this rough analysis method.
  • the I frame with display order 0, the B frame with display order 2, the P frame with display order 4, the B frame with display order 6 and the B frame with display order 8 are each referenced multiple times by other image frames in the image group.
  • the part that needs to be referenced will go in and out of DRAM many times, that is, it will be read from DRAM many times; if a piece of data is read many times, the energy consumed is multiplied by the number of repeated reads. In order to improve the compression rate, today's video standards commonly encode with multiple reference image frames, which means that some data are repeatedly used because of their high temporal correlation. In addition, with more and more high-frame-rate video streams, the repeatability in the time domain increases and the same reference image frames are reused even more, and most video encoders generate such video streams. If the parts that are read repeatedly during decoding are stored in a low-power storage medium, the energy consumption during video playback will be greatly reduced.
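  • As a purely illustrative calculation (the per-byte energies below are assumptions, not figures from the application; they only mirror the roughly hundredfold DRAM/SRAM gap discussed later), the following sketch shows how caching a multiply-read reference frame in SRAM-based storage cuts the read energy:

```python
# Illustrative numbers only: per-byte read energies are assumed, not taken from
# the application; they reflect the roughly 100x DRAM/SRAM gap discussed later.
E_DRAM_PER_BYTE = 100.0   # arbitrary energy units per byte read from DRAM
E_SRAM_PER_BYTE = 1.0     # arbitrary energy units per byte read from SRAM (Sys$/SysBuf)

frame_bytes = 1920 * 1080 * 3 // 2   # one 1080p YUV 4:2:0 reference frame
reference_times = 5                  # frame referenced five times during decoding

dram_only   = reference_times * frame_bytes * E_DRAM_PER_BYTE
sram_cached = frame_bytes * E_DRAM_PER_BYTE + reference_times * frame_bytes * E_SRAM_PER_BYTE
#             ^ one fill from DRAM              ^ repeated reads served by SRAM

print(f"DRAM only : {dram_only:.3e}")
print(f"With Sys$ : {sram_cached:.3e} ({sram_cached / dram_only:.1%} of DRAM-only energy)")
```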
  • a reference position that needs to be stored in the preset memory may be determined from one or more reference positions, for example, a reference image frame, a reference strip, or a reference area that has been referenced multiple times may be stored in the preset memory.
  • the preset memory it is convenient to read the image data of the reference image frame, the reference slice or the reference area when the object to be decoded (such as the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area) is subsequently decoded.
  • the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold.
  • the number of times the reference image frame, reference slice or reference area stored in the preset memory is referenced by the object to be decoded can be multiple; therefore, during decoding, the reference image frame, reference slice or reference area stored in the preset memory will be read multiple times.
  • the object to be decoded can refer to the image data of the reference image frame, reference slice or reference area, that is, the object to be decoded is decoded according to the read image data of the reference image frame, reference slice or reference area.
  • the object to be decoded may include a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area. The block to be decoded in the image frame to be decoded is then decoded according to the read image data of the reference image frame, the block to be decoded in the slice to be decoded is decoded according to the read image data of the reference slice, or the block to be decoded in the area to be decoded is decoded according to the read image data of the reference area.
  • the video decoding apparatus may acquire a video code stream, and determine one or more reference positions from the video code stream. Then, the reference times of the one or more reference positions are determined, and the reference position that needs to be stored in the preset memory is determined from the one or more reference positions according to the preset power consumption threshold and the reference times of the one or more reference positions , and store it in the preset memory, and the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold. After that, the object to be decoded is decoded according to the reference position.
  • the determined image data of the reference position to be stored in the preset memory is stored in the preset memory with low power consumption, so as to achieve the purpose of reducing the power consumption of the video decoding apparatus. Therefore, the embodiments of the present application can reduce the power consumption of the video decoding apparatus.
  • FIG. 7 is a second schematic flowchart of a method for performing image processing in a video decoding apparatus according to an embodiment of the present application.
  • the method for performing image processing in a video decoding apparatus can be applied to a video decoding apparatus.
  • the flow of the method for performing image processing in a video decoding device may include:
  • for the specific implementation of step 201, reference may be made to the embodiment of step 101, and details are not described herein again.
  • determine one or more reference image frames, reference slices or reference areas from the video code stream according to the frame header information of the image frames in the video code stream or the slice header (Slice header) information of one or more slices in the image frames.
  • the reference relationship of each image frame may be determined according to the frame header information of each image frame in the video code stream or the slice header information of the slice in the image frame.
  • the data of each image frame can be regarded as a Network Abstraction Layer (NAL) unit
  • the frame header information is used to distinguish the beginning of an image frame
  • the frame header information can also be regarded as the NAL unit header information.
  • the frame header information can determine which image frame it is, that is, the reference image frame.
  • the slice header is used to store the overall information of the slice, such as the type of the current slice, etc.
  • the slice header information can be used to determine which slice it is, so the reference slice can be determined.
  • the reference area can be an area in an image frame or an area in a strip.
  • 203: Determine the reference times of one or more reference image frames, reference slices or reference regions by using preset parameters, where the preset parameters include any one or more of the following: network abstraction layer parsing parameters, slice header parsing parameters, reference picture list modification parameters and reference picture frame marking parameters.
  • the reference position may include a reference image frame, a reference slice or a reference area.
  • after one or more reference image frames, reference slices or reference areas are determined from the video code stream, the number of times each reference image frame, reference slice or reference area is referenced by the object to be decoded needs to be further determined, so as to obtain the read times of each reference image frame, reference slice or reference area.
  • the reference times of one or more reference image frames, reference strips or reference areas can be determined through preset parameters.
  • the preset parameters may include any one or more of the following: network abstraction layer parsing parameters, slice header parsing parameters, reference picture list correction parameters, reference picture frame marking parameters, and the like.
  • the network abstraction layer parsing parameter may be the nal_unit() function
  • the slice header parsing parameter may be the slice_header() function
  • the reference picture list modification parameter may be the ref_pic_list_modification() function
  • the reference picture frame marking parameter may be the dec_ref_pic_marking() function.
  • the nal_unit() function parses the NAL units that start with 00 00 00 01 or 00 00 01 from the H.264 code stream, and then directly fills in the length of the NAL unit.
  • the nal_ref_idc variable represents the reference level, which represents the reference situation of other image frames. The higher the reference level, the more important the reference image frame is.
  • the num_ref_idx_active_override_flag variable represents whether the actual number of available reference image frames of the current image frame needs to be overloaded.
  • the syntax elements num_ref_idx_l0_active_minus1 and num_ref_idx_l1_active_minus1 already present in the picture parameter set specify the number of reference frames actually available in the current reference picture frame queue. This pair of syntax elements can be overridden in the slice header to give more flexibility to a particular image frame.
  • the position of the stripe can be known through the num_ref_idx_active_override_flag variable.
  • the ref_pic_list_modification() function is the reference picture list modification function, which can be stored in the structure of the slice header.
  • the definition of the ref_pic_list_modification() function is as follows: when ref_pic_list_modification_flag_l0 is 1, the reference picture list RefPicList0 is modified; when ref_pic_list_modification_flag_l1 is 1, the reference picture list RefPicList1 is modified.
  • the dec_ref_pic_marking() function identifies the decoded reference picture frame, and the marking operation is used to move the reference picture frame into or out of the reference picture frame queue, specifying the symbol of the reference picture.
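  • As a hedged illustration of how these parsed parameters can drive the reference-count analysis (the parser output format, the field names RefPicList0/RefPicList1 and the helper itself are assumptions, not the application's implementation), a minimal Python sketch:

```python
from collections import Counter
from typing import Dict, Iterable

def count_frame_reference_times(slices: Iterable[Dict]) -> Counter:
    """Accumulate how many times each reference image frame appears in the
    active reference picture lists of the slices to be decoded.

    `slices` is assumed to be the output of a hypothetical H.264 parser, where
    each entry carries the reference lists built from slice_header(),
    ref_pic_list_modification() and dec_ref_pic_marking()."""
    counts: Counter = Counter()
    for sl in slices:
        for ref_frame in sl.get("RefPicList0", []) + sl.get("RefPicList1", []):
            counts[ref_frame] += 1
    return counts

# Example: three slices referring to frames 0..3
parsed = [
    {"RefPicList0": [1, 2], "RefPicList1": [2]},
    {"RefPicList0": [0, 1], "RefPicList1": [2]},
    {"RefPicList0": [1, 3], "RefPicList1": []},
]
print(count_frame_reference_times(parsed))   # Counter({1: 3, 2: 3, 0: 1, 3: 1})
```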
  • the preset memory may include a first memory and a second memory, and the power consumption of the first memory is greater than the power consumption of the second memory.
  • the first memory may include a dynamic random access memory provided outside the video decoding device.
  • the second memory may include a system cache provided outside the video decoding device. After determining the number of times that one or more reference image frames, reference slices or reference regions are referenced by the object to be decoded, if the number of references is multiple, the one or more reference image frames, reference slices or reference regions whose reference times are multiple can be stored in the system cache and also stored in the dynamic random access memory, to be read when the video decoding device decodes.
  • the amount of data of the one or more reference image frames, reference slices or reference regions with multiple reference times that is stored in the system cache can be adjusted.
  • the one or more reference image frames, reference slices or reference regions whose reference times are multiple can be stored in the system cache first and then stored in the dynamic random access memory; or they can be stored in the dynamic random access memory and the system cache simultaneously; or they can be stored in the dynamic random access memory first and then stored in the system cache.
  • the reference times of a reference image frame, reference slice or reference area may be the number of times it is referenced by the same image frame to be decoded, slice to be decoded or area to be decoded, or the number of times it is referenced by other image frames, slices or areas to be decoded in the image group.
  • when a reference image frame, reference slice or reference area with multiple reference times is stored in the system cache and in the first memory, it may be stored in units of frames, slices or areas.
  • the power consumption of the dynamic random access memory is greater than that of the system cache, and the power consumption generated by storing the reference image frame, reference slice or reference area in the dynamic random access memory and the system cache and reading it from them is less than the preset power consumption threshold, which can reduce the power consumption when reading and writing data.
  • the preset power consumption threshold may be considered as the power consumption generated when the reference image frame, reference strip or reference area are all stored and read by the dynamic random access memory.
  • the first memory may include a dynamic random access memory disposed outside the video decoding device
  • the second memory may include a system buffer memory provided outside the video decoding device. After determining the number of times that one or more reference image frames, reference slices or reference areas are referenced by the object to be decoded, if the number of references is multiple, the one or more reference image frames, reference slices or reference areas whose reference times are multiple may be stored in the system buffer memory, to be read when the video decoding device decodes.
  • the amount of data of the one or more reference image frames, reference slices or reference regions with multiple reference times that is stored in the system buffer memory can be adjusted.
  • the reference times of a reference image frame, reference slice or reference area may be the number of times it is referenced by the same image frame to be decoded, slice to be decoded or area to be decoded, or the number of times it is referenced by other image frames, slices or areas to be decoded in the image group.
  • when a reference image frame, reference slice or reference area with multiple reference times is stored in the system buffer memory, it may be stored in units of frames, slices or areas.
  • the power consumption of the dynamic random access memory is greater than that of the system buffer memory, and the power consumption generated by storing the reference image frame, reference slice or reference area in the dynamic random access memory and the system buffer memory and reading it from them is less than the preset power consumption threshold, which can reduce the power consumption when reading and writing data.
  • the preset power consumption threshold may be considered as the power consumption generated when the reference image frame, reference strip or reference area are all stored and read by the dynamic random access memory.
  • the first memory may include a dynamic random access memory set outside the video decoding device. After determining the number of times that one or more reference image frames, reference slices or reference regions are referenced by the object to be decoded, if the number of times of reference is one , then one or more reference image frames, reference slices or reference regions whose reference times are once may be stored in the dynamic random access memory to wait for the video decoding apparatus to read when decoding.
  • the reference times of a reference image frame, reference slice or reference area may be the number of times it is referenced by the same image frame to be decoded, slice to be decoded or area to be decoded, or the number of times it is referenced by other image frames, slices or areas to be decoded in the image group.
  • one or more reference image frames, reference slices or reference regions whose reference times are once will be read once when they are referenced by the object to be decoded (for example, a reference image frame whose reference times is once will be read once when referenced by the image frame to be decoded, a reference slice whose reference times is once will be read once when referenced by the slice to be decoded, and a reference region whose reference times is once will be read once when referenced by the region to be decoded).
  • the power consumption generated when the reference image frame, reference slice or reference area is stored in the dynamic random access memory and read from the dynamic random access memory is less than the preset power consumption threshold, which can reduce the power consumption when reading and writing data.
  • the preset power consumption threshold may be considered as the power consumption generated when the reference image frame, reference strip or reference area are all stored and read by the dynamic random access memory.
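  • Putting the three cases above together, a minimal placement rule might look as follows (a sketch under the assumptions that the memories are identified by the names Sys$, SysBuf and DRAM and that the reference counts are already known; it is not the application's implementation):

```python
from typing import Dict, List

def choose_storage(reference_times: Dict[str, int],
                   use_sys_buf: bool = False) -> Dict[str, List[str]]:
    """Decide where each reference image frame / slice / area is kept, following
    the rule described above: referenced multiple times -> low-power second
    memory (Sys$ together with DRAM, or SysBuf alone); referenced once -> DRAM."""
    placement: Dict[str, List[str]] = {}
    for ref, times in reference_times.items():
        if times > 1:
            placement[ref] = ["SysBuf"] if use_sys_buf else ["Sys$", "DRAM"]
        else:
            placement[ref] = ["DRAM"]
    return placement

print(choose_storage({"frame0": 1, "frame1": 3, "frame2": 3, "frame3": 1}))
# {'frame0': ['DRAM'], 'frame1': ['Sys$', 'DRAM'], 'frame2': ['Sys$', 'DRAM'], 'frame3': ['DRAM']}
```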
  • the first memory may include a dynamic random access memory disposed outside the video decoding device, that is, the first memory may include a DRAM disposed outside the video decoding device, and the second memory may include a system cache or system buffer memory outside the video decoding device, that is, the second memory may include Sys$ or SysBuf provided outside the video decoding device.
  • the second memory may also be other low-power consumption memory or the like.
  • Sys$ or SysBuf is composed of multiple SRAMs
  • the first memory may be a DRAM
  • the power consumption of the DRAM is greater than the power consumption of the Sys$ or SysBuf outside the video decoding device
  • the power consumption generated by storing reference image frames, reference slices or reference areas in Sys$ and DRAM and reading them from Sys$ and DRAM is less than the preset power consumption threshold, or the power consumption generated by storing reference image frames, reference slices or reference areas in SysBuf and DRAM and reading them from SysBuf and DRAM is less than the preset power consumption threshold. In this way, the power consumption when reading and writing data can be reduced, and the preset power consumption threshold can be considered as the power consumption generated when the reference image frames, reference slices or reference areas are all stored in and read from DRAM.
  • FIG. 8 is a schematic diagram illustrating a comparison of energy consumed when reading data in a static random access memory and a dynamic random access memory according to an embodiment of the present application.
  • the difference in energy consumption between reading data in SRAM and reading data in DRAM is about 100 times, that is, the power consumption of reading data in SRAM is far less than the power consumption of reading data in DRAM.
  • the amount of data moved in and out of high-energy-consuming storage (such as DRAM) for video stream parsing is limited, so the parsing steps, whether NAL unit parsing and slice header parsing or entropy decoding (Entropy decoding) of motion vectors and other symbols from the video stream, do not become a bottleneck in energy consumption.
  • the part that requires a high amount of data input and output is the data required by the motion compensation step, and the motion compensation step of the video decoding apparatus will require a large bandwidth provided by the DRAM.
  • the reference image frames that are referenced many times, or the parts of the reference image frames that are used many times, can be stored in low-energy storage in advance (such as Sys$ or SysBuf composed of SRAM), thereby ensuring that during video decoding the power consumption of reading data is controlled near the expected value and the hardware or software of the video decoding device can complete the decoding work as soon as possible.
  • FIG. 9 is a schematic diagram of a scenario in which image data that is referenced multiple times is stored in a system cache or a system buffer memory provided by an embodiment of the present application.
  • storing the image data that is referenced by the object to be decoded many times in Sys$ and DRAM, or storing it in SysBuf, can greatly reduce the overall energy consumption required for video playback.
  • the video decoding device After a rough frame-level analysis or estimation is performed by the relevant hardware or software of the video decoding device, it can be determined that some reference image frames will be used multiple times, and certain reference image frames can be determined.
  • the image frame is referenced multiple times by the image frame to be decoded.
  • the reference image frames with multiple reference times can be stored in the power-saving Sys$ and stored in the DRAM, or the reference image frames with multiple reference times can be stored only in the power-saving SysBuf, so that the video decoding device Try to maintain the expected low power consumption state, improve the usage time of the video decoding system and prevent the system from overheating.
  • the entire reference image frame in the video code stream or the image data in the reference image frame that is referenced by the object to be decoded multiple times is stored in a low-power storage space, for example, stored in Sys$ and DRAM, or only stored in SysBuf, etc., which can effectively maintain the power consumption of the entire video decoding system when the video decoding device operates, thereby improving user experience.
  • the prediction can be made by the hardware or software of the video decoding device actually analyzing the video stream, or it can be estimated from known factors, such as the application scenario or the group-of-pictures structure, which image data are suitable for being written into low-power memory such as Sys$ or SysBuf.
  • the reference times of these reference image frames are usually used as the priority.
  • the reference image frame with the most reference times is first stored in Sys$ or SysBuf, and so on, and the reference image frames with multiple reference times are stored in Sys$ or SysBuf in the order of the reference times from high to low.
  • the power consumption and energy consumption can be reduced to the expected target to the greatest extent.
  • because this judgment method is quite rough, it cannot be guaranteed to come 100% close to, or fall below, the expected power consumption saving target.
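  • A sketch of this priority rule (assuming a known capacity for the low-power memory and per-frame data sizes; the names and the greedy fill are illustrative, not the application's algorithm):

```python
from typing import Dict, List, Tuple

def fill_low_power_memory(reference_times: Dict[str, int],
                          data_bytes: Dict[str, int],
                          capacity_bytes: int) -> List[str]:
    """Store multiply-referenced frames in Sys$/SysBuf in descending order of
    reference count until the (assumed) capacity of the low-power memory is used."""
    candidates: List[Tuple[str, int]] = [
        (ref, times) for ref, times in reference_times.items() if times > 1
    ]
    candidates.sort(key=lambda item: item[1], reverse=True)  # most-referenced first

    stored, used = [], 0
    for ref, _ in candidates:
        size = data_bytes[ref]
        if used + size <= capacity_bytes:
            stored.append(ref)
            used += size
    return stored
```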
  • FIG. 10 is a schematic diagram of a scene for roughly analyzing the reference relationship of multiple image frames in a video code stream according to an embodiment of the present application.
  • the H.264 video stream is roughly analyzed.
  • from the header information of the NAL units or the slice header information, it can be analyzed that some image frames are referenced by other image frames more times than other image frames are referenced.
  • from this, the reference picture list can be known and the amount of data moved in and out of DRAM can be calculated; then, the image frames that can satisfy the target with the greatest probability are selected and stored in Sys$ or SysBuf.
  • if the reference count is one, the one or more reference image frames, reference slices or reference regions with one reference count may be stored in the DRAM for reading when the video decoding apparatus decodes.
  • the reference times of image frame 0 is once
  • the reference times of image frame 1 is three times
  • the reference times of image frame 2 is three times
  • the reference times of image frame 3 is once
  • the reference list contains image frame 0, image frame 1, image frame 2 and image frame 3; since image frame 1 and image frame 2 are referenced three times, image frame 1 and image frame 2 are stored in Sys$ and also in DRAM, or only in SysBuf, while image frame 0 and image frame 3 are stored in DRAM.
  • the access power consumption models of DRAM, system bus, Sys$, and SysBuf can be obtained through some simple measurements or experiments, and details are not repeated here. Assuming that there is already a power consumption model related to data flow in and out of DRAM, system bus, Sys$ and SysBuf, it is possible to calculate how much power consumption or power consumption is reduced corresponding to how much data access to DRAM is reduced. According to the code stream header information of the image frame or strip, it can be determined which image frames or strips need to be written into Sys$ or SysBuf, which can achieve the expected power consumption reduction value during decoding. This rough analysis method can achieve the expected power reduction, but not the optimal power reduction.
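  • Assuming such a measured power consumption model has been reduced to per-byte access energies (the numbers, names and the 1 MB frame size below are placeholders, not measured values from the application), the expected reduction from writing selected frames into Sys$ or SysBuf can be estimated as follows:

```python
from typing import Dict, Set

# Assumed per-byte access energies standing in for the measured power models
# mentioned above (DRAM, Sys$, SysBuf); arbitrary units.
ENERGY_PER_BYTE = {"DRAM": 100.0, "Sys$": 1.0, "SysBuf": 1.0}

def estimated_read_energy(reference_times: Dict[str, int],
                          data_bytes: Dict[str, int],
                          in_low_power: Set[str],
                          low_power_kind: str = "Sys$") -> float:
    """Estimate the energy of all reference reads during decoding when the frames
    in `in_low_power` are served from Sys$ or SysBuf and the rest from DRAM."""
    total = 0.0
    for ref, times in reference_times.items():
        kind = low_power_kind if ref in in_low_power else "DRAM"
        total += times * data_bytes[ref] * ENERGY_PER_BYTE[kind]
        if ref in in_low_power and low_power_kind == "Sys$":
            # one DRAM fill into the system cache before the repeated low-power reads
            total += data_bytes[ref] * ENERGY_PER_BYTE["DRAM"]
    return total

times = {"frame0": 1, "frame1": 3, "frame2": 3, "frame3": 1}
size = {k: 1_000_000 for k in times}   # 1 MB per frame, purely illustrative
baseline = estimated_read_energy(times, size, in_low_power=set())
cached   = estimated_read_energy(times, size, in_low_power={"frame1", "frame2"})
print(f"saving: {1 - cached / baseline:.1%}")
```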
  • FIG. 11 is a schematic structural diagram of a video decoding system using a system cache provided by an embodiment of the present application.
  • FIG. 12 is another schematic structural diagram of a video decoding system using a system cache provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a video decoding system using a system buffer memory provided by an embodiment of the present application.
  • Stored in Sys$ or SysBuf are reference image frames, reference strips or reference regions that are referenced multiple times.
  • FIG. 11 is only an architecture of the video decoding system when the system cache is used.
  • the video decoding system may also adopt other architectures.
  • the video decoding system also includes a display processor.
  • when the video decoding device needs to perform decoding, it can directly read from Sys$ the image data of the reference image frames, reference slices or reference areas whose reference times are multiple.
  • Sys$ also reads from DRAM through the DRAMC the image data of the reference image frames, reference slices or reference areas whose reference times are one, which are then read by the video decoding apparatus.
  • the power consumption can be correspondingly reduced according to requirements, and the storage mode of the measured image data can be intelligently selected, thereby reducing the power consumption of the video decoding device.
  • the image data storage location can be changed according to the frame reference relationship during decoding, so that the number of repeated reads served by a low-power memory such as Sys$ is appropriately increased; this appropriately reduces power consumption and ensures that the power consumption generated when the video decoding device moves data in and out always stays in the expected state. If the low-power memory such as Sys$ also has high-speed bandwidth, the bandwidth demand on DRAM can be further reduced.
  • the architecture shown in FIG. 12 and FIG. 13 is only one of the possible architectures; in specific applications, corresponding modifications can be made according to actual requirements, such as adding a display processor and so on.
  • read the image data of the required reference image frame, reference slice or reference area from the preset memory. If the image data of a reference image frame is read, the block to be decoded in the image frame to be decoded is decoded according to the read image data of the reference image frame; if the image data of a reference slice is read, the block to be decoded in the slice to be decoded is decoded according to the read image data of the reference slice; and if the image data of a reference area is read, the block to be decoded in the area to be decoded is decoded according to the read image data of the reference area.
  • an image frame, slice or region can be divided into non-overlapping blocks that form a rectangular array, where each block is a block of N×N pixels, for example a block of 4×4 pixels, 32×32 pixels, 128×128 pixels, etc.
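  • For illustration only (the block size and frame dimensions are arbitrary), a small helper that enumerates such an N×N block grid:

```python
from typing import Iterator, Tuple

def block_grid(width: int, height: int, n: int) -> Iterator[Tuple[int, int, int, int]]:
    """Yield (x, y, w, h) for the non-overlapping N x N blocks that tile an image
    frame, slice or region; edge blocks are clipped to the frame boundary."""
    for y in range(0, height, n):
        for x in range(0, width, n):
            yield x, y, min(n, width - x), min(n, height - y)

# e.g. a 1920x1080 frame split into 32x32 blocks
print(sum(1 for _ in block_grid(1920, 1080, 32)))   # 60 x 34 = 2040 blocks
```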
  • the original number of readings from DRAM can be divided into several readings from SRAM, and the other several readings from DRAM, which can reduce the power consumption of reading data as a whole. It should be noted that the number of times of reading from SRAM and the number of times of reading from DRAM can be adjusted to meet different power consumption requirements.
  • when reading the image data of a reference image frame, reference slice or reference area whose required reference times are multiple, it can be read from Sys$ first, and when the number of reads is greater than or equal to a preset count threshold, reading switches to the image data that has not yet been read from the DRAM.
  • DRAM consumes roughly 100 times more energy than SRAM; therefore, by reading part of the image data of the multiply-referenced reference image frame, reference slice or reference area from Sys$ and the other part from DRAM, the power consumption required for reading the data can be reduced.
  • alternatively, when reading the image data of a reference image frame, reference slice or reference area whose required reference times are multiple, the data is read directly from SysBuf.
  • DRAM consumes roughly 100 times more energy than SRAM; therefore, by reading the image data of the multiply-referenced reference image frame, reference slice or reference area from SysBuf, the power consumption of reading the data can be reduced.
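  • A sketch of this split-read policy (the class, method names and string return values are placeholders for real memory accesses; the threshold value is arbitrary):

```python
class ReferenceReader:
    """Sketch of the split-read policy described above: the first reads of a
    multiply-referenced reference frame/slice/area are served from Sys$, and once
    the read count reaches the preset threshold the remaining reads fall back to
    DRAM. read_from_sys_cache / read_from_dram stand in for real memory accesses."""

    def __init__(self, preset_read_threshold: int):
        self.preset_read_threshold = preset_read_threshold
        self.read_counts = {}

    def read(self, ref_id: str) -> str:
        count = self.read_counts.get(ref_id, 0)
        self.read_counts[ref_id] = count + 1
        if count < self.preset_read_threshold:
            return self.read_from_sys_cache(ref_id)
        return self.read_from_dram(ref_id)

    def read_from_sys_cache(self, ref_id: str) -> str:
        return f"{ref_id} from Sys$"     # placeholder for an SRAM read

    def read_from_dram(self, ref_id: str) -> str:
        return f"{ref_id} from DRAM"     # placeholder for a DRAM read

reader = ReferenceReader(preset_read_threshold=2)
print([reader.read("frame1") for _ in range(4)])
# ['frame1 from Sys$', 'frame1 from Sys$', 'frame1 from DRAM', 'frame1 from DRAM']
```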
  • if the image data of a reference image frame is read, the block to be decoded in the image frame to be decoded is decoded according to the read image data of the reference image frame; if the image data of a reference slice is read, the block to be decoded in the slice to be decoded is decoded according to the read image data of the reference slice; and if the image data of a reference area is read, the block to be decoded in the area to be decoded is decoded according to the read image data of the reference area.
  • FIG. 14 is a schematic diagram of a power consumption curve when reading and writing data from Sys$ or SysBuf according to an embodiment of the present application.
  • the video decoding device replaces a large amount of DRAM power consumption with the power consumption of Sys$ or SysBuf, which can greatly reduce the power consumption.
  • if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is multiple, the decoded block of the to-be-decoded block is stored in the system cache and in the dynamic random access memory.
  • that is, after a to-be-decoded block in the to-be-decoded image frame, to-be-decoded slice or to-be-decoded area is decoded, if its decoded block will subsequently be referenced multiple times when decoding other to-be-decoded blocks (for example, to-be-decoded blocks in other to-be-decoded image frames, to-be-decoded slices or to-be-decoded areas), the decoded block of the to-be-decoded block is stored in the system cache and in the dynamic random access memory, or stored in the system buffer memory.
  • if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is one, the decoded block of the to-be-decoded block is stored in the dynamic random access memory.
  • that is, if the decoded block of the to-be-decoded block will subsequently be referenced once when decoding other to-be-decoded blocks (for example, to-be-decoded blocks in other to-be-decoded image frames, to-be-decoded slices or to-be-decoded areas), the decoded block of the to-be-decoded block is stored in the dynamic random access memory.
  • then the other to-be-decoded blocks are decoded. If all the blocks to be decoded in the image frame to be decoded, the slice to be decoded or the area to be decoded have been decoded, other image frames, slices or areas are decoded, until the decoding of all image frames, slices or areas to be decoded is completed.
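  • A sketch of this write-back rule for decoded blocks (the memory names and the boolean switch between Sys$+DRAM and SysBuf are assumptions made for illustration):

```python
def store_decoded_block(block_id: str, future_reference_times: int,
                        use_sys_buf: bool = False) -> list:
    """Decide where a just-decoded block is written back, per the rule above:
    blocks that will later serve as reference data multiple times go to the
    system cache (plus DRAM) or to SysBuf, blocks referenced once go to DRAM only."""
    if future_reference_times > 1:
        return ["SysBuf"] if use_sys_buf else ["Sys$", "DRAM"]
    return ["DRAM"]

print(store_decoded_block("P4/block(3,7)", future_reference_times=5))  # ['Sys$', 'DRAM']
print(store_decoded_block("B3/block(0,0)", future_reference_times=1))  # ['DRAM']
```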
  • the embodiments of the present application are based on predictable data access behavior (ie, repeated reading behavior) during video decoding, so as to realize intelligent selection of a data storage mode to reduce power consumption of the video decoding apparatus.
  • the image data storage location can be changed according to the frame reference relationship during decoding, so that the number of repeated reads served by a low-power memory such as Sys$ is appropriately increased; this appropriately reduces power consumption and ensures that the power consumption generated when the video decoding device moves data in and out always stays in the expected state. If the low-power memory such as Sys$ also has high-speed bandwidth, the bandwidth demand on DRAM can be further reduced.
  • the embodiments of the present application can ensure that the power consumption of the video decoding device is controllable and that the hardware or software of the video decoding device can complete the decoding work as soon as possible. They make full use of the fact that the video decoding device may repeatedly read a reference image frame or reference slice many times: because this behavior can be anticipated, the storage characteristics of the repeatedly read data can be changed so that accessing the data saves power and moving data in and out does not introduce a power bottleneck, allowing the video decoding device to maintain its operating speed while reducing power consumption.
  • the speed of reading data is not limited by power consumption, so the video decoding device does not overheat.
  • the SRAM in Sys$ or SysBuf has low latency when reading and writing, which can improve the processing frame rate and reduce the response latency. Since the power consumption can be greatly reduced, the usage time of the battery in the video decoding device can be increased, and the user experience can be improved.
  • FIG. 15 is a third schematic flowchart of a method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • the method for performing image processing in a video decoding apparatus can be applied to a video decoding apparatus.
  • the flow of the method for performing image processing in a video decoding device may include:
  • for the specific implementation of step 301, reference may be made to the embodiment of step 101, and details are not described herein again.
  • each reference motion vector corresponds to a reference block.
  • the relative displacement between the reference block and the block to be encoded can be used as a reference motion vector.
  • refined analysis can be achieved.
  • the number of times each reference block is referenced by other blocks to be decoded in an image group may be one time or multiple times. For example, when the reference block is referenced by only one block to be decoded, the reference times of the reference block is one, and when the reference block is referenced by multiple blocks to be decoded, the reference times of the reference block is multiple.
  • since each reference motion vector corresponds to a reference block, one or more corresponding reference blocks can be obtained from one or more image frames of the video code stream according to the one or more reference motion vectors.
  • the reference times of the one or more reference blocks may be determined, that is, the number of times the one or more reference blocks are referenced by the block to be decoded or the sub-blocks of the block to be decoded is determined.
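  • As a hedged, simplified sketch of this block-level analysis (integer-pixel motion vectors, a single grid-aligned reference block per vector, and all names are assumptions made for illustration):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionVector:
    """Hypothetical parsed reference motion vector: which reference frame it points
    into and the displacement relative to the block being decoded."""
    ref_frame: int
    dx: int
    dy: int

def referenced_block(block_x: int, block_y: int, mv: MotionVector, n: int = 32):
    """Map a block to the N x N reference block its motion vector points at
    (aligned to the block grid of the reference frame)."""
    rx = ((block_x + mv.dx) // n) * n
    ry = ((block_y + mv.dy) // n) * n
    return (mv.ref_frame, rx, ry)

def count_block_reference_times(decode_list) -> Counter:
    """decode_list: iterable of (block_x, block_y, [MotionVector, ...]) for the
    blocks to be decoded; returns how often each reference block is used."""
    counts = Counter()
    for bx, by, mvs in decode_list:
        for mv in mvs:
            counts[referenced_block(bx, by, mv)] += 1
    return counts

blocks = [
    (0,  0, [MotionVector(0, 4, 4)]),
    (32, 0, [MotionVector(0, -28, 4)]),   # points into the same reference block
    (64, 0, [MotionVector(2, 0, 0)]),
]
print(count_block_reference_times(blocks))
# Counter({(0, 0, 0): 2, (2, 64, 0): 1})
```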
  • FIG. 16 is a schematic diagram of a scene of reference relationships between blocks in each image frame in an image group in a video code stream provided by an embodiment of the present application.
  • FIG. 16 is taken as an example in which one image group includes 9 image frames. In other embodiments, the number of image frames included in the image group can be adjusted according to specific requirements.
  • the display order of the picture frames may or may not be the same as the decoding order. The display order of the picture frames in the picture group as shown in FIG. 16 is different from the decoding order.
• the number of times each block in the image group is referenced by other blocks can be determined according to the direction of the arrows in FIG. 16.
• a block that is referenced by other blocks can be used as a reference block.
• the reference block in the I frame in FIG. 16 is referenced four times
• the reference block in the B frame with the display order 2 is referenced twice
  • the reference times of the reference block in the B frame with the display order 3 are
  • the reference block in the P frame with the display order 4 is referenced five times
  • the reference block in the B frame with the display order 6 is referenced twice, and so on.
• according to the preset power consumption threshold and the reference times, one or more reference blocks that need to be stored in the preset memory may be determined from the one or more reference blocks.
• the reference blocks determined to be stored in the preset memory may be reference blocks whose reference times are multiple times, that is, a reference block that is referenced multiple times by the block to be decoded or the sub-blocks of the block to be decoded is stored in the preset memory, which facilitates subsequent decoding of the block to be decoded or the sub-blocks of the block to be decoded.
• the power consumption generated by storing the reference block in the preset memory and reading the reference block from the preset memory is less than or equal to the preset power consumption threshold; by reading and writing the data through the low-power preset memory, the power consumption of the video decoding device can be reduced.
  • the number of times the reference block stored in the preset memory is referenced by the block to be decoded or the sub-blocks of the block to be decoded can be multiple times, so during decoding, the reference block stored in the preset memory will be read repeatedly.
• during decoding, the block to be decoded or the sub-blocks of the block to be decoded can refer to the image data of the reference block; that is, the block to be decoded or the sub-blocks of the block to be decoded can be decoded according to the image data of the read reference block.
• the video decoding apparatus may obtain a video code stream and obtain one or more reference motion vectors according to the video code stream; then obtain one or more corresponding reference blocks from one or more image frames of the video code stream according to the one or more reference motion vectors; determine the reference times of the one or more reference blocks; and, according to the preset power consumption threshold and the reference times of the one or more reference blocks, determine from the one or more reference blocks the reference blocks that need to be stored in the preset memory and store them in the preset memory.
• the power consumption generated by storing the reference block in the preset memory and reading the reference block from the preset memory is less than or equal to the preset power consumption threshold.
• the block to be decoded or a sub-block of the block to be decoded is then decoded according to the reference block. That is, in the embodiment of the present application, by storing the image data determined to need storage in the preset memory in the low-power preset memory, the purpose of reducing the power consumption of the video decoding device can be achieved. Therefore, the embodiments of the present application can reduce the power consumption of the video decoding apparatus.
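• A minimal sketch of this selection step is shown below; the per-read energy units, the preset-memory capacity, and the simplification that a block kept in the preset memory costs one extra fill write are assumptions made only for this illustration and do not come from the embodiment.

```python
def select_blocks_for_preset_memory(ref_counts, power_threshold,
                                    e_dram=100.0, e_sram=1.0, capacity_blocks=64):
    """Pick multi-referenced blocks for the low-power preset memory and check
    the estimated read energy against the preset threshold.

    ref_counts maps a block id to its reference times; e_dram / e_sram are
    assumed per-read energy units and capacity_blocks an assumed capacity.
    """
    # Candidates are blocks referenced more than once, most-referenced first.
    candidates = sorted((b for b, n in ref_counts.items() if n > 1),
                        key=lambda b: ref_counts[b], reverse=True)
    selected = set(candidates[:capacity_blocks])

    energy = 0.0
    for block, n in ref_counts.items():
        if block in selected:
            energy += e_sram + n * e_sram   # one fill write, then n low-power reads
        else:
            energy += n * e_dram            # every read served from DRAM
    return selected, energy, energy <= power_threshold

counts = {("f0", 8): 2, ("f1", 9): 4, ("f1", 5): 1}
selected, energy, within_budget = select_blocks_for_preset_memory(counts, power_threshold=500.0)
print(selected, energy, within_budget)   # energy is 108.0, within the 500.0 budget
```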
  • FIG. 17 is a fourth schematic flowchart of the method for performing image processing in a video decoding apparatus provided by an embodiment of the present application.
  • the method for performing image processing in a video decoding apparatus can be applied to a video decoding apparatus.
  • the flow of the method for performing image processing in a video decoding device may include:
• step 401: for the specific implementation of step 401, reference may be made to the foregoing embodiment of step 101, and details are not described herein again.
• MVD (Motion Vector Difference)
  • entropy decoding can be performed on the video code stream, such as decoding the frame header information of the image frame, the header information of the NAL unit or the slice header.
• by entropy decoding, one or more motion vector differences can be obtained, and the quantized residual can also be obtained.
  • the residual refers to the difference between the block to be coded and one or more blocks with the least coding cost.
  • performing entropy decoding on the video code stream in 402 to obtain one or more motion vector difference values may include:
  • Entropy decoding is performed on the video stream to obtain one or more motion vector differences and a quantized first residual.
• the quantized first residual refers to the first residual obtained after performing forward transformation and quantization on the residual during encoding, where the residual can be obtained as the difference between the two-dimensional pixels of the block to be encoded and the two-dimensional pixels at the corresponding position of the searched block.
• one or more reference motion vectors can be obtained according to the one or more motion vector difference values and the corresponding motion vector prediction values; for example, the motion vector difference value and the motion vector prediction value are added, and the sum is used as a reference motion vector.
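• The addition described above can be sketched as follows; the tuple representation of motion vectors is an assumption for illustration only.

```python
def reconstruct_motion_vector(mvd, mvp):
    """Reference MV = motion vector difference + motion vector predictor (component-wise)."""
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])

# A decoded MVD of (3, -1) with a predictor of (-2, 4) gives the reference MV (1, 3).
print(reconstruct_motion_vector((3, -1), (-2, 4)))   # (1, 3)
```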
• each reference motion vector corresponds to a reference block, so one or more corresponding reference blocks can be determined from one or more image frames in the video code stream according to the one or more reference motion vectors, thereby obtaining the corresponding reference blocks, where a reference block is a block that has already been decoded.
  • the reference times of one or more reference blocks may be determined, where the reference times refer to the times that the reference blocks are referenced by the blocks to be decoded or subblocks in the blocks to be decoded.
• in this way, the power consumption caused by data throughput during video decoding is kept at the expected low value as far as possible, which extends the usage time of the video decoding system and prevents the video decoding system from overheating.
  • FIG. 18 is a schematic diagram of a scene for finely analyzing the reference relationship of blocks in multiple image frames in a video code stream provided by an embodiment of the present application.
• the H.264 video code stream is used as an example and is finely analyzed. Through analysis of the slice body, it can be determined that some blocks are referenced by other blocks more times than other blocks are. For example, for the code streams corresponding to the 5 image frames in FIG. 18, a fine analysis of the reference motion vectors in these code streams determines, for each part of each image frame (such as each macroblock, also called a block), how many times it will be referenced by other blocks.
  • the reference times of block 0, block 2, and block C in image frame 0 are all once
  • the reference times of block 8 in image frame 0 is twice
• the reference times of block 6 in image frame 1 are twice
  • the reference times of block 4 in image frame 1 is three times
  • the reference times of block 9 in image frame 1 is four times
  • the reference times of block 5, block A, and block D in image frame 1 are all once
  • the reference times of block 7, block 3, and block B in image frame 2 are all one time.
• if the reference times are multiple times, one or more reference blocks with multiple reference times are stored in the system cache according to the preset power consumption threshold, and also stored in the dynamic random access memory.
  • the preset memory may include a first memory and a second memory, and the power consumption of the first memory is greater than that of the second memory.
  • the first memory may include a dynamic random access memory disposed outside the video decoding apparatus
  • the second memory may include a system cache disposed outside the video decoding apparatus.
• the DRAM bandwidth that needs to be reduced can be calculated in advance for each image frame. By pre-calculating which areas (reference blocks) in the image frame need to be stored in Sys$, the data input and output to the dynamic random access memory during subsequent motion compensation can be kept at or below a limit, where the limit is derived from how much power or energy consumption needs to be reduced.
  • the power consumption of the dynamic random access memory is greater than the power consumption of the system cache.
• for example, since block 8 in image frame 0, block 6 in image frame 1, block 4 in image frame 1, and block 9 in image frame 1 are each referenced multiple times, they are stored in the system cache and also stored in the dynamic random access memory. It should be noted that the power consumption generated by storing the one or more reference blocks with multiple reference times to the DRAM and the system cache and reading them from the DRAM and the system cache is less than or equal to the preset power consumption threshold, which reduces the power consumption when reading data.
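• A small sketch of this per-frame pre-calculation, assuming 16x16 YUV 4:2:0 blocks (384 bytes each) and an arbitrary target reduction; it only estimates how many DRAM bytes the repeat reads would avoid once the listed blocks are mapped to Sys$.

```python
def dram_bytes_saved(ref_counts, in_sys_cache, block_bytes=16 * 16 * 3 // 2):
    """Estimate the DRAM read traffic avoided for one frame.

    ref_counts maps a block id to its reference times; in_sys_cache is the set
    of block ids mapped to Sys$; block_bytes assumes a 16x16 YUV 4:2:0 block.
    A block kept in Sys$ still costs one DRAM read to fill the cache.
    """
    saved = 0
    for block, n in ref_counts.items():
        if block in in_sys_cache:
            saved += (n - 1) * block_bytes   # every repeat read now hits Sys$
    return saved

counts = {"f0_b8": 2, "f1_b6": 2, "f1_b4": 3, "f1_b9": 4, "f1_b5": 1}
target_reduction = 1000   # bytes of DRAM traffic to shed this frame (assumed)
saved = dram_bytes_saved(counts, in_sys_cache={"f0_b8", "f1_b6", "f1_b4", "f1_b9"})
print(saved, saved >= target_reduction)   # 2688 True
```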
  • the preset memory may include a second memory including a system buffer memory provided outside the video decoding apparatus.
• after the reference times of the one or more reference blocks are determined, that is, after the number of times the one or more reference blocks are referenced by the block to be decoded or the sub-blocks in the block to be decoded is determined, if the number of references is multiple times, one or more reference blocks with multiple reference times are stored in the system buffer memory according to the preset power consumption threshold.
• similarly, the DRAM bandwidth that needs to be reduced can be calculated in advance for each image frame. By pre-calculating which areas (reference blocks) in the image frame need to be stored in SysBuf, the data input and output to the first memory during subsequent motion compensation can be kept at or below a limit, where the limit is derived from how much power or energy consumption needs to be reduced.
  • If the number of times of reference is one time, store one or more reference blocks whose number of times of reference is one in the dynamic random access memory according to the preset power consumption threshold.
  • the preset memory may include a first memory, and the first memory may include a dynamic random access memory disposed outside the video decoding device.
• one or more reference blocks whose reference times are once are stored in the dynamic random access memory according to the preset power consumption threshold, so that the video decoding apparatus can read them when decoding.
  • the first memory may include a DRAM disposed outside the video decoding device
• the second memory may include a system cache or a system buffer memory disposed outside the video decoding device, that is, the second memory may include Sys$ or SysBuf disposed outside the video decoding device.
  • the second memory may also be other low-power consumption memory or the like.
  • DRAM consumes more power than Sys$ or SysBuf.
• since there are already power consumption models for data flowing in and out of the DRAM, the system bus, Sys$ and SysBuf, it is possible to calculate how much power or energy consumption is reduced for a given reduction in data accesses to the DRAM.
• according to the code stream header information of the image frame or slice, it can be determined which blocks need to be written into Sys$ or SysBuf, so that the expected power consumption reduction during decoding can be achieved.
• which blocks are suitable to be written into Sys$ or SysBuf is determined by the reference motion vector information decoded from the detailed information of the code stream. Usually, the more times a reference block is referenced by the block to be decoded or the sub-blocks in the block to be decoded, the more suitable it is to write it into Sys$ or SysBuf, and a smaller Sys$ or SysBuf occupancy can then achieve a larger reduction in decoding power consumption.
• the refined analysis method only seeks to achieve the expected power/energy consumption reduction and does not pursue the optimal power/energy consumption reduction. As long as it is calculated how much of the data flowing in and out of the reconstructed areas will be served from Sys$ or SysBuf, the corresponding power/energy saving can be calculated.
• the energy cost of reading SRAM and reading DRAM differs by roughly a factor of 100; that is, the energy of reading SRAM is much smaller than that of reading DRAM.
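• The trade-off can be quantified with a toy energy model; the per-byte energy values below only encode the roughly 100x ratio mentioned above and are not measured figures.

```python
E_DRAM_PER_BYTE = 100.0   # relative units, assumed ~100x SRAM (order of magnitude only)
E_SRAM_PER_BYTE = 1.0

def read_energy(bytes_from_dram, bytes_from_sram):
    """Relative energy for serving reads from DRAM versus Sys$/SysBuf SRAM."""
    return bytes_from_dram * E_DRAM_PER_BYTE + bytes_from_sram * E_SRAM_PER_BYTE

total = 4 * 384                     # a 384-byte block read four times
all_dram = read_energy(total, 0)    # every read served from DRAM
split = read_energy(384, 3 * 384)   # first read from DRAM, repeat reads from SRAM
print(all_dram, split, 1 - split / all_dram)   # roughly 74% of the read energy saved
```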
  • a block may include multiple sub-blocks, and the multiple sub-blocks are arranged in a rectangular array.
• if the reference times of the reference block that the block to be decoded or a sub-block in the block to be decoded needs to reference is once, the image data of the reference block is directly read from the first memory; if the reference times of that reference block is multiple times, it can be read from the first memory once and from Sys$ the other times, or, alternatively, when the reference times is multiple times it can be read from SysBuf.
• the number of reads from Sys$ can be greater than, smaller than, or equal to the number of reads from DRAM; how many reads are served from DRAM and from Sys$, respectively, needs to be set according to the specific scenario, and the embodiment of this application does not impose specific restrictions on this. Therefore, by reading part of the image data of the reference block from Sys$ and the other part from DRAM, the power consumption of reading data can be reduced.
  • the video decoding device replaces a large amount of DRAM power consumption with the power consumption of Sys$ or SysBuf, which greatly reduces the power consumption.
  • decoding the block to be decoded or the sub-block in the block to be decoded according to the read image data of the reference block in 409 may include:
• obtaining the decoded block of the block to be decoded, or the decoded sub-block of the sub-block in the block to be decoded, according to the second residual and the predicted value of the block to be decoded or the sub-block in the block to be decoded.
  • decoding the block to be decoded or a sub-block in the block to be decoded according to the read image data of the reference block in 409 may further include:
  • the video stream decoded data is acquired according to the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded.
  • FIG. 19 is a schematic diagram of a scene decoded by a video decoding apparatus provided by an embodiment of the present application.
  • the video code stream is described by taking the H.264 video code stream as an example.
• after entropy decoding is performed on the video code stream, one or more motion vector difference values and the quantized first residual are obtained.
  • the reference motion vector can be obtained, so that the reference block used for motion compensation can be known more precisely.
  • the entropy decoding can be implemented by an independent hardware design, and can also be implemented by software.
  • the parsing of the current video stream and the image buffering can be implemented in software through a driver or Open Media Acceleration (OpenMAX).
  • the second residual can be obtained.
  • the predicted value of the block to be decoded or the subblock in the block to be decoded can be obtained.
  • the predicted value of the block to be decoded or the sub-block in the block to be decoded may be obtained through an intra-frame prediction method or a motion compensation method.
  • Inverse quantization and inverse transformation, intra/inter mode selection, intra prediction, motion compensation, and deblocking filtering in the decoding process can be implemented by an application specific integrated circuit (ASIC).
• after obtaining the predicted value of the block to be decoded or the sub-block in the block to be decoded, the second residual is added to the predicted value to obtain the decoded block of the block to be decoded or the decoded sub-block (actual value) of the sub-block in the block to be decoded, which can then be smoothed by deblocking filtering.
• the video stream decoded data can then be obtained according to the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded.
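• A minimal sketch of this reconstruction step on a 2x2 block of 8-bit samples; the clipping range and the omission of deblocking filtering are simplifications for illustration.

```python
def clip8(v):
    """Clip a sample to the 8-bit range."""
    return max(0, min(255, v))

def reconstruct_block(prediction, second_residual):
    """Decoded block = clip(prediction + second residual), element-wise.
    Deblocking filtering of the result is omitted in this sketch."""
    return [[clip8(p + r) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, second_residual)]

pred = [[120, 130], [140, 150]]    # predicted values (intra prediction or motion compensation)
resid = [[-5, 10], [200, -160]]    # second residual after inverse quantization and transform
print(reconstruct_block(pred, resid))   # [[115, 140], [255, 0]]
```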
• if the decoded block of the block to be decoded will subsequently be referenced multiple times by other blocks to be decoded, or the decoded sub-block of the sub-block in the block to be decoded will subsequently
• be referenced multiple times by sub-blocks in other blocks to be decoded, the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded is stored in the system cache and also stored in the dynamic random access memory, so that it can be used as a reference block when other blocks to be decoded or sub-blocks in the blocks to be decoded are decoded.
• similarly, if the decoded block of the block to be decoded will subsequently be referenced multiple times by other blocks to be decoded, or the decoded sub-block of the sub-block in the block to be decoded will subsequently
• be referenced multiple times by sub-blocks in other blocks to be decoded, the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded is stored in the system buffer memory, to be used as a reference block when other blocks to be decoded or sub-blocks in the blocks to be decoded are decoded.
• if the decoded block of the to-be-decoded block will subsequently be referenced once by other blocks to be decoded, or the decoded sub-block of the sub-block in the to-be-decoded block will subsequently be referenced once by sub-blocks in other blocks to be decoded, the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded is stored in the dynamic random access memory, to be used as a reference block when other blocks to be decoded or sub-blocks in the blocks to be decoded are decoded.
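• The placement rule described in the last few paragraphs can be sketched as follows, with the three memories modelled as plain dictionaries; the use_sysbuf switch between the Sys$ and SysBuf variants and the block identifiers are assumptions for illustration.

```python
def place_decoded_block(block_id, future_ref_times, sys_cache, sysbuf, dram,
                        use_sysbuf=False):
    """Store a newly decoded block according to how often it will be referenced later.

    Multi-referenced blocks go to Sys$ (mirrored in DRAM) or, in the SysBuf
    variant, to SysBuf; once-referenced blocks go to DRAM only. The three
    stores are modelled as plain dictionaries here.
    """
    if future_ref_times > 1:
        if use_sysbuf:
            sysbuf[block_id] = "decoded pixels"
        else:
            sys_cache[block_id] = "decoded pixels"
            dram[block_id] = "decoded pixels"   # Sys$ copy is also kept in DRAM
    else:
        dram[block_id] = "decoded pixels"

sys_cache, sysbuf, dram = {}, {}, {}
place_decoded_block("f1_b9", future_ref_times=4, sys_cache=sys_cache, sysbuf=sysbuf, dram=dram)
place_decoded_block("f1_b5", future_ref_times=1, sys_cache=sys_cache, sysbuf=sysbuf, dram=dram)
print(sorted(sys_cache), sorted(dram))   # ['f1_b9'] ['f1_b5', 'f1_b9']
```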
• then, the other blocks to be decoded, or the sub-blocks of the other blocks to be decoded, in the image frame to be decoded, the slice to be decoded or the area to be decoded are decoded. If all blocks to be decoded, or all sub-blocks of the blocks to be decoded, in the image frame, slice or area to be decoded have been decoded, other image frames, slices or areas are decoded, until all image frames, slices or areas that need to be decoded have been decoded.
• the flow of the method for performing image processing in a video decoding apparatus in FIG. 7 and the flow of the method for performing image processing in a video decoding apparatus in FIG. 17 may be combined into one system, and they do not interfere with each other; that is, when decoding a video code stream, the system may switch from one flow to the other midway, and a reasonable switching point is when a new image frame or slice starts to be decoded.
  • the embodiments of the present application are based on predictable data access behavior (ie, repeated reading behavior) during video decoding, so as to realize intelligent selection of a data storage mode to reduce power consumption of the video decoding apparatus.
• the image data storage location can be changed according to the frame reference relationship during decoding, so that more of the repeated reads of reference image frames are served from low-power memory such as Sys$ or SysBuf, which appropriately reduces power consumption and ensures that
• the power consumption of data flowing into and out of the video decoding device remains at the expected level. If low-power memory such as Sys$ also offers high-speed bandwidth, the bandwidth demand on the DRAM can be further reduced.
• the embodiment of the present application can ensure that the power consumption of the video decoding device is controllable, allow the hardware or software of the video decoding device to complete decoding as soon as possible, and make full use of the predictable behavior that the video decoding device will repeatedly read the reference block many times to change
• the storage characteristics of the repeatedly read data, so that accessing the data saves power and moving data in and out does not cause a power consumption bottleneck, allowing the video decoding device to maintain its running speed while reducing power consumption.
  • the speed of reading data is not limited by power consumption, so the video decoding device does not overheat.
  • the SRAM in Sys$ or SysBuf has low latency when reading and writing, which can improve the processing frame rate and reduce the response latency. Since the power consumption can be greatly reduced, the usage time of the battery in the video decoding device can be increased, and the user experience can be improved.
  • the target position or attribute of data reading can be selected according to the long-term playback requirement of the playback device and the relatively large power consumption caused by the predictable behavior.
• the data that needs to be read repeatedly is read from Sys$ and DRAM, or from SysBuf, instead of being read entirely from DRAM. For reading the same data, SRAM consumes much less power than DRAM. Therefore, the embodiment of the present application can greatly reduce the power consumption when reading data.
  • the embodiments of the present application describe in detail how to reduce the power consumption of reading data by taking video decoding as an example.
  • it can also be applied to all modules and applications that require high bandwidth but predictable data access behavior, such as video encoding devices, frame rate up conversion devices, etc.
  • the behavior of these modules and applications is usually predictable, such as the number of repeated reads.
• the corresponding storage characteristics can be pre-allocated, that is, the repeatedly read data is stored in low-power memory, such as Sys$ or SysBuf.
• memory of different levels, and thus the corresponding energy consumption, is selected according to the number of times the image data of all or part of the image frames is accessed.
  • the number of times of reading data from Sys$ and DRAM can be reasonably allocated, or the number of times of reading data from SysBuf can be reasonably allocated.
  • the video encoding device can also determine the behavior of accessing data by analyzing the video code stream in advance, and the frame rate increasing device can know which areas will be used multiple times during processing through simple analysis, and so on. It can also be applied to fixed artificial intelligence (AI) network behavior.
  • the repeated reading part of AI network behavior is the feature map part, and the AI network behavior is predictable.
  • the image processing apparatus 500 may include: an acquisition module 501 , a first determination module 502 , a second determination module 503 , a third determination module 504 , and a decoding module 505 .
  • an acquisition module 501 configured to acquire a video stream
  • a first determining module 502 configured to determine one or more reference positions from the video code stream
  • a second determining module 503, configured to determine the reference times of the one or more reference positions
• the third determination module 504 is configured to determine, from the one or more reference positions according to the preset power consumption threshold and the reference times of the one or more reference positions, a reference position that needs to be stored in the preset memory and store it in the preset memory, where the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold; and
  • the decoding module 505 is configured to decode the object to be decoded according to the reference position.
  • the reference position includes a reference image frame, a reference strip or a reference area
  • the first determining module 502 may be used to:
• one or more reference image frames, reference slices or reference areas are determined from the video code stream.
  • the reference position includes a reference image frame, a reference strip or a reference area
  • the second determining module 503 may be used to:
  • the reference times of the one or more reference image frames, reference slices or reference regions are determined by preset parameters, where the preset parameters include any one or more of the following: network abstraction layer parsing parameters, slice header parsing parameter, reference picture list correction parameter and reference picture frame marker parameter.
  • the preset memory includes a first memory and a second memory, and the power consumption of the first memory is greater than the power consumption of the second memory.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device
  • the second memory includes a system cache disposed outside the video decoding device
  • the reference location includes a reference image frame, reference strip or reference area
  • the third determining module 504 may be used to:
• one or more reference image frames, reference slices or reference regions with multiple times of reference are stored in the system cache according to the preset power consumption threshold, and also stored in the dynamic random access memory.
  • the second memory includes a system buffer memory provided outside the video decoding device
  • the reference position includes a reference image frame, a reference slice or a reference area
• the third determining module 504 may be used to:
  • one or more reference image frames, reference slices or reference regions of which the number of times of reference is multiple are stored in the system buffer memory according to the preset power consumption threshold.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device, and the reference position includes a reference image frame, a reference slice or a reference area, and the third determining module 504 may Used for:
  • one or more reference image frames, reference slices or reference regions for which the number of references is one time are stored in the dynamic random access memory according to the preset power consumption threshold.
• the object to be decoded includes a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area, and the decoding module 505 may be used to:
• read the required image data of the reference image frame, reference slice or reference area from the preset memory; if the image data of the reference image frame is read, decode the to-be-decoded block in the to-be-decoded image frame according to the read image data of the reference image frame; if the image data of the reference slice is read, decode the to-be-decoded block in the to-be-decoded slice according to the read image data of the reference slice;
• if the image data of the reference area is read, decode the to-be-decoded block in the to-be-decoded area according to the read image data of the reference area;
• if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is multiple times, store the decoded block of the to-be-decoded block in the system cache, and also store it in the dynamic random access memory.
• the object to be decoded includes a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area, and the decoding module 505 may be used to:
• read the required image data of the reference image frame, reference slice or reference area from the preset memory; if the image data of the reference image frame is read, decode the to-be-decoded block in the to-be-decoded image frame according to the read image data of the reference image frame; if the image data of the reference slice is read, decode the to-be-decoded block in the to-be-decoded slice according to the read image data of the reference slice;
• if the image data of the reference area is read, decode the to-be-decoded block in the to-be-decoded area according to the read image data of the reference area;
• if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is multiple times, store the decoded block of the to-be-decoded block in the system buffer memory.
• the object to be decoded includes a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area, and the decoding module 505 may be used to:
• read the required image data of the reference image frame, reference slice or reference area from the preset memory; if the image data of the reference image frame is read, decode the to-be-decoded block in the to-be-decoded image frame according to the read image data of the reference image frame; if the image data of the reference slice is read, decode the to-be-decoded block in the to-be-decoded slice according to the read image data of the reference slice;
• if the image data of the reference area is read, decode the to-be-decoded block in the to-be-decoded area according to the read image data of the reference area;
• if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is once, store the decoded block of the to-be-decoded block in the dynamic random access memory.
  • the image processing apparatus 600 may include: a first obtaining module 601 , a second obtaining module 602 , a third obtaining module 603 , a first determining module 604 , a second determining module 605 , and a decoding module 606 .
• the first obtaining module 601 is configured to obtain a video code stream;
• a second obtaining module 602, configured to obtain one or more reference motion vectors according to the video code stream;
  • a third obtaining module 603, configured to obtain one or more corresponding reference blocks from one or more image frames of the video code stream according to the one or more reference motion vectors;
  • a first determining module 604 configured to determine the reference times of the one or more reference blocks
• the second determination module 605 is configured to determine, from the one or more reference blocks according to the preset power consumption threshold and the reference times of the one or more reference blocks, one or more reference blocks that need to be stored in the preset memory and store them in the preset memory, where the power consumption generated by storing the reference blocks in the preset memory and reading the reference blocks from the preset memory is less than or equal to the preset power consumption threshold;
  • a decoding module 606 configured to decode a block to be decoded or a sub-block of the block to be decoded according to the reference block.
  • the second obtaining module 602 may be used to:
  • Entropy decoding is performed on the video code stream to obtain one or more motion vector differences
  • the one or more reference motion vectors are obtained according to the one or more motion vector differences and the corresponding motion vector predictors.
  • the preset memory includes a first memory and a second memory, and the power consumption of the first memory is greater than the power consumption of the second memory.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device
  • the second memory includes a system cache disposed outside the video decoding device
• the second determining module 605 may be used to:
• store one or more reference blocks with multiple reference times in the system cache according to the preset power consumption threshold, and also store them in the dynamic random access memory.
  • the second memory includes a system buffer memory provided outside the video decoding device, and the second determining module 605 can be used for:
  • one or more reference blocks whose reference times are multiple times are stored in the system buffer memory according to the preset power consumption threshold.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device, and the second determining module 605 may be used for:
  • one or more reference blocks whose reference number is one time are stored in the dynamic random access memory according to the preset power consumption threshold.
  • the decoding module 606 may be used to:
• If the reference times of the to-be-decoded block or the sub-block in the to-be-decoded block is multiple times, store the decoded block of the to-be-decoded block or the decoded sub-block of the sub-block in the to-be-decoded block in the system cache, and also store it in the dynamic random access memory.
  • the decoding module 606 may be used to:
• If the reference times of the to-be-decoded block or the sub-block in the to-be-decoded block is multiple times, store the decoded block of the to-be-decoded block or the decoded sub-block of the sub-block in the to-be-decoded block in the system buffer memory.
  • the decoding module 606 may be used to:
• If the reference times of the block to be decoded or the sub-block in the block to be decoded is once, store the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded in the dynamic random access memory.
  • the second obtaining module 602 may be used to:
  • Entropy decoding is performed on the current video code stream to obtain one or more motion vector differences and the quantized first residual;
  • the decoding module 606 can be used to:
• obtain the decoded block of the block to be decoded, or the decoded sub-block of the sub-block in the block to be decoded, according to the second residual and the predicted value of the block to be decoded or the sub-block in the block to be decoded.
  • the decoding module 606 may be used to:
  • the video stream decoded data is acquired according to the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded.
  • the predicted value of the block to be decoded or a sub-block in the block to be decoded is obtained by using an intra-frame prediction method or a motion compensation method.
• An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed on a computer, the computer is caused to execute the flow in the method for performing image processing in a video decoding apparatus provided in this embodiment.
  • An embodiment of the present application further provides an electronic device, including a memory, a processor, and a video decoding apparatus.
• the processor is configured to execute, by calling the computer program stored in the memory, the flow in the method for performing image processing in a video decoding apparatus provided in this embodiment.
  • the above-mentioned electronic device may be a mobile terminal such as a tablet computer or a smart phone.
  • FIG. 22 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 700 may include a video decoding apparatus 701, a memory 702, a processor 703 and other components.
• the structure shown in FIG. 22 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the video decoding device 701 can be used to decode the encoded video image to restore the original video image.
  • Memory 702 may be used to store applications and data.
  • the application program stored in the memory 702 contains executable code.
  • Applications can be composed of various functional modules.
  • the processor 703 executes various functional applications and data processing by executing the application programs stored in the memory 702 .
• the processor 703 is the control center of the electronic device; it uses various interfaces and lines to connect the various parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing the application programs stored in the memory 702 and calling the data stored in the memory 702, thereby monitoring the electronic device as a whole.
• the processor 703 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 702 according to the following instructions, and the processor 703 runs the application programs stored in the memory 702, thereby executing:
  • a reference position that needs to be stored in the preset memory is determined from the one or more reference positions, and stored in the preset memory.
  • the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold;
• one or more reference blocks that need to be stored in the preset memory are determined from the one or more reference blocks and stored in the preset memory, where the power consumption generated by storing the reference block in the preset memory and reading the reference block from the preset memory is less than or equal to the preset power consumption threshold; and
  • the block to be decoded or a sub-block of the block to be decoded is decoded according to the reference block.
  • the electronic device 700 may include components such as a video decoding apparatus 701 , a memory 702 , a processor 703 , a battery 704 , an input unit 705 , and an output unit 706 .
  • the video decoding device 701 can be used to decode the encoded video image to restore the original video image.
  • Memory 702 may be used to store applications and data.
  • the application program stored in the memory 702 contains executable code.
  • Applications can be composed of various functional modules.
  • the processor 703 executes various functional applications and data processing by executing the application programs stored in the memory 702 .
• the processor 703 is the control center of the electronic device; it uses various interfaces and lines to connect the various parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing the application programs stored in the memory 702 and calling the data stored in the memory 702, thereby monitoring the electronic device as a whole.
  • the battery 704 can be used to provide electrical support for various components of the electronic device, thereby ensuring the normal operation of the various components.
  • the input unit 705 can be used to receive an encoded input video stream of video images, for example, can be used to receive a video stream that needs to be decoded.
  • the output unit 706 may be used to output the decoded video stream.
• the processor 703 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 702 according to the following instructions, and the processor 703 runs the application programs stored in the memory 702, thereby executing:
  • a reference position that needs to be stored in the preset memory is determined from the one or more reference positions, and stored in the preset memory.
  • the power consumption generated by storing the reference position in the preset memory and reading the reference position from the preset memory is less than or equal to the preset power consumption threshold;
• one or more reference blocks that need to be stored in the preset memory are determined from the one or more reference blocks and stored in the preset memory, where the power consumption generated by storing the reference block in the preset memory and reading the reference block from the preset memory is less than or equal to the preset power consumption threshold; and
  • the block to be decoded or a sub-block of the block to be decoded is decoded according to the reference block.
  • the reference position includes a reference image frame, a reference slice or a reference area
• the processor 703 may further execute: determining the one or more reference image frames, reference slices or reference areas from the video code stream according to the frame header information of the image frame in the video code stream or the slice header information of one or more slices in the image frame.
  • the reference position includes a reference image frame, a reference strip or a reference area
• the processor 703 may further execute: determining the reference times of the one or more reference image frames, reference slices or reference areas by preset parameters, where the preset parameters include any one or more of the following: network abstraction layer parsing parameters, slice header parsing parameters, reference picture list correction parameters and reference picture frame marker parameters.
  • the preset memory includes a first memory and a second memory, and the power consumption of the first memory is greater than the power consumption of the second memory.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device
  • the second memory includes a system cache disposed outside the video decoding device
  • the reference location includes a reference image frame, reference strip or reference area
• when the processor 703 executes the determining, from the one or more reference positions according to the preset power consumption threshold and the reference times of the one or more reference positions, of a reference position that needs to be stored in the preset memory and the storing of it in the preset memory, it may further execute: if the reference times are multiple times,
• storing one or more reference image frames, reference slices or reference areas whose reference times are multiple times in the system cache according to the preset power consumption threshold, and also storing them in the dynamic random access memory.
  • the second memory includes a system buffer memory provided outside the video decoding device, and the reference position includes a reference image frame, a reference slice or a reference area;
• when the processor 703 executes the determining, from the one or more reference positions according to the preset power consumption threshold and the reference times of the one or more reference positions, of a reference position that needs to be stored in the preset memory and the storing of it in the preset memory,
• it may further execute: if the reference times are multiple times, storing one or more reference image frames, reference slices or reference areas whose reference times are multiple times in the system buffer memory according to the preset power consumption threshold.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device, and the reference position includes a reference image frame, a reference slice or a reference area;
• when the processor 703 executes the determining, from the one or more reference positions according to the preset power consumption threshold and the reference times of the one or more reference positions, of a reference position that needs to be stored in the preset memory and the storing of it in the preset memory,
• it may further execute: if the reference times are once, storing one or more reference image frames, reference slices or reference areas whose reference times are once in the dynamic random access memory according to the preset power consumption threshold.
  • the object to be decoded includes a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area
• when the processor 703 executes the decoding of the object to be decoded according to the reference position, it may further execute: reading the required image data of the reference image frame, reference slice or reference area from the preset memory; if the image data of the reference image frame is read, decoding the to-be-decoded block in the to-be-decoded image frame according to the read image data of the reference image frame; if the image data of the reference slice is read, decoding the to-be-decoded block in the to-be-decoded slice according to the read image data of the reference slice;
• if the image data of the reference area is read, decoding the to-be-decoded block in the to-be-decoded area according to the read image data of the reference area;
• if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is multiple times, storing the decoded block of the to-be-decoded block in the system cache, and also storing it in the dynamic random access memory.
  • the object to be decoded includes a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area
• when the processor 703 executes the decoding of the object to be decoded according to the reference position, it may further execute: reading the required image data of the reference image frame, reference slice or reference area from the preset memory; if the image data of the reference image frame is read, decoding the to-be-decoded block in the to-be-decoded image frame according to the read image data of the reference image frame; if the image data of the reference slice is read, decoding the to-be-decoded block in the to-be-decoded slice according to the read image data of the reference slice;
• if the image data of the reference area is read, decoding the to-be-decoded block in the to-be-decoded area according to the read image data of the reference area;
• if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is multiple times, storing the decoded block of the to-be-decoded block in the system buffer memory.
  • the object to be decoded includes a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, or a to-be-decoded block in a to-be-decoded area
• when the processor 703 executes the decoding of the object to be decoded according to the reference position, it may further execute: reading the required image data of the reference image frame, reference slice or reference area from the preset memory; if the image data of the reference image frame is read, decoding the to-be-decoded block in the to-be-decoded image frame according to the read image data of the reference image frame; if the image data of the reference slice is read, decoding the to-be-decoded block in the to-be-decoded slice according to the read image data of the reference slice;
• if the image data of the reference area is read, decoding the to-be-decoded block in the to-be-decoded area according to the read image data of the reference area;
• if the reference times of the to-be-decoded image frame, the to-be-decoded slice or the to-be-decoded area is once, storing the decoded block of the to-be-decoded block in the dynamic random access memory.
• when the processor 703 performs the obtaining of one or more reference motion vectors according to the video code stream, the processor 703 may further execute: performing entropy decoding on the video code stream to obtain one or more motion vector difference values; and obtaining the one or more reference motion vectors according to the one or more motion vector difference values and the corresponding motion vector prediction values.
  • the preset memory includes a first memory and a second memory, and the power consumption of the first memory is greater than the power consumption of the second memory.
  • the first memory includes a dynamic random access memory disposed outside the video decoding device
  • the second memory includes a system cache disposed outside the video decoding device
• when the processor 703 executes the determining, from the one or more reference blocks according to the preset power consumption threshold and the reference times of the one or more reference blocks, of the one or more reference blocks that need to be stored in the preset memory, it may further execute: if the reference times are multiple times, storing one or more reference blocks with multiple reference times in the system cache according to the preset power consumption threshold, and also storing them in the dynamic random access memory.
• the second memory includes a system buffer memory provided outside the video decoding device; when the processor 703 executes the determining, from the one or more reference blocks according to the preset power consumption threshold and the reference times of the one or more reference blocks, of the one or more reference blocks that need to be stored in the preset memory, it may further execute: if the reference times are multiple times,
• storing one or more reference blocks that are referenced multiple times in the system buffer memory according to the preset power consumption threshold.
• the first memory includes a dynamic random access memory disposed outside the video decoding device; when the processor 703 executes the determining, from the one or more reference blocks according to the preset power consumption threshold and the reference times of the one or more reference blocks, of the one or more reference blocks that need to be stored in the preset memory, it may further execute: if the reference times are once,
• storing one or more reference blocks that are referenced once in the dynamic random access memory according to the preset power consumption threshold.
• when the processor 703 executes the decoding of the block to be decoded or the sub-block of the block to be decoded according to the reference block, the processor 703 may further execute: reading the required image data of the reference block from the preset memory, and decoding the block to be decoded or the sub-block in the block to be decoded according to the read image data of the reference block; and, if the reference times of the block to be decoded or the sub-block in the block to be decoded is multiple times, storing the decoded block of the to-be-decoded block or the decoded sub-block of the sub-block in the to-be-decoded block in the system cache, and also storing it in the dynamic random access memory.
• when the processor 703 performs the decoding of the block to be decoded or a sub-block in the block to be decoded according to the reference block, the processor 703 may further execute: reading the required image data of the reference block from the preset memory, and decoding the block to be decoded or the sub-block in the block to be decoded according to the read image data of the reference block; and, if the reference times of the block to be decoded or the sub-block in the block to be decoded is multiple times, storing the decoded block of the to-be-decoded block or the decoded sub-block of the sub-block in the to-be-decoded block in the system buffer memory.
• when the processor 703 performs the decoding of the block to be decoded or a sub-block in the block to be decoded according to the reference block, the processor 703 may further execute: reading the required image data of the reference block from the preset memory, and decoding the block to be decoded or the sub-block in the block to be decoded according to the read image data of the reference block; and, if the reference times of the block to be decoded or the sub-block in the block to be decoded is once, storing the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded in the dynamic random access memory.
• when the processor 703 performs the entropy decoding of the video code stream to obtain one or more motion vector difference values, the processor 703 may further execute: performing entropy decoding on the video code stream to obtain one or more motion vector difference values and the quantized first residual.
• the processor 703 may further execute: performing inverse quantization and inverse transformation on the first residual to obtain a second residual; obtaining the predicted value of the block to be decoded or the sub-block in the block to be decoded according to the reference motion vector and the reference block; and obtaining the decoded block of the to-be-decoded block or the decoded sub-block of the sub-block in the to-be-decoded block according to the second residual and the predicted value.
• when the processor 703 performs the decoding of the block to be decoded or a sub-block in the block to be decoded according to the read image data of the reference block, the processor 703 may further execute: obtaining the decoded data of the video code stream according to the decoded block of the block to be decoded or the decoded sub-block of the sub-block in the block to be decoded.
  • the predicted value of the block to be decoded or a sub-block in the block to be decoded is obtained by an intra-frame prediction method or a motion compensation method.
  • FIG. 24 is a schematic structural diagram of the image processing system provided by the embodiment of the present application.
  • FIG. 25 is another schematic structural diagram of an image processing system provided by an embodiment of the present application.
• the image processing system 800 includes a video decoding device 801, a first memory 802 and a second memory 803, where the power consumption of the first memory 802 is greater than the power consumption of the second memory 803; for example, the first memory 802 may be a DRAM, and the second memory 803 may be Sys$ or SysBuf. The first memory 802 stores reference positions whose reference times are once, or stores reference positions whose reference times are once together with reference positions whose reference times are multiple times, and the second memory 803 stores reference positions whose reference times are multiple times.
  • the number of references is the number of times that the reference position is referenced by the object to be decoded.
  • the reference locations may include reference image frames, reference slices, reference regions, or reference blocks.
• the first memory 802 may store a reference image frame, a reference slice, a reference area or a reference block whose reference times are once, or store reference image frames, reference slices, reference areas or reference blocks whose reference times are once or multiple times; that is, the first memory 802 can store reference image frames, reference slices, reference areas or reference blocks with one or more reference times, while the second memory 803 stores reference image frames, reference slices, reference areas or reference blocks with multiple reference times.
• when the video decoding device 801 is decoding, it reads reference positions whose reference times are once from the first memory 802, reads reference positions whose reference times are multiple times from the second memory 803, and decodes the object to be decoded according to the reference positions.
  • the to-be-decoded object may include a to-be-decoded block in a to-be-decoded image frame, a to-be-decoded block in a to-be-decoded slice, a to-be-decoded block in a to-be-decoded area, or a sub-block of the to-be-decoded block.
  • the video decoding apparatus 801 when it is decoding, it may read a reference image frame, a reference slice or a reference block from the first memory 802 with a reference number of times, and read from the second memory 803 with a reference number of times a number of times.
  • the block to be decoded in the image frame to be decoded is decoded according to the reference image frame, the block to be decoded in the slice to be decoded is decoded according to the reference slice, the block to be decoded in the area to be decoded is decoded according to the reference area, or the block to be decoded or a sub-block of the block to be decoded is decoded according to the reference block.
  • the second memory 803 may be Sys$.
  • when the reference image frame, reference slice, reference area or reference block to be referenced is referenced only once, its image data is read directly from the first memory 802; if the reference image frame, reference slice, reference area or reference block needs to be referenced multiple times, it may be read from the first memory 802 once and read from Sys$ the remaining times.
  • the number of reads from Sys$ may be greater than, less than or equal to the number of reads from the first memory 802; how many reads come from the first memory 802 and how many from Sys$ should be set according to the specific scenario, and this embodiment of the present application does not place a specific restriction on this.
  • the second memory 803 may be a SysBuf.
  • when the reference image frame, reference slice, reference area or reference block to be referenced is referenced only once, its image data is read directly from the first memory 802; if the reference image frame, reference slice, reference area or reference block needs to be referenced multiple times, its image data can be read from SysBuf.
  • the image processing apparatus provided in the embodiments of the present application and the method for performing image processing in a video decoding apparatus in the above embodiments belong to the same concept; any of the methods provided in the method embodiments may run on the image processing apparatus, and for the specific implementation process reference may be made to the embodiments of the method for performing image processing in a video decoding apparatus, which is not repeated here.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), and the like.
  • each functional module may be integrated into one processing chip, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If an integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

一种在视频解码装置中进行图像处理的方法、装置、存储介质、电子设备及系统。该方法包括:获取视频码流;根据预设功耗阈值及从视频码流中确定的一个或多个参考位置的参考次数,从一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在预设存储器中,向预设存储器中存储参考位置及从预设存储器中读取参考位置产生的功耗小于或等于预设功耗阈值;根据参考位置对待解码对象进行解码。

Description

在视频解码装置中进行图像处理的方法、装置及系统
本申请要求于2021年4月1日提交中国专利局、申请号为202110357601.8、申请名称为“在视频解码装置中进行图像处理的方法、装置及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于电子设备技术领域,尤其涉及一种在视频解码装置中进行图像处理的方法、装置、存储介质、电子设备及系统。
背景技术
随着技术的不断发展,视频解码装置的功能越来越强大。视频解码装置可以对视频图像进行解码。在对一帧视频图像进行解码时,通常会参考多帧已解码视频图像的数据。
发明内容
本申请实施例提供一种在视频解码装置中进行图像处理的方法、装置、存储介质及电子设备,可以降低视频解码装置的功耗。
第一方面,本申请实施例提供一种在视频解码装置中进行图像处理的方法,所述方法包括:
获取视频码流;
从所述视频码流中确定一个或多个参考位置;
确定所述一个或多个参考位置的参考次数;
根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值;以及
根据所述参考位置,对待解码对象进行解码。
第二方面,本申请实施例提供一种在视频解码装置中进行图像处理的方法,所述方法包括:
获取视频码流;
根据所述视频码流获取一个或多个参考运动矢量(Motion Vector,MV);
根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
确定所述一个或多个参考块的参考次数;
根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值;以及
根据所述参考块对待解码块或所述待解码块的子块进行解码。
第三方面，本申请实施例提供一种在视频解码装置中进行图像处理的装置，所述装置包括：
获取模块,用于获取视频码流;
第一确定模块,用于从所述视频码流中确定一个或多个参考位置;
第二确定模块,用于确定所述一个或多个参考位置的参考次数;
第三确定模块，用于根据预设功耗阈值以及所述一个或多个参考位置的参考次数，从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置，并将其存储在所述预设存储器中，向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值；以及
解码模块,用于根据所述参考位置对待解码对象进行解码。
第四方面,本申请实施例提供一种图像处理装置,所述装置包括:
第一获取模块,用于获取视频码流;
第二获取模块,用于根据所述视频码流获取一个或多个参考运动矢量;
第三获取模块,用于根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
第一确定模块,用于确定所述一个或多个参考块的参考次数;
第二确定模块，用于根据预设功耗阈值以及所述一个或多个参考块的参考次数，从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块，并将其存储在所述预设存储器中，向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值；以及
解码模块,用于根据所述参考块对待解码块或所述待解码块的子块进行解码。
第五方面,本申请实施例提供一种存储介质,其上存储有计算机程序,当所述计算机程序在计算机上执行时,使得所述计算机执行本申请实施例提供的在视频解码装置中进行图像处理的方法。
第六方面,本申请实施例还提供一种电子设备,包括存储器,处理器以及视频解码装置,所述处理器通过调用所述存储器中存储的计算机程序,用于执行本申请实施例提供的在视频解码装置中进行图像处理的方法。
第七方面,本申请实施例还提供一种图像处理系统,包括视频解码装置、第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗,所述第一存储器中存储参考次数为一次的参考位置,或者存储参考次数为一次和多次的参考位置,所述第二存储器中存储参考次数为多次的参考位置,所述视频解码装置在解码时,从所述第一存储器中读取参考次数为一次的参考位置,以及从所述第二存储器读取参考次数为多次的参考位置,根据所述参考位置对待解码对象进行解码。
附图说明
下面结合附图,通过对本申请的具体实施方式详细描述,将使本申请的技术方案及其有益效果显而易见。
图1是本申请实施例提供的在视频解码装置中进行图像处理的方法的第一种流程示意图。
图2是相关技术中视频解码系统的结构示意图。
图3是相关技术中视频解码装置中数据存储的示意图。
图4是相关技术中增加动态随机存取内存(Dynamic Random Access Memory,DRAM)的通道(channel)数量进行数据存取的示意图。
图5是相关技术中在进行视频解码时从多通道DRAM读写数据时的功耗曲线示意图。
图6是本申请实施例提供的当前视频码流中一个图像群组中各图像帧之间参考关系的场景示意图。
图7是本申请实施例提供的在视频解码装置中进行图像处理的方法的第二种流程示意图。
图8是本申请实施例提供的静态随机存取存储器(Static Random-Access Memory,SRAM)与动态随机存取内存在读取数据时所消耗的能量的对比示意图。
图9是本申请实施例提供的将多次被参考的图像数据存储在系统高速缓存(system cache,Sys$)或系统缓冲存储器(System Buffer,SysBuf)的场景示意图。
图10是本申请实施例提供的粗略分析视频码流中多个图像帧的参考关系的场景示意图。
图11是本申请实施例提供的使用系统高速缓存的视频解码系统的一种架构示意图。
图12是本申请实施例提供的使用系统高速缓存的视频解码系统的另一种架构示意图。
图13是本申请实施例提供的使用系统缓冲存储器的视频解码系统的架构示意图。
图14是本申请实施例提供的从Sys$或SysBuf读写数据时的功耗曲线示意图。
图15是本申请实施例提供的在视频解码装置中进行图像处理的方法的第三种流程示意图。
图16是本申请实施例提供的视频码流中一个图像群组中各图像帧中块之间参考关系的场景示意图。
图17是本申请实施例提供的在视频解码装置中进行图像处理的方法的第四种流程示意图。
图18是本申请实施例提供的精细分析视频码流中多个图像帧中块的参考关系的场景示意图。
图19是本申请实施例提供的视频解码装置解码的场景示意图。
图20是本申请实施例提供的图像处理装置的结构示意图。
图21是本申请实施例提供的图像处理装置的另一结构示意图。
图22是本申请实施例提供的电子设备的结构示意图。
图23是本申请实施例提供的电子设备的另一结构示意图。
图24是本申请实施例提供的图像处理系统的结构示意图。
图25是本申请实施例提供的图像处理系统的另一结构示意图。
具体实施方式
请参照图示,其中相同的组件符号代表相同的组件,本申请的原理是以实施在一适当的运算环境中来举例说明。以下的说明是基于所例示的本申请具体实施例,其不应被视为限制本申请未在此详述的其它具体实施例。
请参阅图1,图1是本申请实施例提供的在视频解码装置中进行图像处理的方法的第一种流程示意图。该在视频解码装置中进行图像处理的方法可以应用于视频解码装置中。该在视频解码装置中进行图像处理的方法的流程可以包括:
101、获取视频码流。
随着技术的不断发展,视频解码装置的功能越来越强大。视频解码装置可以对视频图像进行解码。在对一帧视频图像进行解码时,通常会参考多帧已解码视频图像的数据。然而,相关技术中,在对需要参考的已解码视频图像的数据进行读取时,视频解码装置的功耗较大。
请参阅图2，图2为相关技术中视频解码系统的结构示意图。该视频解码系统中，中央处理器（Central Processing Unit/Processor，CPU）、视频解码装置和显示处理器（Display Processing Unit，DISP）通过总线和动态随机存取内存控制器（Dynamic Random Access Memory Controller，DRAMC）从DRAM读写数据，中央处理器、视频解码装置和显示处理器分时共用带宽，中央处理器和显示处理器的优先级高于视频解码装置的优先级。需要说明的是，可以根据具体需求，在视频解码系统中可以设置显示处理器，也可以不用设置显示处理器。视频解码装置在进行解码时需要进行运动补偿（Motion Compensation，MC），会占用较大的带宽。
视频解码装置非常重视成本的高低,在帧缓冲时,为了达到最低成本与最高生产良率,通常都是以DRAM作为主要的存放空间。请参阅图3,图3是相关技术中视频解码装置中数据存储的示意图。其中,比特流(Bitstreams)、需要缓冲的图像帧(Image frame)以及临时数据(Temporary data)都存储在视频解码装置中的DRAM中。然而,DRAM提供的带宽较小。其中,临时数据可以是时域运动矢量(Temporal Motion Vector,TMV)以及其它数据。
虽然视频解码装置针对各种视频比特流(video bitstream)可以采用视频解码装置的内部高速缓存(internal cache)策略,比如视频比特流可以是动态影像专家小组组织的第1个影片和音讯有损压缩标准(Moving Picture Experts Group Phase 1,MPEG-1)、动态影像专家小组组织的第2个影片和音讯有损压缩标准(Moving Picture Experts Group 2,MPEG-2)、动态影像专家小组组织的第4个影片和音讯有损压缩标准(Moving Picture Experts Group 4,MPEG-4)、必要影像编码(Essential Video Coding,MPEG-5/EVC)、国际电信联盟电信标准化部门(ITU Telecommunication Standardization Sector,ITU-T)制定的视频会议用的低码率视频编码标准H.263、进阶视讯编码(Advanced Video Coding,H.264/AVC)、高效率视讯编码(High Efficiency Video Coding,H.265/HEVC)、多功能影像编码(Versatile Video Coding,H.266/VVC)、影像预测8代标准(Video Predictor 8,VP8)、影像预测9代标准(Video Predictor 9,VP9)、开放媒体联盟影像编码1代标准(Alliance for Open Media Video 1,AV1)等标准的视频比特流。
但随着新型视频标准的出现,如H.265/HEVC、H.266/VVC、AV1、MPEG-5等标准等,其针对越来越大的画面尺寸且越来越高帧率。基于此,通常使用增加DRAM的带宽或提高DRAM频率的方式以达到加速吞吐数据量。
请参阅图4,图4是相关技术中通过增加DRAM的通道数量进行数据存取的示意图。通过增加DRAM的通道数量,可以增大带宽,提高频率,以增加DRAM吞吐数据的速度,但会造成较大的功耗。
如,为了满足视频解码装置达到预期的解码速度的需求,系统DRAM的带宽消耗较大的能量。但不论视频解码装置是执行即时操作还是非即时操作,维持最高效率是非常重要的。相关技术中的方法,当视频解码装置在预期时间完成解码的情况下,会造成DRAM极大的功耗。
请参阅图5,图5是相关技术中在进行视频解码时从多通道DRAM读写数据时的功耗曲线示意图。图5中,横坐标是参考图像帧的位置,比如,参考图像帧的顶端位置、中间位置、底端位置等,纵坐标是视频解码时读写数据的功耗。在进行视频解码时,有数据需要进出DRAM,在视频解码装置过度依赖DRAM或其它便宜但耗电的存储以及高带宽的情况下,因视频解码系统提供的功耗上限是有限的,会使视频解码装置无法满足解码速度要求,或者会使视频解码系统过热。如果考虑功耗上限,则从DRAM读写数据的速度受限,不能达到未考虑功耗上限时的读写速度。
本申请实施例中,获取视频码流,视频码流中可以包括一个或多个图像群组(Group of pictures,GOP),一个图像群组中包括多个图像帧。本申请实施例中以视频码流中包括一个图像群组作为示例进行说明。所获取的视频码流是编码后的视频码流,该视频码流中的图像帧有可能还未解码,还有可能一部分图像帧已经解码,另外一部分图像帧等待解码等。需要说明的是,已经解码的图像帧可以作为后续其他图像帧解码时的参考图像帧。
102、从视频码流中确定一个或多个参考位置。
比如,本申请实施例中,在对待解码图像帧进行解码时,通常需要与参考位置进行比对,在一种实施方式中,该参考位置可以包括图像帧、参考条带(slice)或参考区域。如,将待解码图像帧与参考图像帧、参考条带或参考区域进行比对,即待解码图像帧需要参考已经解码的图像帧、条带或区域。其中,待解码图像帧为需要解码的图像帧。需要说明的是,一个条带包含一个图像帧的部分数据或全部数据,换言之,一个图像帧可以编码为一个或多个条带。一个条带最少包含一个块,最多包含整个图像帧的数据。
在不同的编码实现中,同一个图像帧中的图像所构成的条带数目不一定相同。比如,在H.264中设计条带的目的主要在于防止误码的扩散。因为不同的条带之间,其解码操作是独立的。某一个条带的解码过程所参考的数据不能越过该条带的边界。
本申请实施例中,可以通过视频解码装置的软件或硬件对视频码流进行分析,可以从视频码流中粗略确定一个或多个参考位置,如从视频码流中确定一个或多个参考图像帧、参考条带或参考区域,即从视频码流中确定一个或多个参考图像帧,或者,从视频码流中确定一个或多个参考条带,或者,从视频码流中确定一个或多个参考区域,确定出的一个或多个参考图像帧、参考条带或参考区域形成参考队列,以便于对待解码图像帧解码时参考这些参考图像帧、参考条带或参考区域。
比如,在一个实施方式中,参考区域可以是图像帧中的区域,该区域需要被待解码图像参考。再比如,在另一个实施方式中,参考区域可以是条带中的区域,该区域需要被待解码图像参考,等等。
103、确定一个或多个参考位置的参考次数。
比如,当确定出一个或多个参考位置后,可以确定出该一个或多个参考位置的参考次数。
比如,以参考图像帧为例,请参阅图6,图6是本申请实施例提供的视频码流中一个图像群组中各图像帧之间参考关系的场景示意图。图6是以一个图像群组中包括9个图像帧为例进行说明的。在其他实施方式中,图像群组中所包括的图像帧的数量是可以根据具体需求进行调整的。在一个图像群组中,图像帧的显示顺序可能与解码顺序相同,也可能不同。如图6中显示的图像群组中的图像帧的显示顺序与解码顺序是不同的。
从图6中的箭头方向,可以看出每个图像帧被图像群组中其他图像帧参考的次数,即可以根据箭头方向确定每个图像帧被箭头所指向的图像帧参考的次数,当某个图像帧被图像群组中其它图像帧参考一次或多次时,则该图像帧可以作为参考图像帧。比如,图6中的I帧的参考次数为四次,显示顺序为1的B帧的参考次数为两次,显示顺序为3的B帧的参考次数为一次,显示顺序为4的P帧的参考次数为五次,显示顺序为6的B帧的参考次数为两次,等等。图6中仅仅是通过视频解码装置的相关硬件或软件粗略的帧级分析,通过该粗略的分析方式不能分析出参考帧图像中具体每个块的参考关系。
104、根据预设功耗阈值以及一个或多个参考位置的参考次数,从一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在预设存储器中,向预设存储器中存储参考位置以及从预设存储器中读取参考位置所产生的功耗小于或等于预设功耗阈值。
比如,从图6中可以得知显示顺序为0的I帧、显示顺序为2的B帧、显示顺序为4的P帧、显示顺序为6的B帧、显示顺序为8的B帧被图像群组中其它图像帧参考的次数均为多次。
然而,在进行视频解码时,需要参考的部分将会进出DRAM多次,即需要参考的部分将会从DRAM读取多次,如果一份数据预期被读取多次,消耗的能量在重复被读取时达到百倍之多。现今的视频标准为了提高压缩率,常见的使用多参考图像帧的方法进行编码,这种行为代表有部分数据通常因为在时域上关联性很高而不停的重复被使用。另外随着高帧率的视频码流越来越多,时域上的重复性提高,可以重复使用同一个参考图像帧的编码效果,而且大部分视频编码器都会产生这样的视频码流。如果将解码时重复读取的部分存储在低功耗的存储媒介中,将会大大降低视频播放时的能量消耗。
比如,本申请实施例中,可以从一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,如将参考次数为多次的参考图像帧、参考条带或参考区域存储在预设存储器中,便于待解码对象(如待解码图像帧、待解码条带或待解码区域)后续解码时读取参考图像帧、参考条带或参考区域的图像数据。另外,向预设存储器中存储参考位置以及从预设存储器中读取参考位置所产生的功耗小于或等于预设功耗阈值,通过采用功耗小的预设存储器读写数据,可以降低视频解码装置的功耗。
105、根据参考位置对待解码对象进行解码。
比如,存储在预设存储器中的参考图像帧、参考条带或参考区域被待解码对象参考的次数可以为多次,因此在进行解码时,存储在预设存储器中的参考图像帧、参考条带或参考区域将会被读取多次。当从预设存储器中读取参考图像帧、参考条带或参考区域的图像数据后,待解码对象就可以参考参考图像帧、参考条带或参考区域的图像数据,即可以根据读取的参考图像帧、参考条带或参考区域的图像数据,对待解码对象进行解码。
比如,在一种实施方式中,待解码对象可以包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块。则可以根据读取的参考图像帧的图像数据对待解码图像帧中的待解码块进行解码,可以根据读取的参考条带的图像数据对待解码条带中的待解码块解码,或者,可以根据读取的参考区域的图像数据对待解码区域中的待解码块进行解码。
可以理解的是,在本申请实施例中,视频解码装置可以获取视频码流,从视频码流中确定一个或多个参考位置。然后,确定一个或多个参考位置的参考次数,根据预设功耗阈值以及一个或多个参考位置的参考次数,从一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在预设存储器中,向预设存储器中存储参考位置以及从预设存储器中读取参考位置所产生的功耗小于或等于预设功耗阈值。之后,根据参考位置对待解码对象进行解码。即,本申请实施例中,通过将确定出的需要存储在预设存储器中的参考位置的图像数据存放在功耗较小的预设存储器中,以达到降低视频解码装置功耗的目的。因此,本申请实施例可以降低视频解码装置的功耗。
请参阅图7,图7为本申请实施例提供的在视频解码装置中进行图像处理的方法的第二种流程示意图。该在视频解码装置中进行图像处理的方法可以应用于视频解码装置中。该在视频解码装置中进行图像处理的方法的流程可以包括:
201、获取视频码流。
步骤201的具体实施可参见步骤101的实施例,在此不再赘述。
202、根据视频码流中图像帧的帧头信息或图像帧中一个或多个条带的条带头(Slice header)信息,从视频码流中确定一个或多个参考图像帧、参考条带或参考区域。
比如,可以根据视频码流中各图像帧的帧头信息或图像帧中条带的条带头信息,确定出各图像帧的参考关系。其中,每个图像帧的数据可以看成是一个网络抽象层(Network Abstraction Layer,NAL)单元,帧头信息用于分辨一个图像帧的开始,帧头信息也可以认为是NAL单元头信息,通过帧头信息可以确定出是哪一个图像帧,即可以确定出参考图像帧。条带头用于保存条带的总体信息,如当前条带的 类型等,通过条带头信息可以确定出是哪一个条带,因此可以确定出参考条带。参考区域可以是图像帧中的区域,也可以是条带中的区域。
203、通过预设参数确定一个或多个参考图像帧、参考条带或参考区域的参考次数,预设参数包括以下中的任一项或多项:网络抽象层解析参数、条带头解析参数、参考图像列表修正参数和参考图像帧标记参数。
比如,在一种实施方式中,参考位置可以包括参考图像帧、参考条带或参考区域,在从视频码流中确定一个或多个参考图像帧、参考条带或参考区域后,需要进一步确定出每个参考图像帧、参考条带或参考区域被待解码对象参考的次数,以便于获取每个参考帧、参考条带或参考区域的读取次数。
需要说明的是,在确定每个参考图像帧、参考条带或参考区域的参考次数时,可以通过预设参数确定出一个或多个参考图像帧、参考条带或参考区域的参考次数,该预设参数可以包括以下中的任一项或多项:网络抽象层解析参数、条带头解析参数、参考图像列表修正参数和参考图像帧标记参数等。例如,网络抽象层解析参数可以是nal_unit()函数,条带头解析参数可以是slice_header()函数,参考图像列表修正参数可以是ref_pic_list_modification()函数,参考图像帧标记参数可以是ref_pic_list_modification()函数。
例如,以H.264为例,在进行粗略的帧级分析时,在解析完多个图像帧包含到的NAL单元头信息或条带头信息时,就可以分辨出哪几张参考图像帧会被多次参考到,如可以利用nal_unit()函数中的nal_ref_idc变量,slice_header()函数中的num_ref_idx_active_override_flag变量,ref_pic_list_modification()函数,dec_ref_pic_marking()函数等信息事先判断出。
比如,nal_unit()函数是从H.264的图像帧中分析出以00 00 00 01和00 00 01开头的NAL单元,然后直接填充出该NAL单元的长度。nal_ref_idc变量代表的是参考级别,代表被其它图像帧的参考情况,参考级别越高,表示该参考图像帧的越重要。
num_ref_idx_active_override_flag变量代表的是当前图像帧的实际可用参考图像帧的数量是否需要重载。在图像参数集中已经出现的句法元素num_ref_idx_l0_active_minus1和num_ref_idx_l1_active_minus1指定当前参考图像帧队列中实际可用的参考帧的数目。在条带头可以重载这对句法元素,以给某特定图像帧更大的灵活度。通过num_ref_idx_active_override_flag变量,可以知道条带的位置。
ref_pic_list_modification()函数为参考图像列表修正函数,可以保存在条带头的结构中,ref_pic_list_modification()函数的定义如下:ref_pic_list_modification_flag_l0为1时,对参考图像列表RefPicList0进行修改,ref_pic_list_modification_flag_l1为1时,对参考图像列表RefPicList1进行修改。dec_ref_pic_marking()函数为解码的参考图像帧标识,而标记(marking)操作用于将参考图像帧移入或移出参考图像帧队列、指定参考图像的符号。
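例如，下面用一段示意性的 Python 代码表示“利用已解析出的条带头信息粗略统计各参考图像帧被参考次数”的思路。该代码只是在若干假设下的草图，并非真实的 H.264 码流解析：其中 count_frame_references 等函数名、frame_id 与 ref_pic_list 等字段名以及示例数据均为说明用的假设，实际实现中这些信息可由上文所述的 nal_unit()、slice_header()、ref_pic_list_modification() 等解析得到。

```python
# 粗略的帧级分析示意：遍历每个条带头中给出的参考图像列表，
# 统计每个参考图像帧在一个图像群组内被参考的次数。
from collections import Counter

def count_frame_references(slice_headers):
    """slice_headers: 形如 [{"frame_id": 4, "ref_pic_list": [0, 1]}, ...] 的已解析结果。"""
    ref_counter = Counter()
    for header in slice_headers:
        for ref_frame_id in header.get("ref_pic_list", []):
            ref_counter[ref_frame_id] += 1
    return dict(ref_counter)

# 假设的解析结果：与图10类似，帧1、帧2各被参考三次，帧0、帧3各被参考一次
parsed = [
    {"frame_id": 4, "ref_pic_list": [0, 1, 2]},
    {"frame_id": 5, "ref_pic_list": [1, 2]},
    {"frame_id": 6, "ref_pic_list": [1, 2, 3]},
]
print(count_frame_references(parsed))   # {0: 1, 1: 3, 2: 3, 3: 1}
```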
204、若参考次数为多次,则根据预设功耗阈值将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在系统高速缓存中,并将其存储在动态随机存取内存中。
比如,预设存储器可以包括第一存储器和第二存储器,第一存储器的功耗大于第二存储器的功耗,需要说明的是,第一存储器可以包括设置在视频解码装置外部的动态随机存取内存,第二存储器可以包括设置在视频解码装置外部的系统高速缓存,当确定一个或多个参考图像帧、参考条带或参考区域被待解码对象参考的次数后,若参考次数为多次,则可以将该参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在系统高速缓存中,并且存储在动态随机存取内存中,以等待视频解码装置解码时进行读取。
需要说明的是,根据预设功耗阈值的大小,可以调整存储在系统高速缓存中的参考次数为多次的一个或多个参考图像帧、参考条带或参考区域的数据量的大小。
可以理解的是,在将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在系统高速缓存中,并且存储在动态随机存取内存中时,可以先将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在系统高速缓存中,然后再将其存储在动态随机存取内存中,或者,可以将参考 次数为多次的一个或多个参考图像帧、参考条带或参考区域同时存储在动态随机存取内存和系统高速缓存中,或者,可以先将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在动态随机存取内存中,然后再将其存储在系统高速缓存中。
需要说明的是,参考图像帧、参考条带或参考区域的参考次数可以是被同一个待解码图像帧、待解码条带或待解码区域参考的次数,也可以是被一个图像群组中其它几个待解码图像帧、待解码条带或待解码区域参考的次数。在将参考次数为多次的参考图像帧、参考条带或参考区域存储在系统高速缓存,并且存储在第一存储器中时,可以是以帧为单位、以条带为单位或以区域为单位进行存储。
需要说明的是,本申请实施例中,动态随机存取内存的功耗大于系统高速缓存的功耗,且向动态随机存取内存和系统高速缓存存储以及从动态随机存取内存和系统高速缓存中读取参考图像帧、参考条带或参考区域时所产生的功耗小于预设功耗阈值,这样可以降低读写数据时的功耗。其中,预设功耗阈值可以认为是参考图像帧、参考条带或参考区域全部由动态随机存取内存进行存储和读取时所产生的功耗。
205、若参考次数为多次,则根据预设功耗阈值将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在系统缓冲存储器中。
比如,第一存储器可以包括设置在视频解码装置外部的动态随机存取内存,第二存储器可以包括设置在视频解码装置外部的系统缓冲存储器,当确定一个或多个参考图像帧、参考条带或参考区域被待解码对象参考的次数后,若参考次数为多次,则可以将该参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在系统缓冲存储器中,以等待视频解码装置解码时进行读取。
需要说明的是,根据预设功耗阈值的大小,可以调整存储在系统缓冲存储器中的参考次数为多次的一个或多个参考图像帧、参考条带或参考区域的数据量的大小。
需要说明的是,参考图像帧、参考条带或参考区域的参考次数可以是被同一个待解码图像帧、待解码条带或待解码区域参考的次数,也可以是被一个图像群组中其它几个待解码图像帧、待解码条带或待解码区域参考的次数。在将参考次数为多次的参考图像帧、参考条带或参考区域存储在系统缓冲存储器时,可以是以帧为单位、以条带为单位或以区域为单位进行存储。
需要说明的是,本申请实施例中,动态随机存取内存的功耗大于系统缓冲存储器的功耗,且向动态随机存取内存和系统缓冲存储器存储以及从动态随机存取内存和系统缓冲存储器中读取参考图像帧、参考条带或参考区域时所产生的功耗小于预设功耗阈值,这样可以降低读写数据时的功耗。其中,预设功耗阈值可以认为是参考图像帧、参考条带或参考区域全部由动态随机存取内存进行存储和读取时所产生的功耗。
206、若参考次数为一次,则根据预设功耗阈值将参考次数为一次的一个或多个参考图像帧、参考条带或参考区域存储在动态随机存取内存中。
比如,第一存储器可以包括设置在视频解码装置外部的动态随机存取内存,当确定一个或多个参考图像帧、参考条带或参考区域被待解码对象参考的次数后,若参考次数为一次,则可以将该参考次数为一次的一个或多个参考图像帧、参考条带或参考区域存储在动态随机存取内存中,以等待视频解码装置解码时进行读取。
需要说明的是,参考图像帧、参考条带或参考区域的参考次数可以是被同一个待解码图像帧、待解码条带或待解码区域参考的次数,也可以是被一个图像群组中其它几个待解码图像帧、待解码条带或待解码区域参考的次数。在将参考次数为一次的参考图像帧、参考条带或参考区域存储在动态随机存取内存中时,可以是以帧为单位、以条带为单位或以区域为单位进行存储。
需要说明的是,本申请实施例中,参考次数为一次的一个或多个参考图像帧、参考条带或参考区域在被待解码对象参考时会被读取一次(例如,参考次数为一次的一个或多个参考图像帧在被待解码图像帧参考时会被读取一次,再如,参考次数为一次的一个或多个参考条带在被待解码条带参考时会被读取一次,又如,参考次数为一次的一个或多个参考区域在被待解码区域参考时会被读取一次)。
本申请实施例中,向动态随机存取内存存储以及从动态随机存取内存读取参考图像帧、参考条带或 参考区域时所产生的功耗小于预设功耗阈值,这样可以降低读写数据时的功耗。其中,该预设功耗阈值可以认为是参考图像帧、参考条带或参考区域全部由动态随机存取内存进行存储和读取时所产生的功耗。
比如，在一种实施方式中，第一存储器可以包括设置在视频解码装置外部的动态随机存取内存，即第一存储器可以包括设置在视频解码装置外部的DRAM，第二存储器可以包括设置在视频解码装置外部的系统高速缓存或系统缓冲存储器，即第二存储器可以包括设置在视频解码装置外部的Sys$或SysBuf。当然，第二存储器还可以是其它低功耗存储器等。
本申请实施例中,Sys$或SysBuf由多个SRAM组成,第一存储器可以为DRAM,DRAM的功耗大于视频解码装置外部的Sys$或SysBuf的功耗,且向Sys$和DRAM存储以及从Sys$和DRAM读取参考图像帧、参考条带或参考区域的功耗小于预设功耗阈值,或者,向SysBuf和DRAM存储以及从SysBuf和DRAM读取参考图像帧、参考条带或参考区域的功耗小于预设功耗阈值。这样可以降低读写数据时的功耗,该预设功耗阈值可以认为是将参考图像帧、参考条带或参考区域全部由DRAM进行存储和读取时所产生的功耗。
请参阅图8,图8是本申请实施例提供的静态随机存取存储器与动态随机存取内存在读取数据时所消耗的能量的对比示意图。读取SRAM中的数据与读取DRAM中的数据的所消耗的能量相差约为100倍,即读取SRAM中数据的功耗远远小于读取DRAM中数据的功耗。通过将参考次数为多次的参考图像帧、参考条带或参考区域存储在Sys$和DRAM,或者仅存储在SysBuf,当读取Sys$或SysBuf中的参考图像帧、参考条带或参考区域的图像数据时,可以降低读取数据时的功耗。由上可知,通过将被待解码对象参考多次的参考图像帧、参考条带或参考区域存储在Sys$和DRAM中,或者仅存储在SysBuf中,可以大大降低整体视频播放所需要的能量消耗。
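为便于理解上述约100倍的能耗差异带来的收益，以下以假设数值做一个简单估算（仅为示意性推算，并非实测数据）：设单次存取DRAM约消耗100个能量单位、单次存取SRAM约消耗1个能量单位，某参考图像帧将被参考5次。若存储与读取全部由DRAM完成，能耗约为(1+5)×100=600个单位；若按本申请实施例将其同时存入DRAM与Sys$（约100+1=101个单位），之后5次读取均走Sys$（约5×1=5个单位），则合计约106个单位，约为前者的六分之一。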
在进行解码时,视频码流解析进出高能耗存储(如DRAM)的数据量有限,因此该解析步骤不成为能耗瓶颈,不论是NAL单元解析与条带头解析,或是熵解码(Entropy decoding),由视频码流解译出运动矢量或其他符号。在解码过程中,最需要高数据进出量的部分是运动补偿步骤需要的数据,视频解码装置的运动补偿步骤会需要DRAM提供较大的带宽。但由于经过视频码流事先分析,多次被参考的参考图像帧或参考图像帧中多次被使用到的部分能预先存在低能耗存储中(如由SRAM组成的Sys$或SysBuf),进而保证视频解码时读取数据的功耗控制在预期的数值附近,且能让视频解码装置的硬件或软件尽快完成解码工作。
比如,请参阅图9,图9是本申请实施例提供的将多次被参考的图像数据存储在系统高速缓存或系统缓冲存储器的场景示意图。图9中,将多次被待解码对象参考的图像数据存储在Sys$及DRAM中,或者将多次被待解码对象参考的图像数据仅存储在SysBuf中,可以大大降低整体视频播放所需要的能量消耗。
再比如,如图6所示,视频解码装置的相关硬件或软件进行粗略的帧级分析或预估后,可以确定出某些参考图像帧将会被使用多次,即可以确定出某些参考图像帧被待解码图像帧参考多次。可以将参考次数为多次的参考图像帧存于省电的Sys$,并存储在DRAM中,或者,将参考次数为多次的参考图像帧仅存于省电的SysBuf中,使视频解码装置尽量维持预期的低功耗状态,提升视频解码系统的使用时间且防止系统过热。
比如,在一种实施方式中,通过将视频码流中整张参考图像帧或是参考图像帧中被待解码对象参考多次的图像数据存储在低功耗存储空间,例如,存储在Sys$和DRAM中,或者仅存储在SysBuf等,这样可以有效维持视频解码装置运算时整个视频解码系统的功耗情况,进而改进使用者体验。其预测可由视频解码装置的硬件或软件真实分析视频码流,也可以是因为应用场景或图像群组结构(Group of pictures structure)等已知因素推算哪些图像数据适合写进类似Sys$或SysBuf等低功耗存储器中。
比如,假设只做到粗略判断参考图像帧的情况下,则通常会将这些参考图像帧的参考次数作为优先级,参考次数越多,则存储在Sys$或SysBuf的优先级越高。例如,将参考次数最多的参考图像帧最先存储在Sys$或SysBuf中,以此类推,按照参考次数从高到低的顺序将参考次数为多次的参考图像帧依 次存储在Sys$或SysBuf中,让解码过程中可以最大程度的降低功耗与能耗到预期目标。但由于此判断方法相当粗略,无法100%做到靠近或低于预期的功耗节省目标。
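作为示意，下面的 Python 草图按照“参考次数越多、优先级越高”的原则，在保证估算功耗不超过预设功耗阈值的前提下，挑选需要存入 Sys$ 或 SysBuf 的参考图像帧。其中 select_frames_for_low_power_memory 等函数名、capacity 容量参数以及简化的线性能耗模型均为说明用的假设，并非本申请的限定实现：

```python
# 示意：按参考次数从高到低排序，把参考次数为多次且"存入低功耗存储器更省能"
# 的参考图像帧放入 Sys$/SysBuf，并保证估算总能耗不超过预设功耗阈值。
def select_frames_for_low_power_memory(ref_counts, e_dram, e_sram, capacity):
    """ref_counts: {参考图像帧标识: 参考次数}；e_dram/e_sram: 单次存取能耗；capacity: 低功耗存储器可容纳的帧数上限。"""
    # 预设功耗阈值：假设存储与读取全部由 DRAM 完成时产生的能耗
    threshold = sum((1 + cnt) * e_dram for cnt in ref_counts.values())

    ordered = sorted(ref_counts.items(), key=lambda kv: kv[1], reverse=True)
    selected, total = [], 0.0
    for frame, cnt in ordered:
        cost_low_power = (e_dram + e_sram) + cnt * e_sram  # 写 DRAM+SRAM，读取走 SRAM
        cost_dram_only = (1 + cnt) * e_dram                # 写、读全部走 DRAM
        if cnt > 1 and len(selected) < capacity and cost_low_power < cost_dram_only:
            selected.append(frame)
            total += cost_low_power
        else:
            total += cost_dram_only
    assert total <= threshold  # 所产生的功耗小于或等于预设功耗阈值
    return selected, total


# 用法示例：参考次数取自类似图6的帧级分析结果（数值仅为假设）
frames, energy = select_frames_for_low_power_memory(
    {"I0": 4, "B1": 2, "B3": 1, "P4": 5, "B8": 2}, e_dram=100.0, e_sram=1.0, capacity=3)
print(frames, energy)   # 参考次数最多的帧优先被选中
```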
请参阅图10,图10是本申请实施例提供的粗略分析视频码流中多个图像帧的参考关系的场景示意图。图10中以H.264视频码流作为示例,对H.264视频码流进行粗略分析,通过对NAL单元的头信息或条带头信息的分析,可以分析出某些图像帧被其它图像帧参考的次数多于其它图像帧被参考的次数。比如,对于图10中的7个图像帧对应的码流,粗略分析这7个图像帧对应的码流,就可以知道参考图像列表,可以计算出省去多少DRAM的数据进出量。然后,选择最大概率能满足的某几个图像帧存储在Sys$或SysBuf中。
比如,通过将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在Sys$中,并且存储在DRAM中,以便于视频解码装置在解码时,可以分别从DRAM和Sys$中进行读取。
需要说明的是,若参考次数为一次,则可以将参考次数为一次的一个或多个参考图像帧、参考条带或参考区域存储在DRAM中,以便视频解码装置解码时进行读取。
比如,图10中,图像帧0的参考次数为一次,图像帧1的参考次数为三次,图像帧2的参考次数为三次,图像帧3的参考次数为一次,参考列表中包含图像帧0、图像帧1、图像帧2和图像帧3,由于图像帧1和图像帧2的参考次数为三次,因此将图像帧1和图像帧2存储在Sys$中,并且也会存储在DRAM中,或者仅将图像帧1和图像帧2存储在SysBuf中。将图像帧0和图像帧3存储在DRAM中。
比如,在一种实施方式中,DRAM、系统总线、Sys$和SysBuf的存取功耗模型经过一些简单的量测或实验就能取得,在此不再赘述。假设已经有了数据流动进出DRAM、系统总线、Sys$和SysBuf相关的功耗模型,则可以推算出降低多少功耗或能耗对应降低多少DRAM的数据存取量。根据图像帧或条带的码流头部信息可以判断出将哪一些图像帧或条带需要写入Sys$或SysBuf,可以达到解码时的预期功耗降低值。该粗略分析的方法可以达到预期功耗降低量,而达不到最佳的功耗降低量。
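例如，可以用如下示意性的换算把“预期功耗降低值”折算为需要改由低功耗存储器承担的 DRAM 数据进出量。该代码只是一个基于假设的线性功耗模型的草图，其中的函数名与能耗系数（如 100 pJ/Byte、1 pJ/Byte）均为举例，并非实测数据：

```python
# 示意：基于简化的线性能耗模型，把预期功耗降低量换算为需要从 DRAM
# 移走（改由 Sys$/SysBuf 承担）的数据进出量。系数均为假设值。
def dram_traffic_to_offload(target_power_saving_mw, frame_rate_fps,
                            e_dram_pj_per_byte, e_sram_pj_per_byte):
    """返回每帧需要改由低功耗存储器承担的数据量（字节）。"""
    # 每字节由 DRAM 改走 SRAM 可节省的能量（皮焦）
    saving_per_byte_pj = e_dram_pj_per_byte - e_sram_pj_per_byte
    # 功率（毫瓦）换算为每帧可用的能量预算（皮焦）：1 mW = 1e9 pJ/s
    energy_budget_per_frame_pj = target_power_saving_mw * 1e9 / frame_rate_fps
    return energy_budget_per_frame_pj / saving_per_byte_pj


# 用法示例：希望降低 50 mW，帧率 60 fps，假设 DRAM 约 100 pJ/Byte、SRAM 约 1 pJ/Byte
bytes_per_frame = dram_traffic_to_offload(50, 60, 100.0, 1.0)
print(f"每帧约需把 {bytes_per_frame / 1e6:.1f} MB 的存取改到低功耗存储器")
```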
比如,若把参考次数为多次的参考图像帧、参考条带或参考区域事先存放在视频解码装置外部的Sys$和DRAM中,或者事先存放在视频解码装置外部的SysBuf中,请一并参阅图11至图13,图11是本申请实施例提供的使用系统高速缓存的视频解码系统的一种架构示意图。图12是本申请实施例提供的使用系统高速缓存的视频解码系统的另一种架构示意图。图13是本申请实施例提供的使用系统缓冲存储器的视频解码系统的架构示意图。在Sys$或SysBuf中存储的是参考次数为多次的参考图像帧、参考条带或参考区域。
以图11为例,Sys$可以通过DramC从DRAM中读取数据,且Sys$通过DramC从DRAM读取的数据可以被中央处理器和视频解码装置读取。图11仅仅是使用系统高速缓存时的视频解码系统的一种架构而已,在使用系统高速缓存时,视频解码系统还可以采用其它架构,比如,视频解码系统中还包括显示处理器等。当视频解码装置需要进行解码时,可以直接读取Sys$中存储的参考次数为多次的参考图像帧、参考条带或参考区域的图像数据,另外,Sys$还通过DramC从DRAM中读取参考次数为一次的参考图像帧、参考条带或参考区域的图像数据,之后被视频解码装置读取。
由于在视频解码时可以预测图像数据的存取行为，因此可以按照需求相应地降低功耗，智能选择图像数据的存储方式，进而降低视频解码装置的功耗。可以根据解码时帧参考关系改变图像数据存储的位置，使得存入诸如Sys$等低功耗存储器的参考图像帧的重复读取次数适当增高，以适当降低功耗，保证视频解码装置中进出数据带来的功耗能一直维持预期状态。若诸如Sys$等低功耗存储器同时具有高速带宽，则可以更进一步降低DRAM的带宽。
需要说明的是,图12和图13中的视频解码系统也仅仅是其中一种架构,在具体应用中,可以根据实际需求进行相应变形,如增加显示处理器等等。
207、从预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的参考图像帧的图像数据对待解码图像帧中的待解码块进行解码,若读取的是参考条带的图像数据,则根据读取的参考条带的图像数据对待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的参考区域的图像数据对待解码区域中的待解码块进 行解码。
比如,可以将图像帧、条带或区域可以划分为多个不互相重叠的块,这些块构成矩形阵列,其中每个块是N×N像素的块,比如,可以是4×4像素的块,32×32像素的块,128×128像素的块等等。
对待解码图像帧、待解码条带或待解码区域中的待解码块进行解码时,需要从预设存储器中读取所需要参考的参考图像帧、参考条带或参考区域的图像数据。例如,若待解码块在解码时需要参考多个参考图像帧、参考条带或参考区域的图像数据,则在读取该多个参考图像帧、参考条带或参考区域的图像数据时,可以从第一存储器(如DRAM)读取一次,从第二存储器(如Sys$)读取多次。
由于SRAM成本较高,DRAM成本较低,在考虑成本的情况下,SRAM一般不会做的太大,而DRAM可以做的比较大,因此,本申请实施例为了降低读取数据时的功耗,可以将原来从DRAM读取的次数,拆分成几次从SRAM读取,另外几次从DRAM读取,从整体上可以降低读取数据的功耗。需要说明的是,从SRAM读取的次数与从DRAM读取的次数是可以调整的,以适应对不同功耗的需求。
比如,在一种实施方式中,在读取所需要的参考次数为多次的参考图像帧、参考条带或参考区域的图像数据时,可以先从Sys$读取,当读取的次数大于或等于预设次数阈值时,则切换到从DRAM中读取未被读取的图像数据。当读取同样的图像数据时,DRAM消耗的能量大于SRAM消耗的能量的100倍。因此,通过将所需要的参考次数为多次的参考图像帧、参考条带或参考区域的图像数据的一部分从Sys$中读取,另一部分数据从DRAM中读取,可以降低读取数据的功耗。
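下面用一个简化的 Python 草图示意这种“先从 Sys$ 读取、读取次数达到预设次数阈值后切换到 DRAM 读取”的调度方式。其中 ReferenceReader 类名、sram_read_limit 参数以及用字典模拟的存储接口均为说明用的假设：

```python
# 示意：对同一份参考数据的多次读取，先由 Sys$ 提供，读取次数达到
# 预设次数阈值后，剩余的读取切换到 DRAM，以在容量与功耗之间折中。
class ReferenceReader:
    def __init__(self, sram_read_limit):
        self.sram_read_limit = sram_read_limit   # 预设次数阈值
        self.read_counts = {}                    # 每份参考数据已读取的次数

    def read(self, ref_id, sram, dram):
        """sram/dram: {参考数据标识: 图像数据} 的简化存储接口。"""
        count = self.read_counts.get(ref_id, 0)
        self.read_counts[ref_id] = count + 1
        # 前 sram_read_limit 次从 Sys$ 读取，其余从 DRAM 读取
        if count < self.sram_read_limit and ref_id in sram:
            return sram[ref_id], "Sys$"
        return dram[ref_id], "DRAM"


# 用法示例：参考块 "blk9" 被参考 4 次，前 3 次走 Sys$，最后一次走 DRAM
sram = {"blk9": b"sram-copy"}
dram = {"blk9": b"dram-copy"}
reader = ReferenceReader(sram_read_limit=3)
for _ in range(4):
    _, source = reader.read("blk9", sram, dram)
    print(source)        # 依次输出 Sys$、Sys$、Sys$、DRAM
```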
比如,在一种实施方式中,在读取所需要的参考次数为多次的参考图像帧、参考条带或参考区域的图像数据时,直接从SysBuf读取。当读取同样的图像数据时,DRAM消耗的能量大于SRAM消耗的能量的100倍。因此,通过将所需要的参考次数为多次的参考图像帧、参考条带或参考区域的图像数据从SysBuf中读取,可以降低读取数据的功耗。
需要说明的是,若读取的是参考图像帧的图像数据,则根据读取的参考图像帧的图像数据对待解码图像帧中的待解码块进行解码,若读取的是参考条带的图像数据,则根据读取的参考条带的图像数据对待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的参考区域的图像数据对待解码区域中的待解码块进行解码。
请参阅图14,图14是本申请实施例提供的从Sys$或SysBuf读写数据时的功耗曲线示意图。视频解码装置将大量的DRAM功耗改由Sys$或SysBuf的功耗来取代,可以大大降低功耗。
208、若待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将待解码块解码后的块存储在系统高速缓存中,并且存储在动态随机存取内存中。
比如,对待解码图像帧、待解码条带或待解码区域中的待解码块进行解码后,若待解码块解码后的块后续会被其它需要解码的待解码块(例如,其它待解码图像帧、待解码条带或待解码区域中的待解码块)参考多次,则将该待解码块解码后的块存储在系统高速缓存中,并且存储在动态随机存取内存中。
209、若待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将待解码块解码后的块存储在系统缓冲存储器中。
比如,对待解码图像帧、待解码条带或待解码区域中的待解码块进行解码后,若待解码块解码后的块后续会被其它需要解码的待解码块(例如,其它待解码图像帧、待解码条带或待解码区域中的待解码块)参考多次,则将该待解码块解码后的块存储在系统缓冲存储器中。
210、若待解码图像帧、待解码条带或待解码区域的参考次数为一次,则将待解码块解码后的块存储在动态随机存取内存中。
比如,对待解码图像帧、待解码条带或待解码区域中的待解码块进行解码后,若待解码块解码后的块后续会被其它需要解码的待解码块(例如,其它待解码图像帧、待解码条带或待解码区域中的待解码块)参考一次,则将该待解码块解码后的块存储在动态随机存取内存中。
若该待解码图像帧、待解码条带或待解码区域中还有其它待解码块需要解码，则对其它待解码块进行解码。若对待解码图像帧、待解码条带或待解码区域中的所有待解码块都已解码完毕，则对其它图像帧、条带或区域进行解码，直至完成对所有需要解码的图像帧、条带或区域的解码。
可以理解的是,本申请实施例基于视频解码时可预测数据存取行为(即重复读取的行为),从而实现智能选择数据存储方式,以降低视频解码装置的功耗。可以根据解码时帧参考关系改变图像数据存储的位置,使得存入诸如Sys$等低功耗存储器的参考图像帧的重复读取次数适当增高,以适当降低功耗,保证视频解码装置中进出数据带来的功耗能一直维持预期状态。若诸如Sys$等低功耗存储器同时具有高速带宽,则可以更进一步降低DRAM的带宽。
本申请实施例可以保证视频解码装置的功耗可控,且能让视频解码装置的硬件或软件尽快完成解码工作,充分利用视频解码装置会有多次重复读取参考图像帧或参考条带的可预期行为来改变所读取数据的存储特性,因为存取数据省电,而使数据进出不会带来功耗瓶颈,而使视频解码装置可以维持其运行速度,同时又降低功耗。读取数据的速度不会受功耗的限制,因此视频解码装置不会过热。另外,Sys$或SysBuf中的SRAM在读写时本身的时延就低,这样可以提高处理帧率,降低反应时延。由于可以大幅降低功耗,则可以提高视频解码装置中电池的使用时间,提升用户体验。
请参阅图15,图15是本申请实施例提供的在视频解码装置中进行图像处理的方法的第三种流程示意图。该在视频解码装置中进行图像处理的方法可以应用于视频解码装置中。该在视频解码装置中进行图像处理的方法的流程可以包括:
301、获取视频码流。
步骤301的具体实施可参见步骤101的实施例,在此不再赘述。
302、根据视频码流获取一个或多个参考运动矢量。
比如，对视频码流进行解析，从该视频码流中可以获取一个或多个参考运动矢量，每个参考运动矢量均会对应一个参考块，在对待解码块进行解码时，需要对参考块进行参考。可以将参考块与待解码块的相对位移作为参考运动矢量。通过参考运动矢量获取对应的参考块，可以做到精细化分析。每个参考块被一个图像群组中其它待解码块参考的次数可能是一次，也可能是多次。比如当参考块只被一个待解码块参考时，该参考块的参考次数为一次，当参考块被多个待解码块参考时，该参考块的参考次数为多次。
303、根据一个或多个参考运动矢量从当前视频码流的一个或多个图像帧中获取对应的一个或多个参考块。
比如,当根据视频码流获取一个或多个参考运动矢量后,由于每个参考运动矢量均会对应一个参考块,因此可以根据该一个或多个参考运动矢量可以从视频码流的一个或多个图像帧中获取对应的一个或多个参考块。
304、确定一个或多个参考块的参考次数。
比如,在根据一个或多个参考运动矢量从视频码流的一个或多个图像帧中获取对应的一个或多个参考块后,可以确定该一个或多个参考块的参考次数,即,确定该一个或多个参考块被待解码块或待解码块的子块参考的次数。
比如,请参阅图16,图16是本申请实施例提供的视频码流中一个图像群组中各图像帧中块之间参考关系的场景示意图。图16是以一个图像群组中包括9个图像帧为例进行说明的。在其他实施方式中,图像群组中所包括的图像帧的数量是可以根据具体需求进行调整的。在一个图像群组中,图像帧的显示顺序可能与解码顺序相同,也可能不同。如图16中显示的图像群组中的图像帧的显示顺序与解码顺序是不同的。
从图16中的箭头方向，可以看出每个块被图像群组中其它块参考的次数，即可以根据箭头方向确定每个块被箭头指向的其它块参考的次数，当某个块被其它块参考一次或多次时，则该块可以作为参考块。比如，图16中的I帧中参考块的参考次数为四次，显示顺序为2的B帧中参考块的参考次数为两次，显示顺序为3的B帧中的参考块的参考次数为一次，显示顺序为4的P帧中参考块的参考次数为五次，显示顺序为6的B帧中参考块的参考次数为两次，等等。
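作为示意，下面的 Python 草图把每个参考运动矢量指向的参考区域映射到参考帧中按固定块大小划分的块坐标上，从而统计每个参考块被参考的次数。其中 count_block_references 函数名、块大小取 16、以及“只统计参考区域左上角所在块”的简化处理均为说明用的假设：

```python
# 块级精细分析示意：把每个参考运动矢量指向的参考区域映射到
# 参考帧中按 block_size 划分的块坐标上，统计各参考块被参考的次数。
from collections import Counter

def count_block_references(motion_vectors, block_size=16):
    """motion_vectors: [(参考帧号, 待解码块左上角x, 左上角y, mv_x, mv_y), ...]"""
    counter = Counter()
    for ref_frame, x, y, mv_x, mv_y in motion_vectors:
        ref_x, ref_y = x + mv_x, y + mv_y            # 参考区域左上角
        # 为简化起见，这里只统计参考区域左上角所在的块
        block_coord = (ref_frame, ref_x // block_size, ref_y // block_size)
        counter[block_coord] += 1
    return counter


# 用法示例：前两个待解码块的运动矢量都指向参考帧1中的同一个块
mvs = [(1, 32, 16, -4, 0), (1, 48, 16, -20, 0), (0, 0, 0, 0, 0)]
print(count_block_references(mvs))
```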
305、根据预设功耗阈值以及一个或多个参考块的参考次数,从一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在预设存储器中,向预设存储器中存储参考块以及 从预设存储器中读取参考块所产生的功耗小于或等于预设功耗阈值。
比如,本申请实施例中,可以从一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,例如,确定出的需要存储在预设存储器中的参考块可以是参考次数为多次的参考块,即将被待解码块或待解码块的子块参考多次的参考块确定为存储在预设存储器中,便于后续待解码块或待解码块的子块解码时读取参考块的图像数据。需要说明的是,向该预设存储器中存储参考块以及从预设存储器中读取参考块所产生的功耗小于或等于预设功耗阈值,通过采用功耗小的预设存储器读写数据,可以降低视频解码装置的功耗。
306、根据参考块对待解码块或待解码块的子块进行解码。
比如,存储在预设存储器中的参考块被待解码块或待解码块的子块参考的次数可以为多次,因此在进行解码时,存储在预设存储器中的参考块将会被读取多次。当从预设存储器中读取参考块的图像数据后,待解码块或待解码块的子块就可以参考参考块的图像数据,即可以根据读取的参考块的图像数据,对待解码块或待解码块的子块进行解码。
可以理解的是，在本申请实施例中，视频解码装置可以获取视频码流，根据视频码流获取一个或多个参考运动矢量。然后，根据一个或多个参考运动矢量从视频码流的一个或多个图像帧中获取对应的一个或多个参考块；确定一个或多个参考块的参考次数；根据预设功耗阈值以及一个或多个参考块的参考次数，从一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块，并将其存储在预设存储器中，向预设存储器中存储参考块以及从预设存储器中读取参考块所产生的功耗小于或等于预设功耗阈值。之后，根据参考块对待解码块或待解码块的子块进行解码。即，本申请实施例中，通过将确定出的需要存储在预设存储器中的参考块的图像数据存放在功耗较小的预设存储器中，以达到降低视频解码装置功耗的目的。因此，本申请实施例可以降低视频解码装置的功耗。
请参阅图17,图17是本申请实施例提供的在视频解码装置中进行图像处理的方法的第四种流程示意图。该在视频解码装置中进行图像处理的方法可以应用于视频解码装置中。该在视频解码装置中进行图像处理的方法的流程可以包括:
401、获取视频码流。
步骤401的具体实施可参见步骤101的实施例,在此不再赘述。
402、对视频码流进行熵解码,得到一个或多个运动矢量差值(Motion Vector Difference,MVD)。
比如,在获取到视频码流后,可以对视频码流进行熵解码,比如对图像帧的帧头信息、NAL单元的头部信息或条带头进行解码,经过熵解码后,可以得到一个或多个运动矢量差值,同时还可以得到量化后的残差。其中,残差指的是待编码块与编码代价最小的一个或多个块的差值。
比如,在一种实施方式中,402中的对所述视频码流进行熵解码,得到一个或多个运动矢量差值,可以包括:
对所述视频码流进行熵解码,得到一个或多个运动矢量差值以及量化后的第一残差。
比如,通过对视频码流进行熵解码,可以得到一个或多个运动矢量差值,同时还会得到量化后的第一残差。该量化后的第一残差指的是在进行编码时对残差进行正向变换和量化后得到的第一残差,其中,残差可以是待编码块的二维像素减去搜索出的块对应位置的二维像素后得到的差值。
403、根据一个或多个运动矢量差值和对应的运动矢量预测值,获取一个或多个参考运动矢量。
比如,在对视频码流进行熵解码,得到一个或多个运动矢量差值后,可以根据该一个或多个运动矢量差值以及对应的运动矢量预测值,获取到一个或多个参考运动矢量,如将运动矢量差值与运动矢量预测值相加后的和作为参考运动矢量。
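例如，“将运动矢量差值与运动矢量预测值相加得到参考运动矢量”这一步可以用如下几行示意代码表示，其中的函数名与数值均为说明用的假设：

```python
# 示意：参考运动矢量 = 运动矢量差值(MVD) + 运动矢量预测值(MVP)
def reconstruct_motion_vectors(mvds, mvps):
    return [(dx + px, dy + py) for (dx, dy), (px, py) in zip(mvds, mvps)]

mvds = [(-2, 1), (0, -3)]       # 熵解码得到的运动矢量差值（假设值）
mvps = [(5, 5), (8, 2)]         # 对应的运动矢量预测值（假设值）
print(reconstruct_motion_vectors(mvds, mvps))   # [(3, 6), (8, -1)]
```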
404、根据一个或多个参考运动矢量从视频码流的一个或多个图像帧中获取对应的一个或多个参考块。
比如,每个参考运动矢量会对应一个参考块,因此根据一个或多个参考运动矢量可以从视频码流的一个或多个图像帧中确定出对应的一个或多个参考块,从而可以获取到对应的参考块,其中,该参考块为已经解码后的块。
405、确定一个或多个参考块的参考次数。
比如,在获取到图像帧中的参考块后,可以确定一个或多个参考块的参考次数,该参考次数是指参考块被待解码块或待解码块中的子块参考的次数。
比如,图16中通过视频解码装置的相关硬件或软件进行精细块级的分析或预估后,可以判定某些图像帧的某些区域将会被其它图像帧的某些区域在解码时参考多次,或者某些图像帧的某些块将会被其它图像帧的某些块在解码时参考多次,适合存储在省电的诸如Sys$等低功耗存储器以及DRAM中,或者直接存储在诸如SysBuf等低功耗存储器中,使视频解码时数据吞吐尽量维持在预期的低功耗值,提升视频解码系统使用时间且防止视频解码系统过热。
比如,若视频码流解析的更深入,先将解码过程中的熵解码得到的运动矢量差值取出来,还原出参考运动矢量,做到精细的参考运动矢量分析,能更深入地分析邻近几个图像帧中使用了哪些参考图像帧的哪些区域,也就能更精确的分辨出某些图像帧的区域被当成参考区域的次数,当成需要放到低能耗存储器中优先级排序的依据。例如,请参阅图18,图18是本申请实施例提供的精细分析视频码流中多个图像帧中块的参考关系的场景示意图。图18中以H.264视频码流作为示例,对H.264视频码流进行精细分析,通过对条带主体的分析,可以分析出某些块被其它块参考的次数多于其它块被参考的次数。比如,对于图18中的5个图像帧对应的码流,精细分析这5个图像帧对应的视频码流,每个图像帧的每个部分(如每个宏块,也可以称为块)都能通过参考运动矢量分析确定会被其它块参考多少次。
如图18中,图像帧0中的块0、块2、块C的参考次数均为一次,图像帧0中的块8的参考次数为两次,图像帧1中的块6的参考次数为两次,图像帧1中的块4的参考次数为三次,图像帧1中的块9的参考次数为四次,图像帧1中的块5、块A、块D的参考次数均为一次,图像帧2中的块7、块3、块B的参考次数均为一次。
406、若参考次数为多次,则根据预设功耗阈值将参考次数为多次的一个或多个参考块存储在系统高速缓存中,并且存储在动态随机存取内存中。
比如,预设存储器可以包括第一存储器和第二存储器,第一存储器的功耗大于第二存储器的功耗。第一存储器可以包括设置在视频解码装置外部的动态随机存取内存,第二存储器可以包括设置在视频解码装置外部的系统高速缓存。当确定出一个或多个参考块的参考次数后,即确定出一个或多个参考块被待解码块或待解码块中的子块参考的次数后,若参考次数为多次,则可以将参考次数为多次的一个或多个参考块存储在系统高速缓存中,并且存储在动态随机存取内存中。
比如,当预期知道降低多少功耗就能使视频解码系统运作时,就能推算出应该要降低多少动态随机存取内存的带宽。需要降低的动态随机存取内存的带宽在每个图像帧都能经过计算预先得到。通过预先计算出图像帧中的哪些区域(参考块)需要存储在Sys$,才能在后续做运动补偿时尽量满足或低于动态随机存取内存的数据进出量限制,该限制源自于需要降低多少功耗或能耗。
需要说明的是,本申请实施例中,动态随机存取内存的功耗大于系统高速缓存的功耗。在获取到参考次数为多次的一个或多个参考块后,即获取到被待解码块或待解码块中的子块参考多次的一个或多个参考块后,可以将其存储在系统高速缓存中,并且存储在动态随机存取内存中。
如图18所示,由于图像帧0中的块8的参考次数为两次,图像帧1中的块6的参考次数为两次,图像帧1中的块4的参考次数为三次,图像帧1中的块9的参考次数为四次,则将图像帧0中的块8、图像帧1中的块6、图像帧1中的块4和图像帧1中的块9存储在系统高速缓存中,并且存储在动态随机存取内存中。需要说明的是,向动态随机存取内存和系统高速缓存存储以及从动态随机存取内存和系统高速缓存读取参考次数为多次的一个或多个参考块所产生的功耗小于或等于预设功耗阈值,这样可以降低读取数据时的功耗。
407、若参考次数为多次,则根据预设功耗阈值将参考次数为多次的一个或多个参考块存储在系统缓冲存储器中。
比如,预设存储器可以包括第二存储器,第二存储器包括设置在视频解码装置外部的系统缓冲存储器。当确定出一个或多个参考块的参考次数后,即确定出一个或多个参考块被待解码块或待解码块中的 子块参考的次数后,若参考次数为多次,则根据预设功耗阈值将参考次数为多次的一个或多个参考块存储在系统缓冲存储器中。
比如,当预期知道降低多少功耗就能使视频解码系统运作时,就能推算出应该要降低多少动态随机存取内存的带宽。需要降低的动态随机存取内存的带宽在每个图像帧都能经过计算预先得到。通过预先计算出图像帧中的哪些区域(参考块)需要存储在SysBuf,才能在后续做运动补偿时尽量满足或低于第一存储器的数据进出量限制,该限制源自于需要降低多少功耗或能耗。
408、若参考次数为一次,则根据预设功耗阈值将参考次数为一次的一个或多个参考块存储在动态随机存取内存中。
比如,预设存储器可以包括第一存储器,第一存储器可以包括设置在视频解码装置外部的动态随机存取内存。当确定出一个或多个参考块的参考次数后,即确定出一个或多个参考块被待解码块或待解码块中的子块参考的次数后,若参考次数为一次,则根据预设功耗阈值将参考次数为一次的一个或多个参考块存储在动态随机存取内存中,以便于视频解码装置解码时进行读取。通过将参考次数为多次的参考块存储在低功耗存储器中,并将参考次数为一次的参考块存储在动态随机存取内存中,可以从整体上降低读取数据时产生的功耗。
比如,第一存储器可以包括设置在视频解码装置外部的DRAM,第二存储器可以包括设置在视频解码装置外部的系统高速缓存或系统缓冲存储器,即第二存储器可以包括设置在视频解码装置外部的Sys$或SysBuf。当然,第二存储器还可以是其它低功耗存储器等。DRAM的能耗大于Sys$或SysBuf的能耗。假设已经有了数据流动进出DRAM、系统总线、Sys$和SysBuf相关的功耗模型,则可以推算出降低多少功耗或能耗对应降低多少DRAM的数据存取量。根据图像帧或条带的码流头部信息可以判断出哪一些块需要写入Sys$或SysBuf,可以达到解码时的预期功耗降低值。
通过码流细部信息解译出的参考运动矢量的信息来判断哪一些块适合被写入Sys$或SysBuf。通常是参考块被待解码块或待解码块中的子块参考的次数越多,则越适合写入到Sys$或SysBuf,可以用较小的Sys$或SysBuf占用量达到较多的解码功耗降低值。该精细分析的方法只求到达预期功耗/能耗降低量,不追求最佳的功耗/能耗降低量。只要计算出重建面积/区域存储到Sys$或SysBuf会有多少的数据进出减少量,就可以推算出节省多少功耗/能耗。
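作为示意，下面的草图根据“计划存入 Sys$/SysBuf 的参考块及其参考次数”估算可以减少的 DRAM 数据进出量所对应的能耗节省，并与预期的节省目标比较。其中函数名、块大小、能耗系数与目标值均为假设，仅用于说明该推算思路：

```python
# 示意：估算把选定参考块改存到 Sys$/SysBuf 后的能耗节省，并与目标比较。
def estimate_saving(selected_blocks, block_bytes, e_dram_pj, e_sram_pj, target_pj):
    """selected_blocks: {参考块标识: 参考次数}；返回 (节省的能量pJ, 是否达到目标)。"""
    saved = 0.0
    for _, ref_count in selected_blocks.items():
        # 每次读取由 DRAM 改为 SRAM 约节省 (e_dram - e_sram) * 块字节数
        saved += ref_count * block_bytes * (e_dram_pj - e_sram_pj)
    return saved, saved >= target_pj


# 用法示例：3 个被多次参考的 16x16 块（假设每像素1字节，共256字节）
blocks = {"f1_blk9": 4, "f1_blk4": 3, "f0_blk8": 2}
saved_pj, ok = estimate_saving(blocks, 256, e_dram_pj=100.0, e_sram_pj=1.0,
                               target_pj=2.0e5)
print(f"估算节省约 {saved_pj:.0f} pJ，达到目标: {ok}")
```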
请参阅图8,读取SRAM与读取DRAM的能量差异约相差100倍,即读取SRAM的能量远远小于读取DRAM的能量。通过将参考次数为多次的参考块存储在Sys$及DRAM,或者将参考次数为多次的参考块存储在SysBuf(Sys$或SysBuf由多个SRAM构成),当从Sys$以及DRAM,或者从SysBuf读取参考块的图像数据时,整体上可以降低读取数据时的功耗。
409、从预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对待解码块或待解码块中的子块进行解码。
比如,当对待解码块或待解码块中的子块进行解码时,需要参考参考块的图像数据,此时就要读取参考块的图像数据。其中,一个块中可以包括多个子块,多个子块排列成矩形阵列。例如,在读取参考块的图像数据时,若待解码块或待解码块中的子块需要参考的参考块的参考次数为一次时,则直接从第一存储器中读取参考块的图像数据,若待解码块或待解码块中的子块需要参考的参考块的参考次数为多次时,则可以从第一存储器中读取一次,其余几次从Sys$中读取,或者,若待解码块或待解码块中的子块需要参考的参考块的参考次数为多次时,则可以从SysBuf读取。
可以理解的是,比如,当第二存储器为Sys$时,从Sys$中读取的次数可以大于从DRAM中读取的次数,从Sys$中读取的次数可以小于从DRAM中读取的次数,或者从Sys$中读取的次数可以等于从DRAM中读取的次数,具体从DRAM和Sys$中分别读取几次,要根据具体场景进行相应设置,本申请实施例对此不做具体限制。因此,通过将参考块的图像数据的一部分从Sys$中读取,另一部分数据从DRAM中读取,可以降低读取数据的功耗。从图14中可以看出,视频解码装置将大量的DRAM功耗改由Sys$或SysBuf的功耗取代,大大降低功耗。
比如,在一种实施方式中,409中的根据读取的参考块的图像数据对待解码块或待解码块中的子块 进行解码,可以包括:
对所述第一残差进行反量化与反变换,得到第二残差;
根据所述参考运动矢量和参考块,得到所述待解码块或所述待解码块中的子块的预测值;
根据所述第二残差以及所述待解码块或所述待解码块中的子块的预测值,获取所述待解码块解码后的块或所述待解码块中的子块解码后的子块。
比如,在一种实施方式中,409中的根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码,还可以包括:
根据所述待解码块解码后的块或所述待解码块中的子块解码后的子块获取视频流解码数据。
比如，请参阅图19，图19是本申请实施例提供的视频解码装置解码的场景示意图。该视频码流以H.264的视频码流为例进行说明，当对视频码流进行熵解码后，得到一个或多个运动矢量差值以及量化后的第一残差。根据运动矢量差值和对应的运动矢量预测值，可以获取参考运动矢量，这样就能更精细的知道被拿来做运动补偿的参考块。其中，熵解码可以采用一个独立的硬件设计来实现，也可以通过软件的方式来实现。对当前视频码流的解析以及图像缓冲可以通过驱动程序或开放多媒体加速层框架（Open Media Acceleration，OpenMAX）以软件的方式来实现。对第一残差进行反量化与反变换后，可以得到第二残差。根据所述参考运动矢量（参考块与待解码块或待解码块中的子块的相对位移）和参考块，可以得到待解码块或所述待解码块中的子块的预测值。需要说明的是，待解码块或待解码块中的子块的预测值可以通过帧内预测方式或运动补偿方式获取到。关于解码过程中的反量化与反变换、帧内/帧间模式选择、帧内预测、运动补偿和去块效应滤波等可以通过专用集成电路（Application Specific Integrated Circuit，ASIC）来实现。
在得到待解码块或待解码块中的子块的预测值后,将第二残差与待解码块或待解码块中的子块的预测值相加得到待解码块解码后的块或待解码块中的子块解码后的子块(实际值),根据待解码块解码后的块或待解码块中的子块解码后的子块进行区块效应滤波器滤波后,可以得到平滑的视频流解码数据。
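例如，“将第二残差与预测值相加得到解码后的块”这一重建步骤可以用如下示意代码表示。这里省略了真实的反量化、反变换与去块效应滤波，仅示意像素级相加与取值裁剪，数值均为假设：

```python
# 示意：解码后的块 = clip(预测值 + 第二残差)，像素取值裁剪到 [0, 255]
def reconstruct_block(prediction, residual):
    assert len(prediction) == len(residual)
    return [
        [max(0, min(255, p + r)) for p, r in zip(pred_row, res_row)]
        for pred_row, res_row in zip(prediction, residual)
    ]


# 用法示例：2x2 的预测块与第二残差（假设值）
pred = [[100, 102], [98, 255]]
res = [[3, -2], [0, 4]]
print(reconstruct_block(pred, res))   # [[103, 100], [98, 255]]
```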
410、若待解码块或待解码块中的子块的参考次数为多次,则将待解码块解码后的块或待解码块中的子块解码后的子块存储在系统高速缓存中,并且存储在动态随机存取内存中。
比如,当待解码块或待解码块中的子块解码后,若待解码块解码后的块后续会被其它待解码块参考多次,或者,待解码块中的子块解码后的子块后续会被其它待解码块中的子块参考多次,则将待解码块解码后的块或待解码块中的子块解码后的子块存储在系统高速缓存中,并且存储在动态随机存取内存中,以作为其它待解码块或待解码块中的子块解码时的参考块。
411、若待解码块或待解码块中的子块的参考次数为多次,则将待解码块解码后的块或待解码块中的子块解码后的子块存储在系统缓冲存储器中。
比如,当待解码块或待解码块中的子块解码后,若待解码块解码后的块后续会被其它待解码块参考多次,或者,待解码块中的子块解码后的子块后续会被其它待解码块中的子块参考多次,则将待解码块解码后的块或待解码块中的子块解码后的子块存储在系统缓冲存储器中,以作为其它待解码块或待解码块中的子块解码时的参考块。
412、若待解码块或待解码块中的子块的参考次数为一次,则将待解码块解码后的块或待解码块中的子块解码后的子块存储在动态随机存取内存中。
比如,当待解码块或待解码块中的子块解码后,若待解码块解码后的块后续会被其它待解码块参考一次,或者,待解码块中的子块解码后的子块后续会被其它待解码块中的子块参考一次,则将待解码块解码后的块或待解码块中的子块解码后的子块存储在动态随机存取内存中,以作为其它待解码块或待解码块中的子块解码时的参考块。
若该待解码图像帧、待解码条带或待解码区域中还有其它待解码块或其它待解码块的子块需要解码,则对其它待解码块或其它待解码块的子块进行解码。若对待解码图像帧、待解码条带或待解码区域中的所有待解码块或待解码块的子块都已解码完毕,则对其它图像帧、条带或区域进行解码,直至完成对所有需要解码的图像帧、条带或区域的解码。
需要说明的是,图7中在视频解码装置中进行图像处理的方法的流程与图17中在视频解码装置中进行图像处理的方法的流程可以合并在一个系统中,彼此之间并不会相互干扰,也就是说在解码一个视频码流时可能中间切换为另一个流程。比较合理的切换点是在一个新的图像帧或条带开始解码时。
可以理解的是,本申请实施例基于视频解码时可预测数据存取行为(即重复读取的行为),从而实现智能选择数据存储方式,以降低视频解码装置的功耗。可以根据解码时帧参考关系改变图像数据存储的位置,使得存入诸如Sys$或SysBuf等低功耗存储器的参考图像帧的重复读取次数适当增高,以适当降低功耗,保证视频解码装置中进出数据带来的功耗能一直维持预期状态。若诸如Sys$等低功耗存储器同时具有高速带宽,则可以更进一步降低DRAM的带宽。
本申请实施例可以保证视频解码装置的功耗可控,且能让视频解码装置的硬件或软件尽快完成解码工作,充分利用视频解码装置会有多次重复读取参考块的可预期行为来改变所读取数据的存储特性,因为存取数据省电,而使数据进出不会带来功耗瓶颈,而使视频解码装置可以维持其运行速度,同时又降低功耗。读取数据的速度不会受功耗的限制,因此视频解码装置不会过热。另外,Sys$或SysBuf中SRAM在读写时本身的时延就低,这样可以提高处理帧率,降低反应时延。由于可以大幅降低功耗,则可以提高视频解码装置中电池的使用时间,提升用户体验。
可以理解的是,本申请实施例可以根据播放装置长时间播放需求和可预测行为造成的较大功耗,可以选择数据读取的目标位置或属性。比如,将需要重复读取的数据从Sys$和DRAM进行读取,或者从SysBuf读取,而不是全部都是从DRAM读取,由于读取相同的数据,SRAM的功耗远远小于DRAM的功耗,因此本申请实施例可以大大降低读取数据时的功耗。
本申请实施例以视频解码为示例详细说明了如何降低读取数据的功耗。在其它实施方式中,还可以适用于所有需要高带宽但存取数据行为可预测的模块与应用,如视频编码装置,帧频提升(frame rate up conversion)装置等。这些模块与应用的行为通常是可以预测的,如重复读取的次数,通过这些可以预测的行为,可以预先分配相应的存储特性,即将重复读取的数据存放在低功耗的存储器中,例如根据全部帧或部分帧的图像数据的存取次数需求,来对应不同等级存储器的能量消耗,即根据全部图像帧或部分图像帧的图像数据的存取次数需求,来选择对应不同等级的能量消耗,当能量消耗不同时,可以合理分配从Sys$和DRAM读取数据的次数,或者合理分配从SysBuf读取数据的次数。
如，视频编码装置事先解析视频码流也可以确定存取数据的行为，帧频提升装置可以通过简单分析得知哪些区域在处理时会被用到多次，等等。还可以适用于固定的人工智能（Artificial Intelligence，AI）网络行为，AI网络行为重复读取的部分是特征图（feature map）部分，该AI网络行为是可预期的。
请参阅图20,图20为本申请实施例提供的图像处理装置的结构示意图。该图像处理装置500可以包括:获取模块501,第一确定模块502,第二确定模块503,第三确定模块504,解码模块505。
获取模块501,用于获取视频码流;
第一确定模块502,用于从所述视频码流中确定出一个或多个参考位置;
第二确定模块503,用于确定出所述一个或多个参考位置的参考次数;
第三确定模块504,用于根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值;以及
解码模块505,用于根据所述参考位置对待解码对象进行解码。
在一种实施方式中,所述参考位置包括参考图像帧、参考条带或参考区域,所述第一确定模块502可以用于:
根据所述视频码流中图像帧的帧头信息或所述图像帧中一个或多个条带的条带头信息,从所述视频码流中确定一个或多个参考图像帧、参考条带或参考区域。
在一种实施方式中,所述参考位置包括参考图像帧、参考条带或参考区域,所述第二确定模块503可以用于:
通过预设参数确定所述一个或多个参考图像帧、参考条带或参考区域的参考次数,所述预设参数包括以下中的任一项或多项:网络抽象层解析参数、条带头解析参数、参考图像列表修正参数和参考图像帧标记参数。
在一种实施方式中,所述预设存储器包括第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二存储器包括设置在视频解码装置外部的系统高速缓存,所述参考位置包括参考图像帧、参考条带或参考区域,所述第三确定模块504可以用于:
若参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述第二存储器包括设置在视频解码装置外部的系统缓冲存储器,所述参考位置包括参考图像帧、参考条带或参考区域,所述第三确定模块504可以用于:
若参考次数为多次,则根据所述预设功耗阈值将所述参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在所述系统缓冲存储器中。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述参考位置包括参考图像帧、参考条带或参考区域,所述第三确定模块504可以用于:
若参考次数为一次,则根据所述预设功耗阈值将所述参考次数为一次的一个或多个参考图像帧、参考条带或参考区域存储在所述动态随机存取内存中。
在一种实施方式中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述解码模块505可以用于:
从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是所述参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;
若所述待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将所述待解码块解码后的块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述解码模块505可以用于:
从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;
若所述待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将所述待解码块解码后的块存储在所述系统缓冲存储器中。
在一种实施方式中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述解码模块505可以用于:
从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;
若所述待解码图像帧、待解码条带或待解码区域的参考次数为一次,则将所述待解码块解码后的块 存储在所述动态随机存取内存中。
请参阅图21,图21为本申请实施例提供的图像处理装置的另一结构示意图。该图像处理装置600可以包括:第一获取模块601,第二获取模块602,第三获取模块603,第一确定模块604,第二确定模块605,解码模块606。
第一获取模块601,用于获取视频码流;
第二获取模块602,用于根据所述视频码流获取一个或多个参考运动矢量;
第三获取模块603,用于根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
第一确定模块604,用于确定所述一个或多个参考块的参考次数;
第二确定模块605,用于根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值;以及
解码模块606,用于根据所述参考块对待解码块或所述待解码块的子块进行解码。
在一种实施方式中,所述第二获取模块602可以用于:
对所述视频码流进行熵解码,得到一个或多个运动矢量差值;
根据所述一个或多个运动矢量差值和对应的运动矢量预测值,获取所述一个或多个参考运动矢量。
在一种实施方式中,所述预设存储器包括第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二存储器包括设置在视频解码装置外部的系统高速缓存,所述第二确定模块605可以用于:
若所述参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述第二存储器包括设置在视频解码装置外部的系统缓冲存储器,所述第二确定模块605可以用于:
若所述参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考块存储在所述系统缓冲存储器中。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二确定模块605可以用于:
若所述参考次数为一次,则根据所述预设功耗阈值将参考次数为一次的一个或多个参考块存储在所述动态随机存取内存中。
在一种实施方式中,所述解码模块606可以用于:
从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;
若所述待解码块或所述待解码块中的子块的参考次数为多次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述解码模块606可以用于:
从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;
若所述待解码块或所述待解码块中的子块的参考次数为多次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述系统缓冲存储器中。
在一种实施方式中,所述解码模块606可以用于:
从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;
若所述待解码块或所述待解码块中的子块的参考次数为一次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述动态随机存取内存中。
在一种实施方式中,所述第二获取模块602可以用于:
对所述当前视频码流进行熵解码,得到一个或多个运动矢量差值以及量化后的第一残差;
所述解码模块606可以用于:
对所述第一残差进行反量化与反变换,得到第二残差;
根据所述参考运动矢量和参考块,得到所述待解码块或所述待解码块中的子块的预测值;
根据所述第二残差以及所述待解码块或所述待解码块中的子块的预测值,获取所述待解码块解码后的块或所述待解码块中的子块解码后的子块。
在一种实施方式中,所述解码模块606可以用于:
根据所述待解码块解码后的块或所述待解码块中的子块解码后的子块获取视频流解码数据。
在一种实施方式中,所述待解码块或所述待解码块中的子块的预测值通过帧内预测方式或运动补偿方式获取到。
本申请实施例提供一种计算机可读的存储介质,其上存储有计算机程序,当所述计算机程序在计算机上执行时,使得所述计算机执行如本实施例提供的在视频解码装置中进行图像处理的方法中的流程。
本申请实施例还提供一种电子设备,包括存储器,处理器以及视频解码装置,所述处理器通过调用所述存储器中存储的计算机程序,用于执行本实施例提供的在视频解码装置中进行图像处理的方法中的流程。
例如,上述电子设备可以是诸如平板电脑或者智能手机等移动终端。请参阅图22,图22为本申请实施例提供的电子设备的结构示意图。
该电子设备700可以包括视频解码装置701、存储器702、处理器703等部件。本领域技术人员可以理解,图22中示出的电子设备结构并不构成对电子设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
视频解码装置701可以用于对编码的视频图像进行解码,以还原出原始的视频图像。
存储器702可用于存储应用程序和数据。存储器702存储的应用程序中包含有可执行代码。应用程序可以组成各种功能模块。处理器703通过运行存储在存储器702的应用程序,从而执行各种功能应用以及数据处理。
处理器703是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器702内的应用程序,以及调用存储在存储器702内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。
在本实施例中,电子设备中的处理器703会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行代码加载到存储器702中,并由处理器703来运行存储在存储器702中的应用程序,从而执行:
获取视频码流;
从所述视频码流中确定出一个或多个参考位置;
确定所述一个或多个参考位置的参考次数;
根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值;以及
根据所述参考位置对待解码对象进行解码;或者执行:
获取视频码流;
根据所述视频码流获取一个或多个参考运动矢量;
根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
确定所述一个或多个参考块的参考次数;
根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值;以及
根据所述参考块对待解码块或所述待解码块的子块进行解码。
请参阅图23,电子设备700可以包括视频解码装置701、存储器702、处理器703、电池704、输入单元705、输出单元706等部件。
视频解码装置701可以用于对编码的视频图像进行解码,以还原出原始的视频图像。
存储器702可用于存储应用程序和数据。存储器702存储的应用程序中包含有可执行代码。应用程序可以组成各种功能模块。处理器703通过运行存储在存储器702的应用程序,从而执行各种功能应用以及数据处理。
处理器703是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器702内的应用程序,以及调用存储在存储器702内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。
电池704可用于为电子设备的各个部件提供电力支持,从而保障各个部件的正常运行。
输入单元705可用于接收视频图像的已编码的输入视频流,例如可以用于接收需要进行视频解码的视频流。
输出单元706可以用于输出已解码的视频流。
在本实施例中,电子设备中的处理器703会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行代码加载到存储器702中,并由处理器703来运行存储在存储器702中的应用程序,从而执行:
获取视频码流;
从所述视频码流中确定出一个或多个参考位置;
确定所述一个或多个参考位置的参考次数;
根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值;以及
根据所述参考位置对待解码对象进行解码;或者执行:
获取视频码流;
根据所述视频码流获取一个或多个参考运动矢量;
根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
确定所述一个或多个参考块的参考次数;
根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值;以及
根据所述参考块对待解码块或所述待解码块的子块进行解码。
在一种实施方式中,所述参考位置包括参考图像帧、参考条带或参考区域,所述处理器703执行所述从所述视频码流中确定出一个或多个参考位置时,还可以执行:根据所述视频码流中图像帧的帧头信息或所述图像帧中一个或多个条带的条带头信息,从所述视频码流中确定出所述一个或多个参考图像帧、参考条带或参考区域。
在一种实施方式中,所述参考位置包括参考图像帧、参考条带或参考区域,所述处理器703执行所述确定所述一个或多个参考位置的参考次数时,还可以执行:通过预设参数确定出所述一个或多个参考图像帧、参考条带或参考区域的参考次数,所述预设参数包括以下中的任一项或多项:网络抽象层解析 参数、条带头解析参数、参考图像列表修正参数和参考图像帧标记参数。
在一种实施方式中,所述预设存储器包括第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二存储器包括设置在视频解码装置外部的系统高速缓存,所述参考位置包括参考图像帧、参考条带或参考区域,所述处理器703执行所述根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中时,还可以执行:若参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的所述一个或多个参考图像帧、参考条带或参考区域存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述第二存储器包括设置在视频解码装置外部的系统缓冲存储器,所述参考位置包括参考图像帧、参考条带或参考区域;所述处理器703执行所述根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中时,还可以执行:若参考次数为多次,则根据所述预设功耗阈值将所述参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在所述系统缓冲存储器中。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述参考位置包括参考图像帧、参考条带或参考区域;所述处理器703执行所述根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中时,还可以执行:若参考次数为一次,则根据所述预设功耗阈值将所述参考次数为一次的一个或多个参考图像帧、参考条带或参考区域存储在所述动态随机存取内存中。
在一种实施方式中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述处理器703执行所述根据所述参考位置对待解码对象进行解码时,还可以执行:从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是所述参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,在根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;若所述待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将所述待解码块解码后的块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述处理器703执行所述根据所述参考位置对待解码对象进行解码时,还可以执行:从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是所述参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,在根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;若所述待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将所述待解码块解码后的块存储在所述系统缓冲存储器中。
在一种实施方式中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述处理器703执行所述根据所述参考位置对待解码对象进行解码时,还可以执行:从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是所述参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,在根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;若所述待解码图像帧、待解码条带或待解码区域的参考次数为一次,则将所述待解码块解码后的块存储在所述动态随机存取内存中。
在一种实施方式中,所述处理器703执行所述根据所述视频码流获取一个或多个参考运动矢量时, 还可以执行:对所述视频码流进行熵解码,得到一个或多个运动矢量差值;根据所述一个或多个运动矢量差值和对应的运动矢量预测值,获取所述一个或多个参考运动矢量。
在一种实施方式中,所述预设存储器包括第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二存储器包括设置在视频解码装置外部的系统高速缓存,所述处理器703执行所述根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块时,还可以执行:若所述参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述第二存储器包括设置在视频解码装置外部的系统缓冲存储器,所述处理器703执行所述根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块时,还可以执行:若所述参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考块存储在所述系统缓冲存储器中。
在一种实施方式中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述处理器703执行所述根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块时,还可以执行:若所述参考次数为一次,则根据所述预设功耗阈值将参考次数为一次的一个或多个参考块存储在所述动态随机存取内存中。
在一种实施方式中,所述处理器703执行所述根据所述参考块对待解码块或所述待解码块的子块进行解码时,还可以执行:从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;若所述待解码块或所述待解码块中的子块的参考次数为多次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
在一种实施方式中,所述处理器703执行所述根据所述参考块对待解码块或所述待解码块中的子块进行解码时,还可以执行:从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;若所述待解码块或所述待解码块中的子块的参考次数为多次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述系统缓冲存储器中。
在一种实施方式中,所述处理器703执行所述根据所述参考块对待解码块或所述待解码块中的子块进行解码时,还可以执行:从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;若所述待解码块或所述待解码块中的子块的参考次数为一次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述动态随机存取内存中。
在一种实施方式中,所述处理器703执行所述对所述视频码流进行熵解码,得到一个或多个运动矢量差值时,还可以执行:对所述视频码流进行熵解码,得到一个或多个运动矢量差值以及量化后的第一残差。
所述处理器703执行所述根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码时,还可以执行:对所述第一残差进行反量化与反变换,得到第二残差;根据所述参考运动矢量和参考块,得到所述待解码块或所述待解码块中的子块的预测值;根据所述第二残差以及所述待解码块或所述待解码块中的子块的预测值,获取所述待解码块解码后的块或所述待解码块中的子块解码后的子块。
在一种实施方式中,所述处理器703执行所述根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码时,还可以执行:根据所述待解码块解码后的块或所述待解码块中的子块解码后的子块获取视频流解码数据。
在一种实施方式中,所述待解码块或所述待解码块中的子块的预测值通过帧内预测方式或运动补偿 方式获取到。
本申请实施例还提供一种图像处理系统,请参阅图24和图25,图24是本申请实施例提供的图像处理系统的结构示意图。图25是本申请实施例提供的图像处理系统的另一结构示意图。该图像处理系统800包括视频解码装置801、第一存储器802和第二存储器803,其中,第一存储器802的功耗大于第二存储器803的功耗,如第一存储器可以是DRAM,第二存储器803可以是Sys$或SysBuf,第一存储器802中存储参考次数为一次的参考位置,或者存储参考次数为一次和多次的参考位置,第二存储器803中存储参考次数为多次的参考位置。需要说明的是,参考次数为参考位置被待解码对象参考的次数。
在一种实施方式中,参考位置可以包括参考图像帧、参考条带、参考区域或参考块。比如,第一存储器802中可以存储参考次数为一次的参考图像帧、参考条带或参考块,或者存储参考次数为一次和多次的参考图像帧、参考条带、参考区域或参考块,即第一存储器802中可以存储参考次数为一次或多次的参考图像帧、参考条带或参考块,第二存储器803中存储参考次数为多次的参考图像帧、参考条带或参考块。
当视频解码装置801在解码时,视频解码装置801在解码时,从第一存储器802中读取参考次数为一次的参考位置,以及从第二存储器803读取参考次数为多次的参考位置,根据参考位置对待解码对象进行解码。该待解码对象可以包括待解码图像帧中的待解码块、待解码条带中的待解码块、待解码区域中的待解码块或者所述待解码块的子块。
比如,当视频解码装置801在解码时,可以从第一存储器802中读取参考次数为一次的参考图像帧、参考条带或参考块,以及从第二存储器803读取参考次数为多次的参考图像帧、参考条带或参考块,根据参考图像帧对待解码图像帧中的待解码块进行解码,根据参考条带对待解码条带中的待解码块进行解码,根据参考区域对待解码区域中的待解码块进行解码,或者,根据参考块对待解码块或待解码块的子块进行解码。
比如,第二存储器803可以是Sys$,在解码时,需要参考的参考图像帧、参考条带、参考区域或参考块的参考次数为一次时,则直接从第一存储器802中读取参考图像帧、参考条带、参考区域或参考块的图像数据,若需要参考的参考图像帧、参考条带、参考区域或参考块的参考次数为多次时,则对参考图像帧、参考条带、参考区域或参考块进行读取时,可以从第一存储器802中读取一次,其余几次从Sys$中读取。
可以理解的是,从Sys$中读取的次数可以大于从第一存储器802中读取的次数,从Sys$中读取的次数可以小于从第一存储器802中读取的次数,或者从Sys$中读取的次数可以等于从第一存储器802中读取的次数,具体从第一存储器802和Sys$中分别读取几次,要根据具体场景进行相应设置,本申请实施例对此不做具体限制。
再比如,第二存储器803可以是SysBuf,在解码时,需要参考的参考图像帧、参考条带、参考区域或参考块的参考次数为一次时,则直接从第一存储器802中读取参考图像帧、参考条带、参考区域或参考块的图像数据,若需要参考的参考图像帧、参考条带、参考区域或参考块的参考次数为多次时,则对参考图像帧、参考条带、参考区域或参考块进行读取时,可以从SysBuf中读取。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对在视频解码装置中进行图像处理的方法的详细描述,此处不再赘述。
本申请实施例提供的所述图像处理装置与上文实施例中的在视频解码装置中进行图像处理的方法属于同一构思,在所述图像处理装置上可以运行所述在视频解码装置中进行图像处理的方法实施例中提供的任一方法,其具体实现过程详见所述在视频解码装置中进行图像处理的方法实施例,此处不再赘述。
需要说明的是,对本申请实施例所述在视频解码装置中进行图像处理的方法而言,本领域普通技术人员可以理解实现本申请实施例所述在视频解码装置中进行图像处理的方法的全部或部分流程,是可以通过计算机程序来控制相关的硬件来完成,所述计算机程序可存储于一计算机可读取存储介质中,如存储在存储器中,并被至少一个处理器执行,在执行过程中可包括如所述在视频解码装置中进行图像处理的方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储器(ROM,Read Only Memory)、 随机存取记忆体(RAM,Random Access Memory)等。
对本申请实施例的所述图像处理装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中,所述存储介质譬如为只读存储器,磁盘或光盘等。
以上对本申请实施例所提供的一种在视频解码装置中进行图像处理的方法、装置、存储介质、电子设备及系统进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (28)

  1. 一种在视频解码装置中进行图像处理的方法,其中,所述方法包括:
    获取视频码流;
    从所述视频码流中确定一个或多个参考位置;
    确定所述一个或多个参考位置的参考次数;
    根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值;以及
    根据所述参考位置对待解码对象进行解码。
  2. 根据权利要求1所述的在视频解码装置中进行图像处理的方法,其中,所述参考位置包括参考图像帧、参考条带或参考区域,所述从所述视频码流中确定一个或多个参考位置,包括:
    根据所述视频码流中图像帧的帧头信息或所述图像帧中一个或多个条带的条带头信息,从所述视频码流中确定一个或多个参考图像帧、参考条带或参考区域。
  3. 根据权利要求1所述的在视频解码装置中进行图像处理的方法,其中,所述参考位置包括参考图像帧、参考条带或参考区域,所述确定所述一个或多个参考位置的参考次数,包括:
    通过预设参数确定所述一个或多个参考图像帧、参考条带或参考区域的参考次数,所述预设参数包括以下中的任一项或多项:网络抽象层解析参数、条带头解析参数、参考图像列表修正参数和参考图像帧标记参数。
  4. 根据权利要求1所述的在视频解码装置中进行图像处理的方法,其中,所述预设存储器包括第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗。
  5. 根据权利要求4所述的在视频解码装置中进行图像处理的方法,其中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二存储器包括设置在视频解码装置外部的系统高速缓存,所述参考位置包括参考图像帧、参考条带或参考区域;
    所述根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,包括:
    若参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在所述系统高速缓存中,并将其存储在所述动态随机存取内存中。
  6. 根据权利要求4所述的在视频解码装置中进行图像处理的方法,其中,所述第二存储器包括设置在视频解码装置外部的系统缓冲存储器,所述参考位置包括参考图像帧、参考条带或参考区域;
    所述根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,包括:
    若参考次数为多次,则根据所述预设功耗阈值将所述参考次数为多次的一个或多个参考图像帧、参考条带或参考区域存储在所述系统缓冲存储器中。
  7. 根据权利要求4所述的在视频解码装置中进行图像处理的方法,其中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述参考位置包括参考图像帧、参考条带或参考区域;
    所述根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,包括:
    若参考次数为一次,则根据所述预设功耗阈值将所述参考次数为一次的一个或多个参考图像帧、参考条带或参考区域存储在所述动态随机存取内存中。
  8. 根据权利要求5所述的在视频解码装置中进行图像处理的方法,其中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述根据所述参考位置对待解码对象进行解码,包括:
    从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行 解码,若读取的是所述参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;
    若所述待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将所述待解码块解码后的块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
  9. 根据权利要求6所述的在视频解码装置中进行图像处理的方法,其中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述根据所述参考位置对待解码对象进行解码,包括:
    从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;
    若所述待解码图像帧、待解码条带或待解码区域的参考次数为多次,则将所述待解码块解码后的块存储在所述系统缓冲存储器中。
  10. 根据权利要求7所述的在视频解码装置中进行图像处理的方法,其中,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块或待解码区域中的待解码块,所述根据所述参考位置对待解码对象进行解码,包括:
    从所述预设存储器中读取所需要的参考图像帧、参考条带或参考区域的图像数据,若读取的是参考图像帧的图像数据,则根据读取的所述参考图像帧的图像数据对所述待解码图像帧中的待解码块进行解码,若读取的是参考条带的图像数据,则根据读取的所述参考条带的图像数据对所述待解码条带中的待解码块进行解码,若读取的是参考区域的图像数据,则根据读取的所述参考区域的图像数据对所述待解码区域中的待解码块进行解码;
    若所述待解码图像帧、待解码条带或待解码区域的参考次数为一次,则将所述待解码块解码后的块存储在所述动态随机存取内存中。
  11. 一种在视频解码装置中进行图像处理的方法,其中,所述方法包括:
    获取视频码流;
    根据所述视频码流获取一个或多个参考运动矢量;
    根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
    确定所述一个或多个参考块的参考次数;
    根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值;以及
    根据所述参考块对待解码块或所述待解码块中的子块进行解码。
  12. 根据权利要求11所述的在视频解码装置中进行图像处理的方法,其中,所述根据所述视频码流获取一个或多个参考运动矢量,包括:
    对所述视频码流进行熵解码,得到一个或多个运动矢量差值;
    根据所述一个或多个运动矢量差值和对应的运动矢量预测值,获取所述一个或多个参考运动矢量。
  13. 根据权利要求12所述的在视频解码装置中进行图像处理的方法,其中,所述预设存储器包括第一存储器和第二存储器,所述第一存储器的功耗大于所述第二存储器的功耗。
  14. 根据权利要求13所述的在视频解码装置中进行图像处理的方法,其中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述第二存储器包括设置在视频解码装置外部的系统高速缓存,所述根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确 定出需要存储在预设存储器中的一个或多个参考块,包括:
    若所述参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
  15. 根据权利要求13所述的在视频解码装置中进行图像处理的方法,其中,所述第二存储器包括设置在视频解码装置外部的系统缓冲存储器,所述根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,包括:
    若所述参考次数为多次,则根据所述预设功耗阈值将参考次数为多次的一个或多个参考块存储在所述系统缓冲存储器中。
  16. 根据权利要求13所述的在视频解码装置中进行图像处理的方法,其中,所述第一存储器包括设置在视频解码装置外部的动态随机存取内存,所述根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,包括:
    若所述参考次数为一次,则根据所述预设功耗阈值将参考次数为一次的一个或多个参考块存储在所述动态随机存取内存中。
  17. 根据权利要求14所述的在视频解码装置中进行图像处理的方法,其中,所述根据所述参考块对待解码块或所述待解码块中的子块进行解码,包括:
    从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;
    若所述待解码块或所述待解码块中的子块的参考次数为多次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述系统高速缓存中,并且存储在所述动态随机存取内存中。
  18. 根据权利要求15所述的在视频解码装置中进行图像处理的方法,其中,所述根据所述参考块对待解码块或所述待解码块中的子块进行解码,包括:
    从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;
    若所述待解码块或所述待解码块中的子块的参考次数为多次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述系统缓冲存储器中。
  19. 根据权利要求16所述的在视频解码装置中进行图像处理的方法,其中,所述根据所述参考块对待解码块或所述待解码块中的子块进行解码,包括:
    从所述预设存储器中读取所需要的参考块的图像数据,并根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码;
    若所述待解码块或所述待解码块中的子块的参考次数为一次,则将所述待解码块解码后的块或所述待解码块中的子块解码后的子块存储在所述动态随机存取内存中。
  20. 根据权利要求17至19任一项所述的在视频解码装置中进行图像处理的方法,其中,所述对所述视频码流进行熵解码,得到一个或多个运动矢量差值,包括:
    对所述视频码流进行熵解码,得到一个或多个运动矢量差值以及量化后的第一残差;
    所述根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码,包括:
    对所述第一残差进行反量化与反变换,得到第二残差;
    根据所述参考运动矢量和参考块,得到所述待解码块或所述待解码块中的子块的预测值;
    根据所述第二残差以及所述待解码块或所述待解码块中的子块的预测值,获取所述待解码块解码后的块或所述待解码块中的子块解码后的子块。
  21. 根据权利要求20所述的在视频解码装置中进行图像处理的方法,其中,所述根据读取的参考块的图像数据对所述待解码块或所述待解码块中的子块进行解码,还包括:
    根据所述待解码块解码后的块或所述待解码块中的子块解码后的子块获取视频流解码数据。
  22. 根据权利要求20所述的在视频解码装置中进行图像处理的方法,其中,所述待解码块或所述待解码块中的子块的预测值通过帧内预测方式或运动补偿方式获取到。
  23. 一种在视频解码装置中进行图像处理的装置,其中,所述装置包括:
    获取模块,用于获取视频码流;
    第一确定模块,用于从所述视频码流中确定一个或多个参考位置;
    第二确定模块,用于确定所述一个或多个参考位置的参考次数;
    第三确定模块,用于根据预设功耗阈值以及所述一个或多个参考位置的参考次数,从所述一个或多个参考位置中确定出需要存储在预设存储器中的参考位置,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考位置以及从所述预设存储器中读取所述参考位置所产生的功耗小于或等于所述预设功耗阈值;以及
    解码模块,用于根据所述参考位置对待解码对象进行解码。
  24. 一种在视频解码装置中进行图像处理的装置,其中,所述装置包括:
    第一获取模块,用于获取视频码流;
    第二获取模块,用于根据所述视频码流获取一个或多个参考运动矢量;
    第三获取模块,用于根据所述一个或多个参考运动矢量从所述视频码流的一个或多个图像帧中获取对应的一个或多个参考块;
    第一确定模块,用于确定所述一个或多个参考块的参考次数;
    第二确定模块,用于根据预设功耗阈值以及所述一个或多个参考块的参考次数,从所述一个或多个参考块中确定出需要存储在预设存储器中的一个或多个参考块,并将其存储在所述预设存储器中,向所述预设存储器中存储所述参考块以及从所述预设存储器中读取所述参考块所产生的功耗小于或等于所述预设功耗阈值;以及
    解码模块,用于根据所述参考块对待解码块或所述待解码块的子块进行解码。
  25. 一种计算机可读的存储介质,其上存储有计算机程序,其中,当所述计算机程序在计算机上执行时,使得所述计算机执行如权利要求1至10或者11至22中任一项所述的方法。
  26. 一种电子设备,包括存储器,处理器以及视频解码装置,其中,所述处理器通过调用所述存储器中存储的计算机程序,以执行如权利要求1至10或者11至22中任一项所述的方法。
  27. 一种图像处理系统，其中，包括视频解码装置、第一存储器和第二存储器，所述第一存储器的功耗大于所述第二存储器的功耗，所述第一存储器中存储参考次数为一次的参考位置，或者存储参考次数为一次和多次的参考位置，所述第二存储器中存储参考次数为多次的参考位置，所述视频解码装置在解码时，从所述第一存储器中读取参考次数为一次的参考位置，以及从所述第二存储器读取参考次数为多次的参考位置，根据所述参考位置对待解码对象进行解码。
  28. 根据权利要求27所述的图像处理系统,其中,所述参考位置包括参考图像帧、参考条带、参考区域或参考块,所述待解码对象包括待解码图像帧中的待解码块、待解码条带中的待解码块、待解码区域中的待解码块或者所述待解码块的子块。
PCT/CN2022/076367 2021-04-01 2022-02-15 在视频解码装置中进行图像处理的方法、装置及系统 WO2022206199A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110357601.8A CN115190302A (zh) 2021-04-01 2021-04-01 在视频解码装置中进行图像处理的方法、装置及系统
CN202110357601.8 2021-04-01

Publications (1)

Publication Number Publication Date
WO2022206199A1 true WO2022206199A1 (zh) 2022-10-06

Family

ID=83455566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/076367 WO2022206199A1 (zh) 2021-04-01 2022-02-15 在视频解码装置中进行图像处理的方法、装置及系统

Country Status (2)

Country Link
CN (1) CN115190302A (zh)
WO (1) WO2022206199A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013001796A1 (en) * 2011-06-30 2013-01-03 Panasonic Corporation Methods and apparatuses for encoding video using adaptive memory management scheme for reference pictures
US20150220135A1 (en) * 2012-10-17 2015-08-06 Huawei Technologies Co.,Ltd. Method for reducing power consumption of memory system, and memory controller
WO2016082205A1 (zh) * 2014-11-28 2016-06-02 华为技术有限公司 一种多级缓存的功耗控制方法、装置及设备
CN106717005A (zh) * 2014-09-23 2017-05-24 三星电子株式会社 根据参考频率控制参考图像数据的视频编码/解码方法和设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2888288B2 (ja) * 1996-10-03 1999-05-10 日本電気株式会社 画像符号化装置
CN101527849B (zh) * 2009-03-30 2011-11-09 清华大学 集成视频解码器的存储系统
CN102197652B (zh) * 2009-10-19 2013-09-11 松下电器产业株式会社 解码装置、解码方法、程序以及集成电路
JP6410495B2 (ja) * 2014-07-07 2018-10-24 ルネサスエレクトロニクス株式会社 画像符号化装置、画像復号装置、および画像通信システム
US20170006303A1 (en) * 2015-06-30 2017-01-05 Intel Corporation Method and system of adaptive reference frame caching for video coding
CN106355545B (zh) * 2015-07-16 2019-05-24 浙江大华技术股份有限公司 一种数字图像几何变换的处理方法及装置
CN105263022B (zh) * 2015-09-21 2018-03-02 山东大学 一种针对hevc视频编码的多核混合存储管理方法
CN111355962A (zh) * 2020-03-10 2020-06-30 珠海全志科技股份有限公司 适用于多参考帧的视频解码高速缓存方法、计算机装置及计算机可读存储介质

Also Published As

Publication number Publication date
CN115190302A (zh) 2022-10-14

Similar Documents

Publication Publication Date Title
JP7492051B2 (ja) クロマブロック予測方法及び装置
US9210421B2 (en) Memory management for video decoding
KR101177666B1 (ko) 디코딩된 픽처의 지능적 버퍼링
CN110740318A (zh) 用于视频处理和视频译码的自动自适应长期参考帧选择
KR102616713B1 (ko) 이미지 예측 방법, 장치 및 시스템, 디바이스 및 저장 매체
US20180184089A1 (en) Target bit allocation for video coding
TW202145794A (zh) 幀間預測方法、編碼器、解碼器以及電腦儲存媒介
WO2020143585A1 (zh) 视频编码器、视频解码器及相应方法
KR20230162989A (ko) 멀티미디어 데이터 프로세싱 방법, 장치, 디바이스, 컴퓨터-판독가능 저장 매체, 및 컴퓨터 프로그램 제품
US9363523B2 (en) Method and apparatus for multi-core video decoder
TW202201958A (zh) 幀間預測方法、編碼器、解碼器以及電腦儲存媒介
WO2022206199A1 (zh) 在视频解码装置中进行图像处理的方法、装置及系统
US20140105306A1 (en) Image processing apparatus and image processing method
CN112422983A (zh) 通用多核并行解码器系统及其应用
WO2022206217A1 (zh) 在视频编码装置中进行图像处理的方法、装置、介质及系统
WO2020037501A1 (zh) 码率分配方法、码率控制方法、编码器和记录介质
WO2018205781A1 (zh) 一种实现运动估计的方法及电子设备
WO2022227082A1 (zh) 块划分方法、编码器、解码器以及计算机存储介质
US20110051815A1 (en) Method and apparatus for encoding data and method and apparatus for decoding data
WO2020143684A1 (zh) 图像预测方法、装置、设备、系统及存储介质
WO2022206166A1 (zh) 在视频编码装置中进行图像处理的方法、装置及系统
CA2862701A1 (en) Video encoding method, device, and program
WO2022206272A1 (zh) 图像处理方法、装置、存储介质及电子设备
US20240022737A1 (en) Image processing method, non-transitory storage medium and electronic device
JP2024535963A (ja) マルチメディアデータ処理方法および装置、コンピュータデバイス、コンピュータ可読記憶媒体、並びに、コンピュータプログラム製品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778384

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22778384

Country of ref document: EP

Kind code of ref document: A1