
CN114554220B - A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features - Google Patents

A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features

Info

Publication number: CN114554220B
Application number: CN202210038155.9A
Authority: CN (China)
Prior art keywords: video, foreground, snapshot, abstract, foreground target
Legal status: Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN114554220A (en)
Inventors: 黄宏博, 陈伟骏, 孙牧野, 李萌
Current Assignee: Beijing Information Science and Technology University
Original Assignee: Beijing Information Science and Technology University
Application filed by Beijing Information Science and Technology University
Priority to CN202210038155.9A (priority and filing date: 2022-01-13)
Publication of CN114554220A: 2022-05-27
Application granted; publication of CN114554220B: 2023-07-28


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124: Quantisation
    • H04N19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/142: Detection of scene cut or scene change
    • H04N19/20: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H04N19/23: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
    • H04N19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/87: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
    • H04N19/90: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fixed-scene video overlimit compression and decoding method based on abstract features, comprising the following steps: 1) extracting the background image of the original video with a background modeling method and compression-encoding it; 2) extracting abstract features from the foreground targets with a foreground target extraction module comprising instance segmentation, keypoint detection, and similar algorithms; 3) taking a snapshot of each foreground target and compressing the snapshot; 4) packaging the video background compressed data, the foreground target abstract features, and the snapshot compressed data; 5) decompressing the video compressed data by pre-decoding; 6) feeding the foreground target abstract features and the foreground target snapshot into the trained generator of a generative adversarial network; 7) fusing each frame's generated decoded foreground target image with the background image; 8) reconstructing the fused video frames to obtain the decoded video. The invention achieves an extremely high compression ratio for fixed-scene video, significantly improving storage efficiency and extending the storage time of surveillance video.

Description

A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features

Technical Field

The invention relates to the technical field of deep learning for computer vision, and in particular to a fixed-scene video overlimit compression and decoding method based on abstract features.

Background Art

Common compression coding of video data mainly relies on low-level features such as texture, edges, and the motion of image blocks to remove redundant information, and does not fully exploit the high-level abstract features contained in the video content. The rapid development of deep learning in computer vision has made high-level abstract understanding of images and videos technically feasible. Supported by big data and high-performance parallel computing, deep convolutional neural networks have revolutionized the extraction of high-level features from images and videos. Unlike traditional hand-crafted image feature extraction, convolutional neural networks can automatically learn more expressive high-level features from large datasets. These high-level features play a crucial role in image understanding and video structuring. By exploiting the high-level feature extraction capability of deep convolutional neural network models on widely available video big data, extracting the more expressive high-level abstract feature information in a video and removing the large amount of abstract redundancy it contains can greatly improve video compression performance, reduce storage space and transmission bandwidth, and open new avenues for better persistent storage and transmission of video.

Therefore, how to provide a video compression method that improves the compression ratio by extracting high-level abstract feature information from video is an urgent problem for those skilled in the art.

Summary of the Invention

(1) Technical Problem Solved

Aiming at the deficiencies of the prior art, the present invention provides a fixed-scene video overlimit compression and decoding method based on abstract features, which greatly reduces storage space by extracting and storing high-level abstract feature information from the video, thereby solving the above technical problem.

(2) Technical Solution

To achieve the above object, the present invention provides the following technical solution: a fixed-scene video overlimit compression and decoding method based on abstract features, comprising an encoder and a decoder. The method comprises the following steps:

1. Video Compression

The original video is split into image frames and fed to the encoder for processing. The encoder consists of two modules: background modeling and foreground target extraction.

The background modeling module uses a background modeling algorithm based on a Gaussian mixture model to perform foreground subtraction on each video frame, producing a background image. After all video frames have been processed, the union of the multi-frame background images is taken to obtain a single background image, which is then subjected to discrete cosine transform, quantization, and entropy coding to obtain the video background compressed data.
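For illustration only, the following minimal Python sketch approximates this module with OpenCV's MOG2 Gaussian-mixture background subtractor; JPEG encoding stands in for the DCT, quantization, and entropy-coding chain. The function names, history length, and quality setting are assumptions, not the patented implementation.

```python
# Illustrative sketch: Gaussian-mixture background modeling and compression.
import cv2

def extract_and_compress_background(video_path, bg_path="background.jpg"):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        subtractor.apply(frame)      # match pixels, update mixture weights
    cap.release()
    background = subtractor.getBackgroundImage()   # single background image
    # JPEG encoding = DCT + quantization + entropy coding of the background.
    cv2.imwrite(bg_path, background, [cv2.IMWRITE_JPEG_QUALITY, 90])
    return bg_path
```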

The foreground target extraction module consists of a convolutional-neural-network-based instance segmentation model and a keypoint detection model. It performs object instance segmentation and keypoint detection on the image frames to obtain the abstract features of the foreground targets. The abstract features of a foreground target comprise its shape feature and its keypoint feature.
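As a non-authoritative sketch, off-the-shelf torchvision models can play the roles of the two networks. The patent names no specific models, so Mask R-CNN (instance masks) and Keypoint R-CNN (keypoints) here are stand-ins, and the score threshold is illustrative.

```python
# Illustrative sketch: per-frame foreground abstract-feature extraction.
import torch
from torchvision.models.detection import (maskrcnn_resnet50_fpn,
                                          keypointrcnn_resnet50_fpn)
from torchvision.transforms.functional import to_tensor

seg_model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
kpt_model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def extract_abstract_features(frame_rgb, score_thresh=0.7):
    """Return per-target dicts (box, mask = shape feature, score) plus keypoints."""
    x = [to_tensor(frame_rgb)]
    seg = seg_model(x)[0]
    kpt = kpt_model(x)[0]
    targets = []
    for box, mask, score in zip(seg["boxes"], seg["masks"], seg["scores"]):
        if score < score_thresh:
            continue
        targets.append({"box": box.tolist(),
                        "mask": (mask[0] > 0.5).numpy(),  # frame-sized binary mask
                        "score": float(score)})
    keypoints = kpt["keypoints"][kpt["scores"] > score_thresh]  # (N, 17, 3)
    return targets, keypoints
```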

After all video frames have been processed, inter-frame target matching is performed using a method based on an IOU threshold over target detection boxes, yielding the frame-to-frame correspondence of foreground targets; a snapshot is then extracted for each foreground target. The snapshot extraction algorithm is as follows: among the multi-frame shape features of each foreground target, only the shape feature of the frame with the highest confidence output by the instance segmentation model is retained; using this shape feature, the image of the foreground target is cut out of the original video frame to obtain the target's snapshot. The snapshot is subjected to discrete cosine transform, quantization, and entropy coding to obtain the foreground target snapshot compressed data. The purpose of extracting a snapshot is to preserve the detailed appearance of the foreground target, such as its color and texture.
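A minimal sketch of the snapshot step follows, assuming a hypothetical track record structure (frame_idx, box, mask, score) produced by the matching step; JPEG again stands in for the DCT, quantization, and entropy-coding chain.

```python
# Illustrative sketch: keep the highest-confidence detection of a tracked
# target, cut it out of that frame with its binary mask, and JPEG-compress it.
import cv2
import numpy as np

def extract_snapshot(track, frames, out_dir="."):
    best = max(track, key=lambda rec: rec["score"])   # highest-confidence frame
    i = best["frame_idx"]
    x0, y0, x1, y1 = (int(v) for v in best["box"])
    mask = best["mask"].astype(np.uint8)              # frame-sized 0/1 mask
    cutout = cv2.bitwise_and(frames[i], frames[i], mask=mask)
    crop = cutout[y0:y1, x0:x1]
    # The file name encodes the snapshot's spatio-temporal information.
    name = f"{out_dir}/{i}_{x0}_{y0}_{x1}_{y1}.jpg"
    cv2.imwrite(name, crop, [cv2.IMWRITE_JPEG_QUALITY, 90])
    return name
```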

Finally, the foreground target abstract features, the snapshot compressed data, and the background compressed data are compressed and packaged together to obtain the video compressed data. Video compression is then complete.

In the encoder, the background modeling module encodes the background of the original video into a single compressed image, removing the redundant background information; by extracting abstract features and snapshots of the foreground targets, the foreground target extraction module stores, for each foreground target in the original video, only its multi-frame abstract features and a single frame of compressed snapshot data, removing the redundant foreground information. Compared with traditional video compression coding, the coding scheme of the present invention greatly reduces the amount of data that must be stored, thereby achieving overlimit compression.

2. Video Pre-decoding

When a user needs to watch the video, video pre-decoding is performed first. The video compressed data packaged by the encoder is decompressed to recover the foreground target abstract features, the foreground target snapshots, and the video background image.

3. Video Decoding

The decoder of the present invention consists of a convolutional neural network model based on the generative adversarial network (GAN) architecture, comprising a generator and a discriminator. The generator takes the foreground target snapshot and the foreground target abstract features as input and outputs the decoded foreground target image. The discriminator assists the generator during training to improve the quality of the generated images: its input is either the decoded foreground target image produced by the generator or the real foreground target image from the video frame, and its output is a value between 0 and 1 representing the discriminator's judgment of whether the input image is generated (0) or real (1).

(1) Decoder Training Process

The objective function of the training process is

$$L = L_{GAN} + L_{L1} + L_{VGG}$$

where:

$L_{GAN}$ is the generative adversarial loss. Let $I_S$ and $I_t$ denote the foreground target snapshot and the real foreground target image to be generated, respectively, and let $R_S$ and $R_t$ be the response maps generated from the keypoints of $I_S$ and $I_t$, which are fed to the generator. With $\hat{I}_t = G(I_S, R_S, R_t, z)$ the decoded foreground target image produced by the generator and $z$ random noise, the adversarial loss takes the standard conditional form

$$L_{GAN} = \mathbb{E}\big[\log D(I_S, R_S, I_t, R_t)\big] + \mathbb{E}_z\big[\log\big(1 - D(I_S, R_S, \hat{I}_t, R_t)\big)\big]$$

$L_{L1}$ is the L1 loss, the minimum absolute error between the image generated by the generator and the real image:

$$L_{L1} = \big\lVert \hat{I}_t - I_t \big\rVert_1$$

$L_{VGG}$ is the perceptual loss: the decoded foreground target image produced by the generator and the real foreground target image are both fed through a publicly available pre-trained VGG network $\phi$, and the least-square difference between their deep feature maps is computed:

$$L_{VGG} = \big\lVert \phi(\hat{I}_t) - \phi(I_t) \big\rVert_2^2$$
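By way of a non-authoritative sketch, the three terms might be wired up as follows in PyTorch. Here generator, discriminator, and vgg_features are assumed callables (the last can be built from torchvision's pretrained VGG), and the noise dimension is illustrative; this is a sketch of the loss computation, not the claimed model.

```python
# Illustrative sketch: the decoder's combined training loss.
import torch
import torch.nn.functional as F

def decoder_losses(generator, discriminator, vgg_features,
                   snap, resp_snap, resp_tgt, real):
    z = torch.randn(snap.size(0), 64, device=snap.device)  # random noise z
    fake = generator(snap, resp_snap, resp_tgt, z)          # decoded image

    # Adversarial term: D sees snapshot, response maps, and an image, and
    # outputs a value in (0, 1): 1 for real frames, 0 for generated ones.
    d_real = discriminator(snap, resp_snap, real, resp_tgt)
    d_fake = discriminator(snap, resp_snap, fake, resp_tgt)
    loss_gan = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
                F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))

    loss_l1 = F.l1_loss(fake, real)                 # pixel-level L1 term
    loss_vgg = F.mse_loss(vgg_features(fake),       # perceptual term on deep
                          vgg_features(real))       # VGG feature maps
    return loss_gan + loss_l1 + loss_vgg
```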

After training, the decoder only needs to retain the generator.

(2) Decoder Decoding Process

The multi-frame abstract features and the snapshot of each foreground target are read and fed to the generator in the decoder. The generator model obtains the target's pose, skeleton, and similar information from the multi-frame abstract features, obtains the target's color, texture, and similar information from the snapshot, and fuses this information to generate the decoded foreground target image.

The video background image is read, and all generated decoded foreground target images are fused with the background image to obtain the reconstructed video frames. All reconstructed video frames are then merged to obtain the decoded video.

(3) Beneficial Effects

Compared with the prior art, the present invention provides a fixed-scene video overlimit compression and decoding method based on abstract features, with the following beneficial effects: the method achieves a very high compression ratio for fixed-scene video, greatly saving storage resources. Experiments show that, for fixed-scene videos of different lengths and with different numbers of targets, the compressed data stored by this method occupies only 1/40 to 1/3 of the size of the same video encoded with H.264, a compression ratio beyond that of traditional video compression coding. The invention can be applied to all kinds of intelligent surveillance systems, significantly extending the storage time of surveillance video, and the target abstract features extracted during compression can further be used for abnormal behavior detection, traffic flow monitoring, and the like.

Brief Description of the Drawings

FIG. 1 is a framework diagram of the abstract-feature-based fixed-scene video overlimit compression and decoding method proposed by the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

The overall structure of the fixed-scene video overlimit compression and decoding method proposed by the present invention is shown in FIG. 1. It consists mainly of two parts: an encoder and a decoder. During compression, the original video is fed into the video encoder to obtain the video compressed data; during decoding, the video compressed data is first pre-decoded and then fed into the decoder to generate the decoded video.

1. Video Compression Steps

Step 1) Initialize the convolutional neural networks: load the instance segmentation model and the keypoint detection model onto the GPU.

Step 2) Initialize the Gaussian mixture background model.

Step 3) Read the i-th frame of the original video.

Step 4) Feed the video frame into the Gaussian mixture background model, perform matching and model weight updates, and obtain the Gaussian background modeling result bg_i.

Step 5) Use the instance segmentation model to perform instance segmentation on the current video frame, obtaining the instance segmentation results of m foreground targets S_i = {box_j, mask_j | j = 1, 2, …, m}, where box_j is the rectangular detection box (x_min, y_min, x_max, y_max) of the j-th foreground target in the video frame and mask_j is the mask of the j-th foreground target: a binary image with the same height and width as the video frame, equal to 1 in the region where the target appears and 0 elsewhere. In the subsequent steps, the foreground target detection box refers to box_j of this step, the shape feature of a foreground target refers to mask_j of this step, and the spatio-temporal information of a foreground target refers to the current frame number i together with box_j of this step, i.e. (i, x_min, y_min, x_max, y_max).

Step 6) Use the keypoint detection model to perform keypoint detection on each detected foreground target, obtaining the target's keypoint coordinates (x0, y0, x1, y1, …).

Step 7) Repeat steps 3 to 6 until all video frames have been processed.

Step 8) Read the spatio-temporal information (i, x_min, y_min, x_max, y_max) of the foreground targets detected by the instance segmentation model in step 5). Perform inter-frame foreground target matching using the method based on an IOU threshold over target detection boxes, obtaining multiple matching lists; each list contains the multi-frame spatio-temporal information of one foreground target, arranged in temporal order. For example, if p targets appear in the video, appearing in q_1, q_2, …, q_p frames respectively, then p matching lists of lengths q_1, q_2, …, q_p are obtained, where each entry is that target's spatio-temporal information in a particular frame.
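A minimal sketch of this matching step, assuming per-frame detection records with hypothetical frame_idx and box fields: a detection joins the track whose most recent box overlaps it best above the threshold, otherwise it starts a new track.

```python
# Illustrative sketch: inter-frame matching by IOU threshold over boxes.
def iou(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def match_targets(detections_per_frame, thresh=0.5):
    """Return one matching list per foreground target, in temporal order."""
    tracks = []
    for frame_dets in detections_per_frame:
        for det in frame_dets:
            candidates = [t for t in tracks
                          if t[-1]["frame_idx"] < det["frame_idx"]]
            best = max(candidates,
                       key=lambda t: iou(t[-1]["box"], det["box"]),
                       default=None)
            if best is not None and iou(best[-1]["box"], det["box"]) >= thresh:
                best.append(det)      # the same target continues in this frame
            else:
                tracks.append([det])  # a new foreground target appears
    return tracks
```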

Step 9) Take a snapshot of each foreground target as follows: according to the foreground target spatio-temporal information in each matching list of step 8), read the multi-frame shape features of each foreground target, keep only the shape feature of the frame with the highest confidence output by the instance segmentation model, and use that shape feature to cut the image of the foreground target out of the original video frame, obtaining the target's snapshot. Apply discrete cosine transform, quantization, and entropy coding to the snapshot to obtain the foreground target snapshot compressed data, and save it using the snapshot's spatio-temporal information as the file name (i_s, x_min_s, y_min_s, x_max_s, y_max_s.jpg).

Step 10) Merge each foreground target's snapshot file name (i_s, x_min_s, y_min_s, x_max_s, y_max_s.jpg) with its multi-frame spatio-temporal information (i, x_min, y_min, x_max, y_max) and multi-frame keypoint coordinates (x0, y0, x1, y1, …), and write the result to a CSV file: the foreground target abstract feature file. At this point, only a single-frame snapshot plus multi-frame abstract features are retained for each foreground target.
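A minimal sketch of this packing step; the column layout is an assumption, since the patent specifies only the contents of the file.

```python
# Illustrative sketch: one CSV row per (target, frame) with snapshot name,
# spatio-temporal information, and keypoint coordinates.
import csv

def write_abstract_features(tracks, snapshot_names, path="features.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["snapshot", "frame", "x_min", "y_min",
                         "x_max", "y_max", "keypoints"])
        for track, snap in zip(tracks, snapshot_names):
            for rec in track:
                x0, y0, x1, y1 = rec["box"]
                kps = " ".join(str(v) for v in rec["keypoints"])
                writer.writerow([snap, rec["frame_idx"], x0, y0, x1, y1, kps])
```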

Step 11) Take the union of the per-frame background image sequence {bg_i | i = 1, 2, …, n} obtained in step 4), bg = bg_1 ∪ bg_2 ∪ bg_3 ∪ ⋯ ∪ bg_n, to obtain the complete video background image; then apply discrete cosine transform, quantization, and entropy coding to obtain the video background compressed data.

Step 12) Compress and package the foreground target snapshot compressed data from step 9), the foreground target abstract feature file from step 10), and the video background compressed data from step 11) as a whole to obtain the video compressed data.

2. Video Decoding Steps

Step 1) Pre-decode: decompress the video compressed data and recover the foreground target abstract features, the foreground target snapshots, and the video background image.

Step 2) Initialize the convolutional neural network: load the trained generator network model onto the GPU.

Step 3) Read the foreground target abstract feature file to obtain each foreground target's snapshot file name and corresponding multi-frame abstract features.

Step 4) Feed the foreground target snapshot, the abstract features of the snapshot, and the abstract features of the decoded foreground target image to be generated into the generator model to generate the decoded foreground target image.

Step 5) Repeat steps 3 to 4 until the foreground target abstract feature file has been fully read and all foreground targets have been decoded.

Step 6) Read the video background image.

Step 7) Fuse each frame's decoded foreground target images with the video background image to generate the reconstructed video frame; once every frame of the video has been reconstructed, merge all reconstructed video frames to obtain the decoded video.
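A minimal sketch of this fusion and reconstruction loop, assuming the decoded foreground images and their boxes have been collected per frame; the decoded mapping structure and the mp4v codec choice are assumptions.

```python
# Illustrative sketch: paste decoded foreground targets onto the background
# at their recorded boxes, then write the frames out as the decoded video.
import cv2

def reconstruct_video(background, decoded, n_frames,
                      out_path="decoded.mp4", fps=25):
    h, w = background.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    for i in range(n_frames):
        frame = background.copy()
        # decoded: frame index -> list of ((x0, y0, x1, y1), image) pairs
        for (x0, y0, x1, y1), fg in decoded.get(i, []):
            x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1)
            patch = cv2.resize(fg, (x1 - x0, y1 - y0))
            frame[y0:y1, x0:x1] = patch   # fuse foreground with background
        writer.write(frame)
    writer.release()
    return out_path
```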

It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device.

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations may be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the present invention is defined by the appended claims and their equivalents.

Claims (1)

1. A fixed-scene video overlimit compression and decoding method based on abstract features, characterized by comprising the following steps:
compressing fixed-scene video data with an encoder;
decoding the compressed video data with a decoder to obtain a decoded video;
in the encoder, extracting the background image of the original video with a background modeling method, and then compressing and encoding the extracted background image to obtain video background compressed data;
in the encoder, extracting features of the foreground targets in the video frames with a foreground target extraction module comprising object instance segmentation and keypoint detection algorithms to obtain foreground target abstract features, the foreground target abstract features comprising the shape features and keypoint features of the foreground targets;
the encoder extracting, with a foreground target snapshot extraction algorithm, a snapshot of each foreground target obtained by the foreground target extraction module, and compressing and encoding the snapshot to obtain foreground target snapshot compressed data;
the encoder, by extracting foreground target abstract features and snapshots, storing only the abstract features and the snapshot compressed data for each foreground target in the video;
storing only the video background compressed data for the video background, and compressing and packaging the foreground target abstract features, the snapshot compressed data, and the background compressed data to obtain the video compressed data;
during decoding, decompressing the video compressed data to recover the foreground target abstract features, the foreground target snapshots, and the video background image;
in the decoder, performing video decoding with a deep learning model based on a generative adversarial network;
in the decoder, feeding the foreground target abstract features and the foreground target snapshots into the generator of the generative adversarial network to reconstruct decoded foreground target images;
the decoder fusing each frame's decoded foreground target images with the background image to obtain reconstructed video frames, and merging and reconstructing all reconstructed video frames to obtain the decoded video.
CN202210038155.9A (priority date 2022-01-13, filing date 2022-01-13): A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features. Granted as CN114554220B (en); status: Expired - Fee Related.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210038155.9A (priority date 2022-01-13, filing date 2022-01-13): A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features; granted as CN114554220B (en).

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210038155.9A (priority date 2022-01-13, filing date 2022-01-13): A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features; granted as CN114554220B (en).

Publications (2)

Publication Number Publication Date
CN114554220A CN114554220A (en) 2022-05-27
CN114554220B 2023-07-28

Family

ID=81670725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210038155.9A (filed 2022-01-13): A Fixed Scene Video Overlimit Compression and Decoding Method Based on Abstract Features; granted as CN114554220B (en); status: Expired - Fee Related.

Country Status (1)

Country Link
CN (1) CN114554220B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117915096B (en) * 2023-12-14 2024-09-10 北京大兴经济开发区开发经营有限公司 Target identification high-precision high-resolution video coding method and system for AI large model


Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP1042736B1 (en) * 1996-12-30 2003-09-24 Sharp Kabushiki Kaisha Sprite-based video coding system
US6950123B2 (en) * 2002-03-22 2005-09-27 Intel Corporation Method for simultaneous visual tracking of multiple bodies in a closed structured environment
CN103179402A (en) * 2013-03-19 2013-06-26 中国科学院半导体研究所 A video compression encoding and decoding method and device thereof
WO2016013147A1 (en) * 2014-07-22 2016-01-28 パナソニックIpマネジメント株式会社 Encoding method, decoding method, encoding apparatus and decoding apparatus
CN109246488A (en) * 2017-07-04 2019-01-18 北京航天长峰科技工业集团有限公司 A kind of video abstraction generating method for safety and protection monitoring system
CN112954393A (en) * 2021-01-21 2021-06-11 北京博雅慧视智能技术研究院有限公司 Target tracking method, system, storage medium and terminal based on video coding

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101536525A (en) * 2006-06-08 2009-09-16 欧几里得发现有限责任公司 Apparatus and method for processing video data
CN108184126A (en) * 2017-12-27 2018-06-19 生迪智慧科技有限公司 Video coding and coding/decoding method, the encoder and decoder of snapshot image

Non-Patent Citations (2)

Title
Qiwei Chen, Yiming Wang. A small target detection method in infrared image sequences based on compressive sensing and background subtraction. 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), 2013. *
冯杰. Background modeling and foreground object segmentation based on the H.264 compressed domain. 《吉林大学学报(工学版)》 (Journal of Jilin University, Engineering and Technology Edition). *

Also Published As

Publication number Publication date
CN114554220A (en) 2022-05-27


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee

Granted publication date: 2023-07-28