CN107483920B - Panoramic video assessment method and system based on multi-level quality factors - Google Patents
Panoramic video assessment method and system based on multi-level quality factors
- Publication number: CN107483920B
- Application number: CN201710683578.5A
- Authority: CN (China)
- Prior art keywords: video, quality factor, interest, panoramic, matrix
- Prior art date: 2017-08-11
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N17/00: Diagnosis, testing or measuring for television systems or their details
- H04N5/222: Studio circuitry; studio devices; studio equipment
- H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
- H04N5/265: Mixing
- H04N2017/008: Diagnosis, testing or measuring for television systems or their details, for television teletext
Abstract
The present invention relates to a panoramic video assessment method and system based on multi-level quality factors, belonging to the field of multimedia technology. The system takes as input a lossless panoramic video and a damaged video of the same content, and outputs a quality assessment result for the damaged video, thereby realizing automatic assessment of the damaged video. The core idea is to compute multiple quality factors from regions of interest at multiple levels, so as to cope with the strong influence that important regions of a panoramic video have on perceived video quality; the multi-level quality factors are then combined by a fusion model whose parameters can be learned from subjective data, so as to cope with the subjective behaviour of panoramic video users. The method is well suited to panoramic video quality assessment: because it considers and fuses the influence of user regions of interest at multiple levels on video quality, the quality estimate obtained for the damaged video agrees more closely with the results of subjective experiments, making it better suited to automatic evaluation of panoramic video quality.
Description
Technical field
The present invention relates to panoramic video quality assessment methods, and in particular to a panoramic video assessment method and system based on multi-level quality factors, belonging to the field of multimedia technology.
Background art
With the development of virtual reality (VR) technology, ordinary planar video is gradually being replaced by 360-degree panoramic video. A panoramic video provides, for a fixed viewpoint, a view that can be freely navigated over a 360-degree horizontal range and a 180-degree vertical range, giving VR users a stronger sense of immersion and a more realistic experience. With the spread of this new type of multimedia service, the quality of experience of panoramic video is of great significance for the development of key technologies in virtual reality systems and for the optimization of transmission networks. However, panoramic video quality assessment is challenging: compared with ordinary planar video, the experience of a panoramic video viewer is affected by many factors, including additional psychological and physiological factors and subjective factors such as regions of interest. Traditional video quality assessment methods cannot accurately reflect the quality of panoramic video. Studying assessment methods and systems suited to panoramic video is therefore of great significance for the development and popularization of VR technology.
Summary of the invention
The purpose of the present invention is to assess panoramic video quality in virtual reality systems. To this end, a panoramic video assessment method and system based on multi-level quality factors is proposed: the system takes as input a lossless panoramic video and a damaged video of the same content, and outputs a quality assessment result for the damaged video, realizing automatic assessment of the damaged video.
The idea of the invention is to compute multiple quality factors from regions of interest at multiple levels, so as to cope with the strong influence that important regions of a panoramic video have on video quality; the multi-level quality factors are then combined by a fusion model whose parameters can be learned from subjective data, so as to cope with the subjective behaviour of panoramic video users.
The purpose of the present invention is achieved by the following technical solution: a panoramic video assessment scheme based on multi-level quality factors, comprising a panoramic video assessment method based on multi-level quality factors (hereinafter "this method") and a panoramic video assessment system based on multi-level quality factors (hereinafter "this system").
This system comprises a panoramic video input module, a region-of-interest extraction module, a multi-level quality factor computation module, a temporal processing module and a multi-level quality factor fusion module.
The modules of this system are connected as follows: the panoramic video input module is connected to the region-of-interest extraction module; the region-of-interest extraction module is connected to the multi-level quality factor computation module; the multi-level quality factor computation module is connected to the temporal processing module; and the temporal processing module is connected to the multi-level quality factor fusion module.
The functions of the modules are as follows: the panoramic video input module decodes the input video files to obtain pairs of panoramic frame images; the region-of-interest extraction module extracts the multi-level region-of-interest matrices of the panoramic images; the multi-level quality factor computation module computes the quality factors of the panoramic images from the region-of-interest matrices; the temporal processing module computes the quality factors of the panoramic video from the quality factors of the panoramic images; and the multi-level quality factor fusion module fuses the quality factors of the panoramic video to obtain the automatic assessment result of the damaged video.
The panoramic video assessment method based on multi-level quality factors comprises the following steps:
Step 1: the panoramic video input module performs video processing and decoding on the pair of panoramic video source files input to this system, obtaining pairs of panoramic frame images.
The pair of input panoramic video source files consists of a lossless reference video S' and a damaged video S with the same content as the reference video. The damage in the damaged video S includes damage caused mainly by artificially introduced blurring, noise and coding, as well as damage caused mainly by packet loss and bit errors during network transmission. The lossless reference video is also simply called the reference video.
Step 1.1: judge whether the pair of panoramic video source files input to this system have the same resolution, frame rate and duration and the same mapping format (mapping formats mainly include equirectangular (latitude-longitude) mapping, cube (hexahedral) mapping and rectangular-pyramid mapping), and operate according to the result:
1.1A: if the pair of input panoramic video source files have the same resolution, frame rate and duration and the same mapping format, skip to step 1.2;
1.1B: if they do not, the panoramic video input module applies video processing to the damaged video, mainly pixel interpolation, frame duplication and mapping transformation, so that the damaged video and the reference video have the same resolution, frame rate and duration and the same mapping format.
Step 1.2: using a decoding tool based on ffmpeg and according to the coding format of the pair of input panoramic video source files, decode each panoramic video into a sequence of frame images to obtain pairs of panoramic frame images (a decoding sketch is given below). If the number of video frames in each panoramic video source file is N, the decoding yields N frame pairs: N reference frame images obtained from the reference video and N damaged frame images obtained from the damaged video. The width and height of each panoramic frame image are W and H, respectively.
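As an illustration only, the frame extraction of step 1.2 could be scripted around the ffmpeg command-line tool roughly as follows; the file names, output directories and PNG output format are assumptions and not part of the patent.

```python
import subprocess
from pathlib import Path

def decode_to_frames(video_path: str, out_dir: str) -> list:
    """Decode a panoramic video into numbered PNG frames using ffmpeg."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # ffmpeg selects the decoder automatically from the container/codec.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, str(out / "%05d.png")],
        check=True,
    )
    return sorted(out.glob("*.png"))

# Reference and damaged videos of the same content (names are hypothetical).
ref_frames = decode_to_frames("concert.mp4", "frames/ref")
dmg_frames = decode_to_frames("concert_3M.mp4", "frames/dmg")
assert len(ref_frames) == len(dmg_frames)  # N frame pairs
```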
Step 2: the region-of-interest extraction module applies image processing and computer vision algorithms to the panoramic frame images output by step 1 to perform region-of-interest extraction, and outputs the multi-level region-of-interest matrix sets.
Specifically, region-of-interest (ROI) extraction is performed on the reference frame image I' of each panoramic frame pair output by step 1.
The multi-level region-of-interest matrix set is the union of the low-level region-of-interest matrix set {M_l^1, ..., M_l^{n_l}}, the middle-level region-of-interest matrix set {M_m^1, ..., M_m^{n_m}}, the high-level region-of-interest matrix set {M_h^1, ..., M_h^{n_h}}, the temporal-level region-of-interest matrix set {M_t^1, ..., M_t^{n_t}} and the mapping-level region-of-interest matrix M_p. Here M denotes a two-dimensional matrix of size H × W, i.e. one region-of-interest matrix of the image I'. The elements of M take values in [0, 1]; the larger the value of M(i, j) (the element in row i, column j), the more easily the pixel I'(i, j) at the corresponding position of the reference frame image I' is noticed by the viewer, and the greater its influence on video quality. The subscripts l, m, h, t, p of M indicate that the matrix is obtained by the low-level, middle-level, high-level, temporal-level or mapping-level region-of-interest extraction method, respectively, and the superscripts 1, 2, ..., n of M indicate that the matrix is obtained by the n-th method of that level, where n_l, n_m, n_h, n_t are integers greater than or equal to 1. That is, the low, middle, high and temporal levels may each use one or more methods, yielding one or more region-of-interest matrices, while the mapping level uses exactly one method and yields a single region-of-interest matrix.
The matrix counts above are stated for a single reference frame image I'; for the N reference frame images of the N panoramic frame pairs output by step 1, the number of region-of-interest matrices output by step 2 is (n_l + n_m + n_h + n_t + 1) × N.
The multi-level region-of-interest matrices are generated by steps 2.1 to 2.5, as follows:
Step 2.1: compute the low-level regions of interest of the reference frame image using pixel-level image processing methods, and output the low-level region-of-interest matrix set {M_l^1, ..., M_l^{n_l}}.
The pixel-level image processing methods are mainly based on color contrast and edge detection (see the sketch below).
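A minimal sketch of a color-contrast saliency map in the spirit of step 2.1, assuming OpenCV and a frequency-tuned-style contrast measure (distance of each blurred Lab pixel from the mean Lab color). This is one possible pixel-level method, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def low_level_roi(ref_bgr: np.ndarray) -> np.ndarray:
    """Color-contrast saliency map in [0, 1] with the same H x W as the frame."""
    blurred = cv2.GaussianBlur(ref_bgr, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_color = lab.reshape(-1, 3).mean(axis=0)
    # Per-pixel Euclidean distance to the mean Lab color = color contrast.
    contrast = np.linalg.norm(lab - mean_color, axis=2)
    return contrast / (contrast.max() + 1e-8)  # normalize to [0, 1]
```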
Step 2.2: compute the middle-level regions of interest of the reference frame image using superpixel processing methods, and output the middle-level region-of-interest matrix set {M_m^1, ..., M_m^{n_m}}.
The superpixel processing methods are mainly based on ranking the saliency of superpixel blocks.
Step 2.3: compute the high-level regions of interest of the reference frame image using computer vision methods; these are typically the regions viewers attend to most easily, mainly people, animals and vehicles. Output the high-level region-of-interest matrix set {M_h^1, ..., M_h^{n_h}}.
The computer vision methods are mainly based on object segmentation and semantic segmentation (see the sketch below).
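For illustration, a high-level region-of-interest mask could be obtained from an off-the-shelf semantic segmentation network. The sketch below assumes torchvision's FCN-ResNet50 (torchvision >= 0.13) with its VOC-style label set, whose classes include person, several animals and several vehicles; it is only one possible realisation of step 2.3, and the class indices are the standard VOC ones, not values from the patent.

```python
import numpy as np
import torch
import torchvision
from torchvision import transforms

# Standard VOC class indices for person, animals and vehicles (assumption).
TARGET_CLASSES = {1, 2, 3, 4, 6, 7, 8, 10, 12, 13, 14, 15, 17, 19}

model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def high_level_roi(ref_rgb: np.ndarray) -> np.ndarray:
    """Binary H x W mask: 1 on person/animal/vehicle pixels, 0 on background."""
    with torch.no_grad():
        out = model(preprocess(ref_rgb).unsqueeze(0))["out"][0]  # [num_classes, H, W]
    labels = out.argmax(0).numpy()
    return np.isin(labels, list(TARGET_CLASSES)).astype(np.float32)
```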
Step 2.4: compute the temporal-level regions of interest from two adjacent reference frames using image processing methods; these typically capture the moving objects that viewers tend to follow. Output the temporal-level region-of-interest matrix set {M_t^1, ..., M_t^{n_t}}.
The image processing methods are mainly based on optical flow estimation and motion estimation (see the sketch below).
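As a rough illustration of step 2.4, and not the patent's mandated algorithm, the magnitude of dense optical flow between two consecutive reference frames can serve as a temporal region-of-interest map; the sketch assumes OpenCV's Farnebäck flow.

```python
import cv2
import numpy as np

def temporal_roi(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    """Temporal ROI in [0, 1]: large where motion between adjacent frames is large."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return magnitude / (magnitude.max() + 1e-8)
```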
Step 2.5: select the weight matrix corresponding to the mapping format of the input pair of panoramic video source files, and output it as the mapping-level region-of-interest matrix M_p.
For the equirectangular (latitude-longitude) mapping format, the weights of the corresponding weight matrix are large near the equator and small near the poles; for the rectangular-pyramid mapping format, the weights on the base of the pyramid are larger than the weights on its lateral faces (see the sketch below).
The mapping-level region-of-interest matrix output by step 2.5 depends only on the video mapping format and not on the frame image itself; once the mapping format of the input is determined, the mapping-level region-of-interest matrix is the same for every frame.
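One common way to realise such an equirectangular weight matrix (an assumption here, not specified by the patent) is the cosine-of-latitude weighting also used in WS-PSNR; a minimal sketch:

```python
import numpy as np

def erp_weight_matrix(height: int, width: int) -> np.ndarray:
    """Mapping-level ROI for an equirectangular frame: ~1 at the equator, ~0 at the poles."""
    # Latitude of each pixel row, in radians: +pi/2 at the top row, -pi/2 at the bottom.
    rows = np.arange(height)
    latitude = (0.5 - (rows + 0.5) / height) * np.pi
    weights = np.cos(latitude)                    # cosine-of-latitude weighting
    return np.tile(weights[:, None], (1, width))  # same weight along each row

M_p = erp_weight_matrix(2048, 4096)  # matches the 4K frames used in the embodiment
```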
Step 3: the multi-level quality factor computation module applies a quality assessment algorithm, based on the multi-level region-of-interest matrix sets output by step 2, to compute weighted differences for the panoramic frame pairs output by step 1, and outputs the multi-level quality factor set of each of the N frame pairs.
The multi-level quality factor set is the union of the low-level quality factor set {f_l^1, ..., f_l^{n_l}}, the middle-level quality factor set {f_m^1, ..., f_m^{n_m}}, the high-level quality factor set {f_h^1, ..., f_h^{n_h}}, the temporal-level quality factor set {f_t^1, ..., f_t^{n_t}} and the mapping-level quality factor f_p, where each f is a positive number whose subscript and superscript match those of the corresponding M in step 2, indicating that the quality factor is obtained from that region-of-interest matrix. The processing is completed by the following steps:
Step 3.1: group the panoramic frame pairs output by step 1 with the low-, middle-, high-, temporal- and mapping-level region-of-interest matrices output by step 2, in frame order, into N groups; each group contains one lossless panoramic image, one damaged panoramic image and the multi-level region-of-interest matrix set.
Step 3.2: compute the quality difference matrix D between the lossless and damaged panoramic images using a pixel-difference assessment method. D is a two-dimensional matrix of size H × W; D(i, j) represents the color/luminance difference between the lossless and damaged panoramic images at pixel position (i, j), and can be computed, for example, with the Euclidean distance.
Step 3.3: multiply each region-of-interest matrix M element-wise with the difference matrix D to obtain the set of weighted difference matrices.
Step 3.4: use a traditional objective image quality assessment method to map each weighted difference matrix to a quality factor of the damaged image, yielding the multi-level quality factor set (see the sketch below).
The traditional objective image quality assessment methods are mainly based on MSE, PSNR and SSIM.
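A minimal sketch of steps 3.2 to 3.4 under the PSNR variant used in the embodiment (formulas (2) and (3)); the 8-bit greyscale input, the peak value of 255 and the plain mean over the weighted difference matrix are assumptions of this sketch.

```python
import numpy as np

def weighted_psnr(ref: np.ndarray, dmg: np.ndarray, roi: np.ndarray,
                  peak: float = 255.0) -> float:
    """Quality factor from an ROI-weighted squared difference (PSNR-style mapping)."""
    ref = ref.astype(np.float64)
    dmg = dmg.astype(np.float64)
    # Formula (2): D(i, j) = (I(i, j) - I'(i, j))^2 * M(i, j)
    weighted_diff = (dmg - ref) ** 2 * roi
    mse = weighted_diff.mean()
    # PSNR-style mapping of the weighted difference matrix to one quality factor.
    return 10.0 * np.log10(peak ** 2 / (mse + 1e-12))

# One frame pair and one ROI matrix give one quality factor; repeating this for
# every ROI matrix of the frame gives its multi-level quality factor set.
```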
Step 4: the temporal processing module takes the N multi-level quality factor sets obtained in step 3 and, using a temporal processing method, fuses them into a single set, outputting the multi-level quality factor set of the damaged video S.
The temporal processing methods are mainly averaging and weighted averaging.
Step 5: the multi-level quality factor fusion module takes the multi-level quality factor set obtained in step 4 and fuses it into a single quality assessment result Q using a fusion model, outputting Q as the quality assessment result of the damaged video S.
The fusion model is mainly based on linear regression, nonlinear regression or a neural network model. The parameters of the fusion model can be designed from experience or trained by machine learning; the machine-learning approach mainly comprises the following steps: first design a BP neural network structure, then train the parameters of the BP network on training data so that the fused quality factors approximate the subjective scores.
The training data consist of the quality scores of a number of panoramic videos obtained through subjective experiments, together with the video quality factors obtained for those videos by steps 1 to 4.
Steps 1 to 5 complete this method, i.e. the panoramic video assessment method based on multi-level quality factors.
Beneficial effects
Compared with the prior art, the panoramic video assessment method and system based on multi-level quality factors of the present invention have the following beneficial effect:
The method is well suited to panoramic video quality assessment. Compared with existing ordinary video quality assessment methods and existing panoramic video quality assessment methods, the method of the invention considers and fuses the influence of user regions of interest at multiple levels on video quality, so the quality estimate obtained for the damaged video agrees more closely with the results of subjective experiments, making it better suited to automatic evaluation of panoramic video quality.
Brief description of the drawings
Fig. 1 is a block diagram of the panoramic video quality assessment system based on multi-level quality factors of the present invention;
Fig. 2 shows the 5th panoramic frame and its multi-level region-of-interest maps in the specific embodiment of the panoramic video assessment method and system based on multi-level quality factors of the present invention;
Fig. 3 shows the structure of the multi-level quality factor fusion module in the specific embodiment of the panoramic video assessment method and system based on multi-level quality factors of the present invention.
Specific embodiments
The present invention is described in detail below with reference to the drawings and an embodiment, which also illustrates the technical problem solved by the technical solution of the present invention and its beneficial effects. It should be pointed out that the described embodiment is only intended to facilitate understanding of the present invention and does not limit it in any way.
Embodiment 1
This embodiment illustrates the method and system of the invention using two 4K-resolution videos: the lossless panoramic video concert.mp4 and the damaged panoramic video concert_3M.mp4.
Fig. 1 is the module diagram of the panoramic video quality assessment system based on multi-level quality factors of the present invention.
As can be seen from Fig. 1, this system decodes the reference video and the damaged video in the panoramic video input module and then feeds the frames to the region-of-interest extraction module, which extracts the low-level, middle-level, high-level, temporal-level and mapping-level regions of interest. Based on these region-of-interest matrices, the multi-level quality factor computation module computes the low-level, middle-level, high-level, temporal-level and mapping-level quality factor sets of each panoramic image pair. These quality factors are then fed into the temporal processing module to obtain the multi-level quality factor set of the panoramic video, and finally the multi-level quality factor fusion module fuses these quality factors into one quality score, which is output as the automatic assessment result of the damaged video.
The panoramic video assessment method based on multi-level quality factors, as implemented by this system, processes the two 4K videos of this embodiment, the lossless panoramic video concert.mp4 and the damaged panoramic video concert_3M.mp4, through the following steps:
Step A: the panoramic video input module decodes the pair of input panoramic video source files. Both videos are 10 seconds long, 30 fps, 4096×2048 equirectangular panoramic videos; the damaged video is obtained from the lossless video by H.264 compression, the bit rate of the lossless video being 50 Mbps and that of the damaged video 3 Mbps. Decoding both videos yields 300 panoramic image pairs, each image being 4096 pixels wide and 2048 pixels high; Fig. 2(A) shows the 5th panoramic frame of the lossless video.
Step B: the region-of-interest extraction module performs region-of-interest extraction on the 300 lossless images. This is completed by the following steps:
Step B.1: compute a saliency map by color contrast, yielding the 300 low-level region-of-interest matrices M_l^1 of the 300 images, each of size 2048 × 4096; Fig. 2(B) shows the result for the 5th frame mapped to image space (values in the [0, 1] range multiplied by 256). In addition, a second low-level region-of-interest matrix M_l^2 is used: an all-ones matrix of size 2048 × 4096.
Step B.2: divide the image into superpixels and apply two superpixel-saliency ranking methods to compute the middle-level region-of-interest matrices M_m^1 and M_m^2 of the reference frame image; their mappings to image space are shown in Fig. 2(C, D).
Step B.3: apply a fully convolutional network to perform semantic segmentation of the reference frame image, and use the resulting mask as the high-level region-of-interest matrix M_h; its mapping to a binary image is shown in Fig. 2(E). Matrix elements equal to 1 belong to the target regions, mainly people, animals and vehicles, and elements equal to 0 belong to the background.
Step B.4: this embodiment does not use inter-frame motion information, so the temporal-level region-of-interest matrix M_t is the zero matrix.
Step B.5: since the mapping format of the input video is equirectangular, select the corresponding weight matrix M_p; its mapping to [0, 255] is shown in Fig. 2(F). The value of each element of the matrix is determined by its latitude, as shown in formula (1).
Step B.6: steps B.1 to B.5 of this embodiment yield 6 region-of-interest matrices per frame image, i.e. 1800 matrices in total for the 300 frames.
Step C: the multi-level quality factor computation module, which in this example uses the PSNR quality assessment algorithm, computes the weighted difference matrix sets of the 300 frame pairs based on the multi-level region-of-interest matrix sets output by step B, and outputs the multi-level quality factor sets. This is completed by the following steps:
Step C.1: group the panoramic image pairs output by step A with the region-of-interest matrices output by step B, in frame order, into 300 groups; each group contains one lossless panoramic image, one damaged panoramic image and 6 region-of-interest matrices.
Step C.2: compute the weighted difference matrices between the pixels of the two images as shown in formula (2), where I(i, j), I'(i, j) and M(i, j) are the values of the corresponding elements of the damaged image, the lossless image and the weighting matrix, respectively; if the image has three channels, a weighted difference matrix is computed separately for each channel:
D(i, j) = (I(i, j) - I'(i, j))^2 × M(i, j)   (2)
Step C.3: compute the quality factor set from the weighted difference matrices using the PSNR calculation, as shown in formula (3); for a three-channel image, this embodiment takes the average of the three channel quality factors as the quality factor of the damaged image.
Step C.4: steps C.1 to C.3 yield 6 quality factors for each damaged frame image, i.e. 300 quality factor sets in total, which form the output of this module.
Step D: the temporal processing module takes the 300 multi-level quality factor sets obtained in step C. This example uses temporal averaging: the quality factors at corresponding positions of the sets are averaged, as shown in formula (4), where x and y denote the level index of the quality factor and the index of the region-of-interest method within that level, respectively. The module outputs the multi-level quality factor set of the damaged video concert_3M.mp4.
Step E: the multi-level quality factor fusion module takes the multi-level quality factor set obtained in step D and fuses it with a BP neural network to obtain the final quality score Q(I, I') of the video concert_3M.mp4.
Step E.1: the BP neural network used is shown in Fig. 3. The network has 6 input nodes, each connected to one of the 6 quality factors obtained in step D, 10 hidden nodes and 1 output node, and outputs a quality assessment result in the range [0, 1].
Step E.2: the parameters of this fusion model are obtained by training on panoramic video data that does not include the test video concert_3M.mp4 (an illustrative sketch is given below).
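Purely for illustration, the 6-10-1 fusion network of step E could be approximated with scikit-learn's MLPRegressor as sketched below; the logistic hidden activation, the solver, the clipping to [0, 1] and the training-data file names are all assumptions of this sketch, and the actual network of the patent may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: one row of 6 multi-level quality factors per
# training video, and the corresponding subjective score scaled to [0, 1].
factors = np.load("train_factors.npy")   # shape (n_videos, 6), assumed file
scores = np.load("train_scores.npy")     # shape (n_videos,), assumed file

# 6 inputs -> 10 hidden nodes -> 1 output, as in Fig. 3 of the embodiment.
fusion = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                      solver="lbfgs", max_iter=5000, random_state=0)
fusion.fit(factors, scores)

test_factors = np.load("test_factors.npy")  # factors of the test video (assumed)
Q = float(np.clip(fusion.predict(test_factors.reshape(1, -1))[0], 0.0, 1.0))
print("quality score Q:", Q)
```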
In this example, the quality assessment value obtained by fusing the 6 multi-level quality factors is more linearly correlated with the subjective results than any single-factor result. As shown in Table 1, removing the quality factors of any one level in turn yields a Spearman rank-order correlation coefficient (SROCC) with the subjective scores that is smaller than the SROCC obtained when the quality factors of all levels are used. The values in the table were obtained by training the BP network parameters on 12 original videos and the 288 damaged videos of corresponding content, and testing on the other 4 original videos and the 96 damaged videos of corresponding content; the larger the SROCC, the better the automatic assessment method.
Table 1: comparison between using all multi-level quality factors and removing the quality factors of one level
The specific description above further explains the purpose, technical solution and beneficial effects of the invention. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit the scope of protection of the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (6)
1. A panoramic video assessment system based on multi-level quality factors, characterized in that: multiple quality factors are computed from regions of interest at multiple levels, so as to cope with the strong influence that important regions of a panoramic video have on video quality; the multi-level quality factors are then combined by a fusion model whose parameters can be learned from subjective data, so as to cope with the subjective behaviour of panoramic video users;
the system comprises a panoramic video input module, a region-of-interest extraction module, a multi-level quality factor computation module, a temporal processing module and a multi-level quality factor fusion module;
the modules of the system are connected as follows: the panoramic video input module is connected to the region-of-interest extraction module; the region-of-interest extraction module is connected to the multi-level quality factor computation module; the multi-level quality factor computation module is connected to the temporal processing module; and the temporal processing module is connected to the multi-level quality factor fusion module;
the functions of the modules of the system are as follows: the panoramic video input module decodes the input video files to obtain pairs of panoramic frame images; the region-of-interest extraction module extracts the multi-level region-of-interest matrices of the panoramic images; the multi-level quality factor computation module computes the quality factors of the panoramic images from the region-of-interest matrices; the temporal processing module computes the quality factors of the panoramic video from the quality factors of the panoramic images; and the multi-level quality factor fusion module fuses the quality factors of the panoramic video to obtain the automatic assessment result of the damaged video.
2. A panoramic video assessment method based on multi-level quality factors, characterized by comprising the following steps:
step 1: the panoramic video input module performs video processing and decoding on the pair of panoramic video source files input to the system, obtaining pairs of panoramic frame images;
in step 1, the pair of input panoramic video source files consists of a lossless reference video S' and a damaged video S with the same content as the reference video; the damage in the damaged video S includes damage caused mainly by artificially introduced blurring, noise and coding, as well as damage caused mainly by packet loss and bit errors during network transmission; the lossless reference video is also simply called the reference video;
each obtained panoramic frame pair comprises a reference frame image obtained from the reference video and a damaged frame image obtained from the damaged video;
step 2: the region-of-interest extraction module applies image processing and computer vision algorithms to the panoramic frame images output by step 1 to perform region-of-interest extraction, and outputs the multi-level region-of-interest matrix sets;
step 3: the multi-level quality factor computation module applies a quality assessment algorithm, based on the multi-level region-of-interest matrix sets output by step 2, to compute weighted differences for the panoramic frame pairs output by step 1, and outputs the multi-level quality factor set of each of the N frame pairs;
step 4: the temporal processing module takes the N multi-level quality factor sets obtained in step 3 and, using a temporal processing method, fuses them into a single set, outputting the multi-level quality factor set of the damaged video S, the temporal processing methods being mainly averaging and weighted averaging;
step 5: the multi-level quality factor fusion module takes the multi-level quality factor set obtained in step 4 and fuses it into a single quality assessment result Q using a fusion model, outputting Q as the quality assessment result of the damaged video S.
3. The panoramic video assessment method based on multi-level quality factors according to claim 2, characterized in that:
in step 1, the pair of input panoramic video source files consists of a lossless reference video S' and a damaged video S with the same content as the reference video; the damage in the damaged video S includes damage caused mainly by artificially introduced blurring, noise and coding, as well as damage caused mainly by packet loss and bit errors during network transmission; the lossless reference video is also simply called the reference video;
step 1.1: judge whether the pair of panoramic video source files input to the system have the same resolution, frame rate and duration and the same mapping format, the mapping formats mainly including equirectangular (latitude-longitude) mapping, cube (hexahedral) mapping and rectangular-pyramid mapping, and operate according to the result:
1.1A: if the pair of input panoramic video source files have the same resolution, frame rate and duration and the same mapping format, skip to step 1.2;
1.1B: if they do not, the panoramic video input module applies video processing to the damaged video, mainly pixel interpolation, frame duplication and mapping transformation, so that the damaged video and the reference video have the same resolution, frame rate and duration and the same mapping format;
step 1.2: using a decoding tool based on ffmpeg and according to the coding format of the pair of input panoramic video source files, decode each panoramic video into a sequence of frame images to obtain pairs of panoramic frame images, wherein, if the number of video frames in each panoramic video source file is N, the decoding yields N frame pairs, comprising N reference frame images obtained from the reference video and N damaged frame images obtained from the damaged video, the width and height of each panoramic frame image being W and H, respectively.
4. The panoramic video assessment method based on multi-level quality factors according to claim 2, characterized in that:
in step 2, specifically, region-of-interest (ROI) extraction is performed on the reference frame image I' of each panoramic frame pair output by step 1; the multi-level region-of-interest matrix set is the union of the low-level region-of-interest matrix set {M_l^1, ..., M_l^{n_l}}, the middle-level region-of-interest matrix set {M_m^1, ..., M_m^{n_m}}, the high-level region-of-interest matrix set {M_h^1, ..., M_h^{n_h}}, the temporal-level region-of-interest matrix set {M_t^1, ..., M_t^{n_t}} and the mapping-level region-of-interest matrix M_p, where M denotes a two-dimensional matrix of size H × W, i.e. one region-of-interest matrix of the image I'; the elements of M take values in [0, 1], and the larger the value of M(i, j) (the element in row i, column j), the more easily the pixel I'(i, j) at the corresponding position of the reference frame image I' is noticed by the viewer and the greater its influence on video quality; the subscripts l, m, h, t, p of M indicate that the matrix is obtained by the low-level, middle-level, high-level, temporal-level or mapping-level region-of-interest extraction method, respectively; the superscripts 1, 2, ..., n of M indicate that the matrix is obtained by the n-th method of that level, where n_l, n_m, n_h, n_t are integers greater than or equal to 1;
the matrix counts above are stated for a single reference frame image I'; for the N reference frame images of the N panoramic frame pairs output by step 1, the number of region-of-interest matrices output by step 2 is (n_l + n_m + n_h + n_t + 1) × N;
the multi-level region-of-interest matrices are generated by steps 2.1 to 2.5, as follows:
step 2.1: compute the low-level regions of interest of the reference frame image using pixel-level image processing methods, and output the low-level region-of-interest matrix set {M_l^1, ..., M_l^{n_l}}, the pixel-level image processing methods being mainly based on color contrast and edge detection;
step 2.2: compute the middle-level regions of interest of the reference frame image using superpixel processing methods, and output the middle-level region-of-interest matrix set {M_m^1, ..., M_m^{n_m}}, the superpixel processing methods being mainly based on ranking the saliency of superpixel blocks;
step 2.3: compute the high-level regions of interest of the reference frame image using computer vision methods, these typically being the regions viewers attend to most easily, mainly people, animals and vehicles, and output the high-level region-of-interest matrix set {M_h^1, ..., M_h^{n_h}}, the computer vision methods being mainly based on object segmentation and semantic segmentation;
step 2.4: compute the temporal-level regions of interest from two adjacent reference frames using image processing methods, these typically capturing the moving objects that viewers tend to follow, and output the temporal-level region-of-interest matrix set {M_t^1, ..., M_t^{n_t}}, the image processing methods being mainly based on optical flow estimation and motion estimation;
step 2.5: select the weight matrix corresponding to the mapping format of the input pair of panoramic video source files, and output it as the mapping-level region-of-interest matrix M_p; for the equirectangular mapping format, the weights of the corresponding weight matrix are large near the equator and small near the poles, and for the rectangular-pyramid mapping format, the weights on the base of the pyramid are larger than the weights on its lateral faces;
the mapping-level region-of-interest matrix output by step 2.5 depends only on the video mapping format and not on the frame image itself; once the mapping format of the input is determined, the mapping-level region-of-interest matrix is the same for every frame.
5. The panoramic video assessment method based on multi-level quality factors according to claim 2, characterized in that:
in step 3, the multi-level quality factor set is the union of the low-level quality factor set {f_l^1, ..., f_l^{n_l}}, the middle-level quality factor set {f_m^1, ..., f_m^{n_m}}, the high-level quality factor set {f_h^1, ..., f_h^{n_h}}, the temporal-level quality factor set {f_t^1, ..., f_t^{n_t}} and the mapping-level quality factor f_p, where each f is a positive number whose subscript and superscript match those of the corresponding M in step 2, indicating that the quality factor is obtained from that region-of-interest matrix;
the processing of step 3 is completed by the following steps:
step 3.1: group the panoramic frame pairs output by step 1 with the low-, middle-, high-, temporal- and mapping-level region-of-interest matrices output by step 2, in frame order, into N groups, each group containing one lossless panoramic image, one damaged panoramic image and the multi-level region-of-interest matrix set;
step 3.2: compute the quality difference matrix D between the lossless and damaged panoramic images using a pixel-difference assessment method, D being a two-dimensional matrix of size H × W in which D(i, j) represents the color/luminance difference between the lossless and damaged panoramic images at pixel position (i, j) and can be computed with the Euclidean distance;
step 3.3: multiply each region-of-interest matrix M element-wise with the difference matrix D to obtain the set of weighted difference matrices;
step 3.4: use a traditional objective image quality assessment method to map each weighted difference matrix to a quality factor of the damaged image, yielding the multi-level quality factor set, the traditional objective image quality assessment methods being mainly based on MSE, PSNR and SSIM.
6. The panoramic video assessment method based on multi-level quality factors according to claim 2, characterized in that:
in step 5, the fusion model is mainly based on linear regression, nonlinear regression or a neural network model;
the parameters of the fusion model can be designed from experience or trained by machine learning; the machine-learning approach mainly comprises the following steps: first design a BP neural network structure, then train the parameters of the BP network on training data so that the fused quality factors approximate the subjective scores;
the training data consist of the quality scores of a number of panoramic videos obtained through subjective experiments, together with the video quality factors obtained for those videos by steps 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710683578.5A CN107483920B (en) | 2017-08-11 | 2017-08-11 | Panoramic video assessment method and system based on multi-level quality factors
Publications (2)
Publication Number | Publication Date
---|---
CN107483920A (en) | 2017-12-15
CN107483920B (en) | 2018-12-21
Family
ID=60599247
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710683578.5A Active CN107483920B (en) | Panoramic video assessment method and system based on multi-level quality factors | 2017-08-11 | 2017-08-11
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107483920B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7032536B2 (en) | 2018-02-09 | 2022-03-08 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Instance segmentation methods and equipment, electronics, programs and media |
CN108460411B (en) * | 2018-02-09 | 2021-05-04 | 北京市商汤科技开发有限公司 | Instance division method and apparatus, electronic device, program, and medium |
CN108271020B (en) * | 2018-04-24 | 2019-08-09 | 福州大学 | A kind of panoramic video quality evaluating method of view-based access control model attention model |
CN109377481B (en) * | 2018-09-27 | 2022-05-24 | 上海联影医疗科技股份有限公司 | Image quality evaluation method, image quality evaluation device, computer equipment and storage medium |
US11024062B2 (en) | 2018-06-11 | 2021-06-01 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for evaluating image quality |
CN108833976B (en) | 2018-06-27 | 2020-01-24 | 深圳看到科技有限公司 | Method and device for evaluating picture quality after dynamic cut-stream of panoramic video |
CN108683909B (en) * | 2018-07-12 | 2020-07-07 | 北京理工大学 | VR audio and video integral user experience quality evaluation method |
CN111093069A (en) * | 2018-10-23 | 2020-05-01 | 大唐移动通信设备有限公司 | Quality evaluation method and device for panoramic video stream |
CN110211090B (en) * | 2019-04-24 | 2021-06-29 | 西安电子科技大学 | Method for evaluating quality of visual angle synthetic image |
CN111953959A (en) * | 2019-05-17 | 2020-11-17 | 华为技术有限公司 | VR video quality evaluation method and device |
CN111127298B (en) * | 2019-06-12 | 2023-05-16 | 上海大学 | Panoramic image blind quality assessment method |
CN110139169B (en) * | 2019-06-21 | 2020-11-24 | 上海摩象网络科技有限公司 | Video stream quality evaluation method and device and video shooting system |
CN110312170B (en) * | 2019-07-12 | 2022-03-04 | 青岛一舍科技有限公司 | Video playing method and device capable of intelligently adjusting visual angle |
CN113301336A (en) * | 2020-02-21 | 2021-08-24 | 华为技术有限公司 | Video coding method, device, equipment and medium |
CN111402860B (en) * | 2020-03-16 | 2021-11-02 | 恒睿(重庆)人工智能技术研究院有限公司 | Parameter management method, system, medium and device |
CN111696081B (en) * | 2020-05-18 | 2024-04-09 | 南京大学 | Method for reasoning panoramic video quality from visual field video quality |
CN114079777B (en) * | 2020-08-20 | 2024-06-04 | 华为技术有限公司 | Video processing method and device |
US20220156944A1 (en) * | 2020-11-13 | 2022-05-19 | Samsung Electronics Co., Ltd. | Apparatus and method with video processing |
CN112565208A (en) * | 2020-11-24 | 2021-03-26 | 鹏城实验室 | Multi-user panoramic video cooperative transmission method, system and storage medium |
CN112634468B (en) * | 2021-03-05 | 2021-05-18 | 南京魔鱼互动智能科技有限公司 | Virtual scene and real scene video fusion algorithm based on SpPccs |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100559880C (en) * | 2007-08-10 | 2009-11-11 | 中国传媒大学 | A kind of highly-clear video image quality evaluation method and device based on self-adapted ST area |
CN104243973B (en) * | 2014-08-28 | 2017-01-11 | 北京邮电大学 | Video perceived quality non-reference objective evaluation method based on areas of interest |
- 2017-08-11: CN application CN201710683578.5A filed; granted as CN107483920B (active)
Also Published As
Publication number | Publication date |
---|---|
CN107483920A (en) | 2017-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107483920B (en) | Panoramic video assessment method and system based on multi-level quality factors | |
CN100559880C (en) | A kind of highly-clear video image quality evaluation method and device based on self-adapted ST area | |
CN101282481A (en) | Method for evaluating video quality based on artificial neural net | |
CN112950596B (en) | Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels | |
CN111105376B (en) | Single-exposure high-dynamic-range image generation method based on double-branch neural network | |
CN110517237A (en) | No-reference video quality evaluating method based on expansion Three dimensional convolution neural network | |
CN105635743A (en) | Minimum noticeable distortion method and system based on saliency detection and total variation | |
CN114598864B (en) | Deep learning-based full-reference ultra-high definition video quality objective evaluation method | |
CN110944200A (en) | Method for evaluating immersive video transcoding scheme | |
CN107071423A (en) | Application process of the vision multi-channel model in stereoscopic video quality objective evaluation | |
CN116033279B (en) | Near infrared image colorization method, system and equipment for night monitoring camera | |
CN105894507B (en) | Image quality evaluating method based on amount of image information natural scene statistical nature | |
CN115131229A (en) | Image noise reduction and filtering data processing method and device and computer equipment | |
Katsenou et al. | BVI-SynTex: A synthetic video texture dataset for video compression and quality assessment | |
Liu et al. | Spatio-temporal interactive laws feature correlation method to video quality assessment | |
CN114915777A (en) | Non-reference ultrahigh-definition video quality objective evaluation method based on deep reinforcement learning | |
Yang et al. | No-reference quality assessment of stereoscopic videos with inter-frame cross on a content-rich database | |
Da et al. | Perceptual quality assessment of nighttime video | |
CN114445755A (en) | Video quality evaluation method, device, equipment and storage medium | |
CN113628143A (en) | Weighted fusion image defogging method and device based on multi-scale convolution | |
CN111127386B (en) | Image quality evaluation method based on deep learning | |
CN112508847A (en) | Image quality evaluation method based on depth feature and structure weighted LBP feature | |
Kim et al. | Long-term video generation with evolving residual video frames | |
CN116524387A (en) | Ultra-high definition video compression damage grade assessment method based on deep learning network | |
CN110838120A (en) | Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||