
CN110363788A - Video object trajectory extraction method and device - Google Patents

Video object trajectory extraction method and device

Info

Publication number
CN110363788A
Authority
CN
China
Prior art keywords
image
pixel
transparency
mask
transparency mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910505008.6A
Other languages
Chinese (zh)
Inventor
蔡昭权
蔡映雪
陈伽
胡松
黄思博
李慧
胡辉
陈明阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou University
Original Assignee
Huizhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou University filed Critical Huizhou University
Publication of CN110363788A publication Critical patent/CN110363788A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20036 Morphological image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a video object trajectory extraction method and device. A first transparency mask of a first image is obtained by measuring the confidence of foreground-background pixel pairs and recomputing a transparency estimate from the most confident pair; a new image is then generated by superimposing grayscale information, yielding a second transparency mask of the first image, which is used to correct the first transparency mask. The corrected first transparency mask is then used to extract the foreground targets of a given frame of the video; a specified target is retrieved among all foreground targets, and, in the time order of all frames, the trajectory of the specified target is generated from all images that contain it. By jointly exploiting the confidence of foreground-background pixel pairs and the grayscale information of a frame of a video, the disclosure provides a new video object trajectory extraction scheme.

Description

Video object trajectory extraction method and device
Technical field
The disclosure belongs to the field of image processing, and in particular relates to a video object trajectory extraction method and device.
Background technique
In the security field, there are many demands for extracting the trajectory of a specified target in a video.
However, although the prior art offers plenty of schemes for extracting video object trajectories, there is as yet no novel implementation that extracts video foreground targets by exploiting foreground-background pixel pairs together with grayscale information and then further extracts the target trajectory from them.
Summary of the invention
The present disclosure provides a video object trajectory extraction method, comprising the following steps:
S100: for a first image in a video, partition the image into a set F of all foreground pixels, a set B of all background pixels, and a set Z of all unknown pixels; the first image is a frame extracted from the video;
S200: given certain foreground-background pixel pairs (F_i, B_j), measure the transparency of each unknown pixel Z_k according to the following formula:
where I_k is the RGB color value of the unknown pixel Z_k, the foreground pixels F_i are the m foreground pixels nearest to Z_k, the background pixels B_j are likewise the m background pixels nearest to Z_k, and the foreground-background pixel pairs (F_i, B_j) total m² groups;
S300: for each of the m² foreground-background pixel pairs (F_i, B_j) and its corresponding transparency, measure the confidence n_ij of the pair according to the following formula:
where σ takes the value 0.1, and the pair with the highest confidence MAX(n_ij) is selected and denoted (F_iMAX, B_jMAX);
S400: compute the transparency estimate of each unknown pixel Z_k according to the following formula:
S500: preliminarily determine a first transparency mask of the first image from the transparency estimates of all unknown pixels Z_k;
S600: superimpose grayscale information on the first image to generate a second image, and partition the second image into its sets of all foreground pixels, all background pixels, and all unknown pixels;
S700: perform steps S200 to S500 on the second image to determine a first transparency mask of the second image, and take the first transparency mask of the second image as a second transparency mask of the first image;
S800: correct the first transparency mask of the first image using the second transparency mask of the first image;
S900: according to the first transparency mask of the first image corrected in step S800, extract the foreground targets in the first image of the video, retrieve a specified target among all foreground targets, and then, in the time order of all frames, generate the trajectory of the specified target from all images that contain the specified target.
In addition, the disclosure further discloses a video object trajectory extraction device, comprising:
a first division module, configured to: for a first image in a video, partition the image into a set F of all foreground pixels, a set B of all background pixels, and a set Z of all unknown pixels, the first image being a frame extracted from the video;
a first metric module, configured to: given certain foreground-background pixel pairs (F_i, B_j), measure the transparency of each unknown pixel Z_k according to the following formula:
where I_k is the RGB color value of the unknown pixel Z_k, the foreground pixels F_i are the m foreground pixels nearest to Z_k, the background pixels B_j are likewise the m background pixels nearest to Z_k, and the foreground-background pixel pairs (F_i, B_j) total m² groups;
a second metric module, configured to: for each of the m² foreground-background pixel pairs (F_i, B_j) and its corresponding transparency, measure the confidence n_ij of the pair according to the following formula:
where σ takes the value 0.1, and the pair with the highest confidence MAX(n_ij) is selected and denoted (F_iMAX, B_jMAX);
a computing module, configured to: compute the transparency estimate of each unknown pixel Z_k according to the following formula:
a determining module, configured to: preliminarily determine the first transparency mask of the first image from the transparency estimates of all unknown pixels Z_k;
a second division module, configured to: superimpose grayscale information on the first image to generate a second image, and partition the second image into its sets of all foreground pixels, all background pixels, and all unknown pixels;
a re-calling module, configured to: for the second image, call the first metric module, the second metric module, the computing module, and the determining module again, so as to determine the first transparency mask of the second image, and take the first transparency mask of the second image as the second transparency mask of the first image;
a correction module, configured to: correct the first transparency mask of the first image using the second transparency mask of the first image;
an extraction module, configured to: extract the foreground targets in the first image of the video according to the first transparency mask of the first image corrected by the correction module, retrieve a specified target among all foreground targets, and then, in the time order of all frames, generate the trajectory of the specified target from all images that contain the specified target.
Through the above method and device, the disclosure can jointly exploit the confidence of foreground-background pixel pairs and grayscale information, providing a new video object trajectory extraction scheme.
Detailed description of the invention
Fig. 1 is a schematic diagram of the method according to one embodiment of the disclosure;
Fig. 2 is a schematic diagram of the device according to another embodiment of the disclosure.
Specific embodiment
To enable those skilled in the art to understand the technical solutions disclosed herein, the solutions of the various embodiments are described below in conjunction with the embodiments and the accompanying drawings; the described embodiments are only a part, not all, of the embodiments of the disclosure. The terms "first", "second", and the like used in the disclosure distinguish different objects rather than describe a particular order. Moreover, "comprising" and "having", and any variants thereof, are intended to cover non-exclusive inclusion: a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to that process, method, system, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the disclosure. Occurrences of the phrase at various places in the specification do not necessarily all refer to the same embodiment, nor to independent or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand that the embodiments described herein may be combined with other embodiments.
Referring to Fig. 1, which is a flow diagram of a video object trajectory extraction method provided by one embodiment of the disclosure. As shown, the method comprises the following steps:
S100: for a first image in a video, partition the image into a set F of all foreground pixels, a set B of all background pixels, and a set Z of all unknown pixels; the first image is a frame extracted from the video.
It will be appreciated that there are many means of dividing an image into foreground pixels, background pixels, and unknown pixels: manual annotation; machine-learning or data-driven approaches; or marking off all foreground and background pixels and their sets by corresponding foreground and background thresholds. Once the foreground and background pixels are divided out, the unknown pixels and their set follow naturally.
Moreover, the first image may be obtained as follows: while the video is playing, in response to a user operation, current playback is paused and the current frame is immediately captured from the paused picture, thereby obtaining the first image. Alternatively, when the video is not playing, one or a few frames are selected from the video at random in response to a user operation, and one of them is taken as the first image. In any case, it should be understood that the method can be applied to the foreground-target extraction of every frame of the video. Preferably, the first image is the first frame of the video; a minimal frame-grabbing sketch follows.
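The following is an illustrative helper, not part of the patent, for obtaining the first image as just described; OpenCV and the default choice of the first frame are assumptions:

```python
import cv2

def grab_first_image(video_path, frame_index=0):
    # Take a chosen frame (by default the first frame, which the
    # embodiment says is preferred) as the "first image".
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"could not read frame {frame_index}")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # work in RGB downstream
```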
S200: given certain foreground-background pixel pairs (F_i, B_j), measure the transparency of each unknown pixel Z_k according to the following formula:
where I_k is the RGB color value of the unknown pixel Z_k, the foreground pixels F_i are the m foreground pixels nearest to Z_k, the background pixels B_j are likewise the m background pixels nearest to Z_k, and the foreground-background pixel pairs (F_i, B_j) total m² groups.
To those skilled in the art, the choice of m can in theory make the corresponding foreground-background pixel pairs a partial sample, or exhaust the whole image. Step S200 aims to estimate the transparency of an unknown pixel from the color relationship between the unknown pixel and a foreground-background pixel pair; the selection of m may further take into account features such as color, texture, grayscale, and brightness between the neighborhood pixels and the unknown pixel.
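The formula referenced in S200 appears only as an image in the original filing and is not reproduced in this text. The sketch below therefore assumes the standard sampling-based matting projection, which is consistent with the quantities the step defines (I_k, F_i, B_j); treat it as an assumption, not the patent's literal formula:

```python
import numpy as np

def transparency(I_k, F_i, B_j, eps=1e-8):
    # Project the unknown color onto the segment between the candidate
    # foreground and background colors; alpha is the normalized position.
    # All arguments are RGB vectors scaled to [0, 1].
    d = F_i - B_j
    alpha = float(np.dot(I_k - B_j, d)) / (float(np.dot(d, d)) + eps)
    return min(max(alpha, 0.0), 1.0)
```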
S300: for each of the m² foreground-background pixel pairs (F_i, B_j) and its corresponding transparency, measure the confidence n_ij of the pair according to the following formula:
where σ takes the value 0.1, and the pair with the highest confidence MAX(n_ij) is selected and denoted (F_iMAX, B_jMAX).
It will be appreciated that the value of σ is an empirical, statistical, or simulated value. Step S300 uses the confidence to further screen the foreground-background pixel pairs, so that the subsequent step estimates the transparency of each unknown pixel from the pairs that survive the screening.
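As with S200, the confidence formula survives only as an image. A common form consistent with the text (a confidence that decays with the residual of the compositing equation, with σ = 0.1 as stated) is sketched below, together with the best-pair selection of S300/S400; transparency() is the helper from the previous sketch, and the exponential form is an assumption:

```python
import numpy as np

def confidence(I_k, F_i, B_j, alpha, sigma=0.1):
    # How well the pair explains the observed color: the smaller the
    # compositing residual I_k - (alpha*F_i + (1-alpha)*B_j), the higher
    # the confidence. sigma = 0.1 matches the value stated in the text.
    r = I_k - (alpha * F_i + (1.0 - alpha) * B_j)
    return float(np.exp(-np.dot(r, r) / sigma ** 2))

def best_pair_alpha(I_k, fg_samples, bg_samples):
    # Score all m^2 pairs, keep the most confident one (S300), and return
    # the transparency estimate recomputed from that pair (S400).
    scored = ((confidence(I_k, F, B, transparency(I_k, F, B)), F, B)
              for F in fg_samples for B in bg_samples)
    _, F_best, B_best = max(scored, key=lambda t: t[0])
    return transparency(I_k, F_best, B_best)
```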
S400: compute the transparency estimate of each unknown pixel Z_k according to the following formula:
S500: preliminarily determine a first transparency mask of the first image from the transparency estimates of all unknown pixels Z_k.
That is to say, once the transparency estimate of every unknown pixel has been obtained, the present embodiment naturally makes a preliminary determination of the first transparency mask of the first image; "naturally", because a transparency mask can be regarded as composed of the pixels selected by a certain value (or range of values).
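A sketch of assembling the first transparency mask (S500) from the helpers above. The trimap coding (1 = foreground, 0 = background, -1 = unknown), the use of a k-d tree to find the m nearest samples, and m = 5 are all assumptions for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def first_transparency_mask(img, trimap, m=5):
    # img: HxWx3 float RGB in [0, 1]; trimap: HxW with 1=fg, 0=bg, -1=unknown.
    fg_xy = np.argwhere(trimap == 1)
    bg_xy = np.argwhere(trimap == 0)
    fg_tree, bg_tree = cKDTree(fg_xy), cKDTree(bg_xy)
    alpha = (trimap == 1).astype(np.float64)   # known pixels keep 1 or 0
    for y, x in np.argwhere(trimap == -1):
        _, fi = fg_tree.query([y, x], k=m)     # m nearest foreground pixels
        _, bi = bg_tree.query([y, x], k=m)     # m nearest background pixels
        fg_samples = img[tuple(fg_xy[np.atleast_1d(fi)].T)]
        bg_samples = img[tuple(bg_xy[np.atleast_1d(bi)].T)]
        alpha[y, x] = best_pair_alpha(img[y, x], fg_samples, bg_samples)
    return alpha
```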
S600: superimpose grayscale information on the first image to generate a second image, and partition the second image into its sets of all foreground pixels, all background pixels, and all unknown pixels.
In this step, the present embodiment takes into account that, besides the effect of its RGB color, each pixel is also influenced by grayscale information; accordingly, after the grayscale information has been superimposed, the transparency mask is corrected by the following steps.
S700: perform steps S200 to S500 on the second image to determine a first transparency mask of the second image, and take the first transparency mask of the second image as a second transparency mask of the first image.
S800: correct the first transparency mask of the first image using the second transparency mask of the first image.
S900: according to the first transparency mask of the first image corrected in step S800, extract the foreground targets in the first image of the video, retrieve a specified target among all foreground targets, and then, in the time order of all frames, generate the trajectory of the specified target from all images that contain the specified target.
Thus far, the disclosure jointly exploits the confidence of foreground-background pixel pairs and grayscale information to provide a new video object trajectory extraction scheme. It should be understood that, within this scheme, the extraction of video foreground targets is a process of successive approximation: because of the color and grayscale transitions in the frames of a video, no single transparency mask can be said to be uniquely correct. In theory, the above embodiment fuses more information and weighs more factors, which favors a more comprehensive examination of the images in the video and hence the extraction of more satisfactory video foreground targets. It should also be understood that, when extracting the foreground targets in the first image of the video according to the first transparency mask, related means in the prior art may be consulted and combined; that is, the point of the above embodiment lies in obtaining the transparency mask in a new way and carrying out the final trajectory extraction of the specified target, not in how the video foreground is extracted from the transparency mask.
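An end-to-end sketch of S900 under stated assumptions: corrected_mask() stands for steps S100–S800 above, and match_target() is a hypothetical appearance matcher for the specified target (e.g. against a photo); neither is defined by the patent. Connected components and centroids are one reasonable way to turn the mask into per-frame target positions:

```python
import cv2
import numpy as np

def extract_trajectory(video_path, corrected_mask, match_target):
    # Walk the frames in time order, extract foreground targets from the
    # corrected transparency mask, and record the specified target's
    # centroid in each frame that contains it.
    cap = cv2.VideoCapture(video_path)
    trajectory, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        alpha = corrected_mask(frame)                  # steps S100-S800
        binary = (alpha > 0.5).astype(np.uint8)
        n, labels, stats, cents = cv2.connectedComponentsWithStats(binary)
        for i in range(1, n):                          # label 0 is background
            if match_target(frame, labels == i):       # is this the target?
                trajectory.append((frame_idx, tuple(cents[i])))
        frame_idx += 1
    cap.release()
    return trajectory   # (frame index, (x, y)) pairs in time order
```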
In another embodiment, the specified target comes from a photo or from an image database; for example, the specified target is a suspect, the photo is a recent photograph of the suspect, and the image database is a database of images of wanted persons.
In another embodiment, the following steps follow step S900:
S1000: extract each remaining frame image from the video and, taking each in turn as the first image, repeat the foregoing steps S100 to S900, so as to extract all foreground targets of the video; or
S1100: extract each remaining frame image from the video, take each in turn as the first image, and divide all foreground pixels F_c, all background pixels B_C, and all unknown pixels Z_C of the first image corresponding to the current frame according to the corrected first transparency mask of the previous frame's first image, then repeat the foregoing steps S200 to S900, so as to extract all foreground targets of the video; dividing the sets F_c, B_C, and Z_C of the first image corresponding to the current frame specifically comprises the following steps:
S11001: binarize the corrected first transparency mask of the previous frame's first image with a threshold of 0.5, obtaining a first binary image of the foreground targets;
S11002: take the first binary image as the initial value of a second binary image;
S11003: apply a morphological erosion to the second binary image with a circular structuring element of size 3x3, and update the second binary image with the result;
S11004: repeat step S11003 five times;
S11005: take the first binary image as the initial value of a third binary image;
S11006: apply a morphological dilation to the third binary image with a circular structuring element of size 3x3, and update the third binary image with the result;
S11007: repeat step S11006 five times;
S11008: take the pixels that are true in the second binary image as the set F_c of all foreground pixels, the pixels that are false in the third binary image as the set B_C of all background pixels, and the remaining pixels as the set Z_C of all unknown pixels.
It will be appreciated that repeating the above steps S100 to S900 for each frame of the video extracts all foreground targets in the video. Considering, however, that each frame and the frame after it usually exhibit continuity and similarity of picture content, the above embodiment can, in order to exploit this continuity and similarity fully, divide the sets F_c, B_C, and Z_C of the first image corresponding to the current frame according to the corrected first transparency mask of the previous frame's first image, and thereby strike a balance between the precision and the efficiency of the image processing. In other words, the embodiment has an inheriting character: it inherits the transparency mask of the previous frame and uses that mask to divide the foreground, background, and unknown pixel sets of the following frame; in view of the continuity and similarity of the picture content, this division not only follows the previous frame's transparency mask but also employs the means of morphological erosion and morphological dilation, which constitutes one of the innovations of the disclosure. A minimal sketch of this inheritance step is given after this paragraph.
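A minimal sketch of the inheritance step S11001–S11008, assuming OpenCV; the steps themselves (threshold 0.5, 3x3 circular element, five erosions, five dilations) are as stated in the text:

```python
import cv2
import numpy as np

def inherit_trimap(prev_alpha):
    # Divide the current frame's pixel sets from the previous frame's
    # corrected first transparency mask (steps S11001-S11008).
    first = (prev_alpha > 0.5).astype(np.uint8)              # S11001
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    second = cv2.erode(first, kernel, iterations=5)          # S11002-S11004
    third = cv2.dilate(first, kernel, iterations=5)          # S11005-S11007
    trimap = np.full(first.shape, -1, dtype=np.int8)         # unknown Z_C
    trimap[second == 1] = 1    # true in second binary image -> F_c
    trimap[third == 0] = 0     # false in third binary image -> B_C
    return trimap                                            # S11008
```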
In another embodiment, in step S600, grayscale information is superimposed on the first image to generate the second image in the following way:
S601: apply a mean filter to the first image to obtain a third image;
S602: generate the second image from the first image and the third image by the following formula:
where I_M2 denotes the gray value of the k-th pixel of the superimposed second image, x_r denotes a neighborhood pixel of the k-th pixel x_k of the first image, N_k denotes the number of pixels in the neighborhood centered on x_k, the mean-filter term denotes the pixel value of the k-th pixel of the third image obtained by mean-filtering the first image, and β takes the value 0.5.
The above embodiment gives a concrete way of superimposing the grayscale information through an empirical value and the related formula.
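The S602 formula is likewise missing from this text. Given the quantities it defines (a neighborhood mean over x_r with N_k pixels, the mean-filtered third image, β = 0.5), a convex blend of each pixel's gray value with its neighborhood mean is a natural reading; the blend direction and the kernel size are assumptions:

```python
import cv2
import numpy as np

def second_image(first_img, ksize=3, beta=0.5):
    # S601: mean-filter the first image to obtain the third image;
    # S602 (assumed form): IM2_k = (1 - beta) * gray_k + beta * mean_k.
    gray = cv2.cvtColor(first_img, cv2.COLOR_RGB2GRAY).astype(np.float64)
    third = cv2.blur(gray, (ksize, ksize))       # neighborhood mean
    return (1.0 - beta) * gray + beta * third    # superimposed second image
```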
In another embodiment, step S800 further comprises:
S801: from the second transparency mask of the first image and the first transparency mask of the first image, find respectively the edge of the second transparency mask and the edge of the first transparency mask;
S802: obtain the positions of all pixels on the edge of the second transparency mask and the positions of all pixels on the edge of the first transparency mask, determine the region where the two sets of positions coincide, and thereby determine the pixels Z_sp whose positions are identical;
S803: look up, for each pixel Z_sp, its transparency estimate in the first transparency mask of the first image and its transparency estimate in the second transparency mask of the first image, and take the average of the two as the corrected transparency estimate of Z_sp;
S804: correct the first transparency mask of the first image with the corrected transparency estimates of the pixels Z_sp.
This embodiment aims to find and compare the pixels whose positions are identical in the two transparency masks, and to correct the first transparency mask of the first image by averaging those pixels' transparency estimates in the respective masks, as in the sketch below.
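A sketch of the shared-edge correction S801–S804. The patent does not say how mask edges are found; a morphological gradient on the binarized masks is assumed here:

```python
import cv2
import numpy as np

KERNEL = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

def mask_edge(alpha):
    # Edge of a transparency mask, taken as the morphological gradient
    # of its binarization (an assumption; S801 leaves the method open).
    binary = (alpha > 0.5).astype(np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_GRADIENT, KERNEL).astype(bool)

def correct_on_shared_edges(alpha1, alpha2):
    # S802-S804: at positions where the two mask edges coincide (Z_sp),
    # replace the first mask's estimate by the average of both estimates.
    shared = mask_edge(alpha1) & mask_edge(alpha2)
    corrected = alpha1.copy()
    corrected[shared] = 0.5 * (alpha1[shared] + alpha2[shared])
    return corrected
```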
In another embodiment, step S802 further comprises:
S8021: from the region where the positions of all pixels on the edge of the second transparency mask coincide with the positions of all pixels on the edge of the first transparency mask, further determine the pixels Z_dp whose positions differ, comprising two cases: pixels Z_dp2 located on the edge of the second transparency mask, and pixels Z_dp1 located on the edge of the first transparency mask.
Unlike the previous embodiment, this embodiment additionally attends to the positions that differ between the edges determined by the two transparency masks, and finds those mutually distinct pixels.
S8022: using the position-distinct pixels Z_dp together with the position-identical pixels Z_sp, determine, from the edge of the second transparency mask and the edge of the first transparency mask, the enclosed region bounded between the two edges and the positions of all enclosed pixels of that region.
For this step, the edge corresponding to each mask can be regarded as a curve that is connected or closed to some degree; then, whatever the overlapping or non-overlapping relationship between the closed curves corresponding to the two masks, the pixels on the two edges whose positions do not correspond (i.e. differ or do not coincide) jointly determine the enclosed region bounded between the two edges and the positions of all enclosed pixels of that region.
S8023: perform the following sub-steps:
(1) look up the transparency estimate, in the first transparency mask of the first image, of the pixel at the position of Z_dp1, look up the transparency value of the pixel at the corresponding position in the second image, and take the average of the two as the corrected transparency estimate of Z_dp1;
(2) look up the transparency estimate, in the second transparency mask of the first image, of the pixel at the position of Z_dp2, look up the transparency value of the pixel at the corresponding position in the first image, and take the average of the two as the corrected transparency estimate of Z_dp2.
This step finds, for each pixel of the aforesaid enclosed region, its transparency estimate or transparency value under the two different systems and takes their average as the corrected transparency estimate of the pixel, for use in correcting the first transparency mask of the first image in the following step S8024. That is, the correction idea of this embodiment is similar to that of the previous embodiment; what it resolves is the region jointly enclosed by the edges of the two masks. Taking pixel Z_dp1 as an example: it belongs to the first transparency mask of the first image and has a transparency estimate there; in addition, the pixel at the corresponding position in the second image has a transparency value in the second image; this embodiment takes the average of that transparency estimate and that transparency value as the corrected transparency estimate of Z_dp1. Pixel Z_dp2 is handled similarly.
S8024: correct the first transparency mask of the first image by combining the corrected transparency estimates of the pixels Z_dp1 and Z_dp2, for example by taking them as the transparency values of the pixels at the corresponding positions of the first transparency mask. A sketch of one reading of this enclosed-region correction follows.
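One reading of S8021–S8024, offered as an assumption: the region "enclosed between the two edges" is where the binarized masks disagree (their symmetric difference), and every pixel there takes the average of its two transparency estimates, mirroring the averaging rule of sub-steps (1) and (2):

```python
import numpy as np

def correct_enclosed_region(alpha1, alpha2):
    # Pixels between the two mask boundaries are exactly those on which
    # the binarized masks disagree; average the two estimates there.
    b1, b2 = alpha1 > 0.5, alpha2 > 0.5
    enclosed = b1 ^ b2                 # region bounded by the two edges
    corrected = alpha1.copy()
    corrected[enclosed] = 0.5 * (alpha1[enclosed] + alpha2[enclosed])
    return corrected
```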
The steps in the method embodiments of the disclosure can be reordered, combined, and deleted according to actual needs.
It should be noted that the foregoing method embodiments are, for simplicity of description, expressed as series of action combinations; those skilled in the art should understand, however, that the present invention is not limited by the order of the actions described, because according to the present invention some steps may be performed in other orders or simultaneously.
In addition, referring to Fig. 2, in another embodiment the disclosure further discloses a video object trajectory extraction device, comprising:
a first division module, configured to: for a first image in a video, partition the image into a set F of all foreground pixels, a set B of all background pixels, and a set Z of all unknown pixels, the first image being a frame extracted from the video;
a first metric module, configured to: given certain foreground-background pixel pairs (F_i, B_j), measure the transparency of each unknown pixel Z_k according to the following formula:
where I_k is the RGB color value of the unknown pixel Z_k, the foreground pixels F_i are the m foreground pixels nearest to Z_k, the background pixels B_j are likewise the m background pixels nearest to Z_k, and the foreground-background pixel pairs (F_i, B_j) total m² groups;
a second metric module, configured to: for each of the m² foreground-background pixel pairs (F_i, B_j) and its corresponding transparency, measure the confidence n_ij of the pair according to the following formula:
where σ takes the value 0.1, and the pair with the highest confidence MAX(n_ij) is selected and denoted (F_iMAX, B_jMAX);
a computing module, configured to: compute the transparency estimate of each unknown pixel Z_k according to the following formula:
a determining module, configured to: preliminarily determine the first transparency mask of the first image from the transparency estimates of all unknown pixels Z_k;
a second division module, configured to: superimpose grayscale information on the first image to generate a second image, and partition the second image into its sets of all foreground pixels, all background pixels, and all unknown pixels;
a re-calling module, configured to: for the second image, call the first metric module, the second metric module, the computing module, and the determining module again, so as to determine the first transparency mask of the second image, and take the first transparency mask of the second image as the second transparency mask of the first image;
a correction module, configured to: correct the first transparency mask of the first image using the second transparency mask of the first image;
an extraction module, configured to: extract the foreground targets in the first image of the video according to the first transparency mask of the first image corrected by the correction module, retrieve a specified target among all foreground targets, and then, in the time order of all frames, generate the trajectory of the specified target from all images that contain the specified target.
For this embodiment, as shown in Fig. 2, the above modules can be implemented as a system of a processor and a memory; Fig. 2 does not, however, preclude each module from having its own processing unit to realize its data-processing capability.
In another embodiment, the device further comprises the following module:
a sequential calling module, configured to: extract each remaining frame image from the video and, taking each in turn as the first image, call in sequence the first division module, the first metric module, the second metric module, the computing module, the determining module, the second division module, the re-calling module, the correction module, and the extraction module, so as to extract all foreground targets of the video; or comprises:
an inheritance calling module, configured to: extract each remaining frame image from the video, take each in turn as the first image, and input it to a third division module, the third division module being configured to divide, according to the corrected first transparency mask of the previous frame's first image, all foreground pixels F_c, all background pixels B_C, and all unknown pixels Z_C of the first image corresponding to the current frame; the inheritance calling module then calls in sequence the first metric module, the second metric module, the computing module, the determining module, the second division module, the re-calling module, the correction module, and the extraction module, so as to extract all foreground targets of the video, the third division module comprising:
a first binary-image processing unit, configured to binarize the corrected first transparency mask of the previous frame's first image with a threshold of 0.5, obtaining a first binary image of the foreground targets;
a second binary-image initial unit, configured to take the first binary image as the initial value of a second binary image;
a second binary-image processing unit, configured to apply a morphological erosion to the second binary image with a circular structuring element of size 3x3 and update the second binary image with the result;
a first repeated calling unit, configured to repeatedly call the second binary-image processing unit five times;
a third binary-image initial unit, configured to take the first binary image as the initial value of a third binary image;
a third binary-image processing unit, configured to apply a morphological dilation to the third binary image with a circular structuring element of size 3x3 and update the third binary image with the result;
a second repeated calling unit, configured to repeatedly call the third binary-image processing unit five times;
a true-false division unit, configured to take the pixels that are true in the second binary image finally updated by the second binary-image processing unit as the set F_c of all foreground pixels, the pixels that are false in the third binary image finally updated by the third binary-image processing unit as the set B_C of all background pixels, and the remaining pixels as the set Z_C of all unknown pixels.
In another embodiment, the second division module further comprises:
a mean-filter unit, configured to apply a mean filter to the first image to obtain a third image;
a second-image generation unit, configured to generate the second image from the first image and the third image by the following formula:
where I_M2 denotes the gray value of the k-th pixel of the superimposed second image, x_r denotes a neighborhood pixel of the k-th pixel x_k of the first image, N_k denotes the number of pixels in the neighborhood centered on x_k, the mean-filter term denotes the pixel value of the k-th pixel of the third image obtained by mean-filtering the first image, and β takes the value 0.5.
In another embodiment, the correction module further comprises:
an edge-finding unit, configured to find, from the second transparency mask of the first image and the first transparency mask of the first image, the edge of the second transparency mask and the edge of the first transparency mask respectively;
a position-determining unit, configured to obtain the positions of all pixels on the edge of the second transparency mask and the positions of all pixels on the edge of the first transparency mask, determine the region where the two sets of positions coincide, and thereby determine the pixels Z_sp whose positions are identical;
a first correction unit, configured to look up, for each pixel Z_sp, its transparency estimate in the first transparency mask of the first image and its transparency estimate in the second transparency mask of the first image, and take the average of the two as the corrected transparency estimate of Z_sp;
a second correction unit, configured to correct the first transparency mask of the first image with the corrected transparency estimates of the pixels Z_sp.
It will be appreciated that the device can implement the method described in the embodiment above.
In another embodiment, the position-determining unit further comprises:
a different-position subunit, configured to: from the region where the positions of all pixels on the edge of the second transparency mask coincide with the positions of all pixels on the edge of the first transparency mask, further determine the pixels Z_dp whose positions differ, comprising pixels Z_dp2 located on the edge of the second transparency mask and pixels Z_dp1 located on the edge of the first transparency mask;
an enclosure subunit, configured to: using the position-distinct pixels Z_dp together with the position-identical pixels Z_sp, determine, from the edge of the second transparency mask and the edge of the first transparency mask, the enclosed region bounded between the two edges and the positions of all enclosed pixels of that region;
a repeated look-up subunit, configured to:
(1) look up the transparency estimate, in the first transparency mask of the first image, of the pixel at the position of Z_dp1, look up the transparency value of the pixel at the corresponding position in the second image, and take the average of the two as the corrected transparency estimate of Z_dp1;
(2) look up the transparency estimate, in the second transparency mask of the first image, of the pixel at the position of Z_dp2, look up the transparency value of the pixel at the corresponding position in the first image, and take the average of the two as the corrected transparency estimate of Z_dp2;
a combined correction subunit, configured to correct the first transparency mask of the first image by combining the corrected transparency estimates of the pixels Z_dp1 and Z_dp2.
Those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions, modules, and units involved are not necessarily essential to the invention.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the disclosure, it should be understood that the disclosed method can be implemented as corresponding functional units, a processor, or even a system, in which the parts of the system may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's scheme. In addition, the functional units may be integrated in one processing unit, may exist physically separately, or two or more units may be integrated in one unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit. If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a smartphone, a personal digital assistant, a wearable device, a laptop, or a tablet computer) to execute all or part of the steps of the methods of the embodiments of the disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above embodiments are intended only to illustrate the technical solutions of the disclosure, not to limit them; although the disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, without such modifications or replacements departing the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the disclosure.

Claims (10)

1. A video object trajectory extraction method, comprising the following steps:
S100: for a first image in a video, partitioning the image into a set F of all foreground pixels, a set B of all background pixels, and a set Z of all unknown pixels, wherein the first image is a frame extracted from the video;
S200: given certain foreground-background pixel pairs (F_i, B_j), measuring the transparency of each unknown pixel Z_k according to the following formula:
wherein I_k is the RGB color value of the unknown pixel Z_k, the foreground pixels F_i are the m foreground pixels nearest to Z_k, the background pixels B_j are likewise the m background pixels nearest to Z_k, and the foreground-background pixel pairs (F_i, B_j) total m² groups;
S300: for each of the m² foreground-background pixel pairs (F_i, B_j) and its corresponding transparency, measuring the confidence n_ij of the pair according to the following formula:
wherein σ takes the value 0.1, and the pair with the highest confidence MAX(n_ij) is selected and denoted (F_iMAX, B_jMAX);
S400: computing the transparency estimate of each unknown pixel Z_k according to the following formula:
S500: preliminarily determining a first transparency mask of the first image from the transparency estimates of all unknown pixels Z_k;
S600: superimposing grayscale information on the first image to generate a second image, and partitioning the second image into its sets of all foreground pixels, all background pixels, and all unknown pixels;
S700: performing steps S200 to S500 on the second image to determine a first transparency mask of the second image, and taking the first transparency mask of the second image as a second transparency mask of the first image;
S800: correcting the first transparency mask of the first image using the second transparency mask of the first image;
S900: according to the first transparency mask of the first image corrected in step S800, extracting the foreground targets in the first image of the video, retrieving a specified target among all foreground targets, and then, in the time order of all frames, generating the trajectory of the specified target from all images that contain the specified target.
2. The method according to claim 1, further comprising, after step S900, the following steps:
S1000: extracting each remaining frame image from the video and, taking each in turn as the first image, repeating the foregoing steps S100 to S900, so as to extract all foreground targets of the video; or
S1100: extracting each remaining frame image from the video, taking each in turn as the first image, dividing all foreground pixels F_c, all background pixels B_C, and all unknown pixels Z_C of the first image corresponding to the current frame according to the corrected first transparency mask of the previous frame's first image, and repeating the foregoing steps S200 to S900, so as to extract all foreground targets of the video, wherein dividing the sets F_c, B_C, and Z_C of the first image corresponding to the current frame specifically comprises the following steps:
S11001: binarizing the corrected first transparency mask of the previous frame's first image with a threshold of 0.5, obtaining a first binary image of the foreground targets;
S11002: taking the first binary image as the initial value of a second binary image;
S11003: applying a morphological erosion to the second binary image with a circular structuring element of size 3x3, and updating the second binary image with the result;
S11004: repeating step S11003 five times;
S11005: taking the first binary image as the initial value of a third binary image;
S11006: applying a morphological dilation to the third binary image with a circular structuring element of size 3x3, and updating the third binary image with the result;
S11007: repeating step S11006 five times;
S11008: taking the pixels that are true in the second binary image as the set F_c of all foreground pixels, the pixels that are false in the third binary image as the set B_C of all background pixels, and the remaining pixels as the set Z_C of all unknown pixels.
3. The method according to claim 1, wherein, in step S600, grayscale information is superimposed on the first image to generate the second image in the following way:
S601: applying a mean filter to the first image to obtain a third image;
S602: generating the second image from the first image and the third image by the following formula:
wherein I_M2 denotes the gray value of the k-th pixel of the superimposed second image, x_r denotes a neighborhood pixel of the k-th pixel x_k of the first image, N_k denotes the number of pixels in the neighborhood centered on x_k, the mean-filter term denotes the pixel value of the k-th pixel of the third image obtained by mean-filtering the first image, and β takes the value 0.5.
4. The method according to claim 1, wherein step S800 further comprises:
S801: from the second transparency mask of the first image and the first transparency mask of the first image, finding respectively the edge of the second transparency mask and the edge of the first transparency mask;
S802: obtaining the positions of all pixels on the edge of the second transparency mask and the positions of all pixels on the edge of the first transparency mask, determining the region where the two sets of positions coincide, and thereby determining the pixels Z_sp whose positions are identical;
S803: looking up, for each pixel Z_sp, its transparency estimate in the first transparency mask of the first image and its transparency estimate in the second transparency mask of the first image, and taking the average of the two as the corrected transparency estimate of Z_sp;
S804: correcting the first transparency mask of the first image with the corrected transparency estimates of the pixels Z_sp.
5. The method according to claim 4, wherein step S802 further comprises:
S8021: from the region where the positions of all pixels on the edge of the second transparency mask coincide with the positions of all pixels on the edge of the first transparency mask, further determining the pixels Z_dp whose positions differ, comprising: pixels Z_dp2 located on the edge of the second transparency mask, and pixels Z_dp1 located on the edge of the first transparency mask;
S8022: using the position-distinct pixels Z_dp together with the position-identical pixels Z_sp, determining, from the edge of the second transparency mask and the edge of the first transparency mask, the enclosed region bounded between the two edges and the positions of all enclosed pixels of that region;
S8023: performing the following sub-steps:
(1) looking up the transparency estimate, in the first transparency mask of the first image, of the pixel at the position of Z_dp1, looking up the transparency value of the pixel at the corresponding position in the second image, and taking the average of the two as the corrected transparency estimate of Z_dp1;
(2) looking up the transparency estimate, in the second transparency mask of the first image, of the pixel at the position of Z_dp2, looking up the transparency value of the pixel at the corresponding position in the first image, and taking the average of the two as the corrected transparency estimate of Z_dp2;
S8024: correcting the first transparency mask of the first image by combining the corrected transparency estimates of the pixels Z_dp1 and Z_dp2.
6. A video object trajectory extraction device, comprising:
a first division module, configured to: for a first image in a video, partition the image into a set F of all foreground pixels, a set B of all background pixels, and a set Z of all unknown pixels, wherein the first image is a frame extracted from the video;
a first metric module, configured to: given certain foreground-background pixel pairs (F_i, B_j), measure the transparency of each unknown pixel Z_k according to the following formula:
wherein I_k is the RGB color value of the unknown pixel Z_k, the foreground pixels F_i are the m foreground pixels nearest to Z_k, the background pixels B_j are likewise the m background pixels nearest to Z_k, and the foreground-background pixel pairs (F_i, B_j) total m² groups;
a second metric module, configured to: for each of the m² foreground-background pixel pairs (F_i, B_j) and its corresponding transparency, measure the confidence n_ij of the pair according to the following formula:
wherein σ takes the value 0.1, and the pair with the highest confidence MAX(n_ij) is selected and denoted (F_iMAX, B_jMAX);
a computing module, configured to: compute the transparency estimate of each unknown pixel Z_k according to the following formula:
a determining module, configured to: preliminarily determine the first transparency mask of the first image from the transparency estimates of all unknown pixels Z_k;
a second division module, configured to: superimpose grayscale information on the first image to generate a second image, and partition the second image into its sets of all foreground pixels, all background pixels, and all unknown pixels;
a re-calling module, configured to: for the second image, call the first metric module, the second metric module, the computing module, and the determining module again, so as to determine the first transparency mask of the second image, and take the first transparency mask of the second image as the second transparency mask of the first image;
a correction module, configured to: correct the first transparency mask of the first image using the second transparency mask of the first image;
an extraction module, configured to: extract the foreground targets in the first image of the video according to the first transparency mask of the first image corrected by the correction module, retrieve a specified target among all foreground targets, and then, in the time order of all frames, generate the trajectory of the specified target from all images that contain the specified target.
7. The device according to claim 6, further comprising:
a sequential calling module, configured to: extract each remaining frame image from the video and, taking each in turn as the first image, call in sequence the first division module, the first metric module, the second metric module, the computing module, the determining module, the second division module, the re-calling module, the correction module, and the extraction module, so as to extract all foreground targets of the video; or comprising:
an inheritance calling module, configured to: extract each remaining frame image from the video, take each in turn as the first image, and input it to a third division module, wherein the third division module is configured to divide, according to the corrected first transparency mask of the previous frame's first image, all foreground pixels F_c, all background pixels B_C, and all unknown pixels Z_C of the first image corresponding to the current frame; the inheritance calling module then calls in sequence the first metric module, the second metric module, the computing module, the determining module, the second division module, the re-calling module, the correction module, and the extraction module, so as to extract all foreground targets of the video, wherein the third division module comprises:
a first binary-image processing unit, configured to binarize the corrected first transparency mask of the previous frame's first image with a threshold of 0.5, obtaining a first binary image of the foreground targets;
a second binary-image initial unit, configured to take the first binary image as the initial value of a second binary image;
a second binary-image processing unit, configured to apply a morphological erosion to the second binary image with a circular structuring element of size 3x3 and update the second binary image with the result;
a first repeated calling unit, configured to repeatedly call the second binary-image processing unit five times;
a third binary-image initial unit, configured to take the first binary image as the initial value of a third binary image;
a third binary-image processing unit, configured to apply a morphological dilation to the third binary image with a circular structuring element of size 3x3 and update the third binary image with the result;
a second repeated calling unit, configured to repeatedly call the third binary-image processing unit five times;
a true-false division unit, configured to take the pixels that are true in the second binary image finally updated by the second binary-image processing unit as the set F_c of all foreground pixels, the pixels that are false in the third binary image finally updated by the third binary-image processing unit as the set B_C of all background pixels, and the remaining pixels as the set Z_C of all unknown pixels.
8. The device according to claim 6, wherein the second division module further comprises: a mean-filter unit, configured to apply a mean filter to the first image to obtain a third image;
a second-image generation unit, configured to generate the second image from the first image and the third image by the following formula:
wherein I_M2 denotes the gray value of the k-th pixel of the superimposed second image, x_r denotes a neighborhood pixel of the k-th pixel x_k of the first image, N_k denotes the number of pixels in the neighborhood centered on x_k, the mean-filter term denotes the pixel value of the k-th pixel of the third image obtained by mean-filtering the first image, and β takes the value 0.5.
9. The device according to claim 6, wherein the correction module further comprises:
An edge-finding unit, configured to find, from the second transparency mask of the first image and the first transparency mask of the first image, the edge of the second transparency mask and the edge of the first transparency mask respectively;
A position determining unit, configured to obtain the positions of all pixels on the edge of the second transparency mask and the positions of all pixels on the edge of the first transparency mask, determine the region where the positions of the edge pixels of the second transparency mask coincide with the positions of the edge pixels of the first transparency mask, and thereby determine the position-identical pixels Zsp;
A first correction unit, configured to look up the transparency estimate of the pixels Zsp in the first transparency mask of the first image and the transparency estimate of the pixels Zsp in the second transparency mask of the first image, and to take the average of the two as the corrected transparency estimate of the pixels Zsp;
A second correction unit, configured to correct the first transparency mask of the first image with the corrected transparency estimates of the pixels Zsp.
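One plausible reading of these units, sketched below with hypothetical helpers: extract a one-pixel boundary from each mask, and wherever the two boundaries coincide (the pixels Zsp), replace the first mask's estimate with the average of the two masks' estimates. Edge extraction via binarization and erosion is an assumption; the claims do not fix an edge detector.

```python
from scipy import ndimage

def correct_on_shared_edges(alpha1, alpha2):
    """alpha1, alpha2: first and second transparency masks of the first
    image, float arrays in [0, 1]. Returns a corrected copy of alpha1."""
    def edge(alpha):
        fg = alpha > 0.5
        return fg & ~ndimage.binary_erosion(fg)   # one-pixel boundary
    z_sp = edge(alpha1) & edge(alpha2)             # position-identical pixels
    corrected = alpha1.copy()
    corrected[z_sp] = 0.5 * (alpha1[z_sp] + alpha2[z_sp])
    return corrected
```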
10. The device according to claim 9, wherein the position determining unit further comprises:
A different-position subunit, configured to further determine, from the region where the positions of the edge pixels of the second transparency mask coincide with those of the first transparency mask, the position-different pixels Zdp, comprising pixels Zdp2 located on the edge of the second transparency mask and pixels Zdp1 located on the edge of the first transparency mask;
A closure subunit, configured to obtain, using the position-different pixels Zdp and the position-identical pixels Zsp, the closed region enclosed between the edge of the second transparency mask and the edge of the first transparency mask, and the positions of all pixels enclosed in that region;
A repeated lookup subunit, configured to:
(1) look up the transparency estimate, in the first transparency mask of the first image, of the pixel at the position of Zdp1, look up the transparency value of the corresponding pixel in the second image, and take the average of the two as the corrected transparency estimate of Zdp1;
(2) look up the transparency estimate, in the second transparency mask of the first image, of the pixel at the position of Zdp2, look up the transparency value of the corresponding pixel in the first image, and take the average of the two as the corrected transparency estimate of Zdp2;
A comprehensive correction subunit, configured to correct the first transparency mask of the first image by combining the corrected transparency estimates of the pixels Zdp1 and Zdp2.
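The cross-lookup in claim 10 is hard to pin down from the translation, so the sketch below simplifies it: edge pixels belonging to only one of the two boundaries (Zdp1 and Zdp2 together) also receive the average of the two masks' values at that position. This is an illustrative assumption, not the claimed procedure, and it omits the closed-region bookkeeping of the closure subunit.

```python
from scipy import ndimage

def correct_on_disjoint_edges(alpha1, alpha2):
    """Simplified Zdp handling: the claim's lookups into 'the second image'
    and 'the first image' are approximated by the two masks themselves."""
    def edge(alpha):
        fg = alpha > 0.5
        return fg & ~ndimage.binary_erosion(fg)   # one-pixel boundary
    e1, e2 = edge(alpha1), edge(alpha2)
    z_dp = e1 ^ e2            # Zdp1 and Zdp2: pixels on exactly one boundary
    corrected = alpha1.copy()
    corrected[z_dp] = 0.5 * (alpha1[z_dp] + alpha2[z_dp])
    return corrected
```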
CN201910505008.6A 2018-09-26 2019-06-11 A kind of video object track extraction method and device Withdrawn CN110363788A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2018/107514 2018-09-26
CN2018107514 2018-09-26

Publications (1)

Publication Number Publication Date
CN110363788A true CN110363788A (en) 2019-10-22

Family

ID=68140393

Family Applications (5)

Application Number Title Priority Date Filing Date
CN201910444633.4A Withdrawn CN110378867A (en) 2018-09-26 2019-05-24 Method for obtaining a transparency mask from foreground-background pixel pairs and grayscale information
CN201910444632.XA Withdrawn CN110335288A (en) 2018-09-26 2019-05-24 A kind of video foreground target extraction method and device
CN201910505008.6A Withdrawn CN110363788A (en) 2018-09-26 2019-06-11 A kind of video object track extraction method and device
CN201910628287.5A Withdrawn CN110516534A (en) 2018-09-26 2019-07-11 A kind of video processing method and device based on semantic analysis
CN201910737589.6A Withdrawn CN110659562A (en) 2018-09-26 2019-08-09 Deep learning (DNN) classroom learning behavior analysis method and device

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201910444633.4A Withdrawn CN110378867A (en) 2018-09-26 2019-05-24 Method for obtaining a transparency mask from foreground-background pixel pairs and grayscale information
CN201910444632.XA Withdrawn CN110335288A (en) 2018-09-26 2019-05-24 A kind of video foreground target extraction method and device

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201910628287.5A Withdrawn CN110516534A (en) 2018-09-26 2019-07-11 A kind of video processing method and device based on semantic analysis
CN201910737589.6A Withdrawn CN110659562A (en) 2018-09-26 2019-08-09 Deep learning (DNN) classroom learning behavior analysis method and device

Country Status (2)

Country Link
CN (5) CN110378867A (en)
WO (5) WO2020062899A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989962B (en) * 2021-02-24 2024-01-05 上海商汤智能科技有限公司 Track generation method, track generation device, electronic equipment and storage medium
KR20240005727A (en) * 2021-04-06 2024-01-12 나이앤틱, 인크. Panoptic segmentation prediction for augmented reality

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102456212A (en) * 2010-10-19 2012-05-16 北大方正集团有限公司 Separation method and system for visible watermark in digital image
CN102999892A (en) * 2012-12-03 2013-03-27 东华大学 Intelligent fusion method for depth images based on area shades and red green blue (RGB) images
CN103366364A (en) * 2013-06-07 2013-10-23 太仓中科信息技术研究院 Color difference-based image matting method
US8731315B2 (en) * 2011-09-12 2014-05-20 Canon Kabushiki Kaisha Image compression and decompression for image matting
WO2015048694A2 (en) * 2013-09-27 2015-04-02 Pelican Imaging Corporation Systems and methods for depth-assisted perspective distortion correction
US20170116481A1 (en) * 2015-10-23 2017-04-27 Beihang University Method for video matting via sparse and low-rank representation
CN107516319A (en) * 2017-09-05 2017-12-26 中北大学 A kind of high-accuracy simple interactive image matting method, storage device and terminal
CN108391118A (en) * 2018-03-21 2018-08-10 惠州学院 A kind of display system for realizing 3D rendering based on projection pattern

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6361910B1 (en) * 2000-02-03 2002-03-26 Applied Materials, Inc Straight line defect detection
US6870945B2 (en) * 2001-06-04 2005-03-22 University Of Washington Video object tracking by estimating and subtracting background
US7466842B2 (en) * 2005-05-20 2008-12-16 Mitsubishi Electric Research Laboratories, Inc. Modeling low frame rate videos with bayesian estimation
US8508546B2 (en) * 2006-09-19 2013-08-13 Adobe Systems Incorporated Image mask generation
US8520972B2 (en) * 2008-09-12 2013-08-27 Adobe Systems Incorporated Image decomposition
CN101686338B (en) * 2008-09-26 2013-12-25 索尼株式会社 System and method for partitioning foreground and background in video
CN101588459B (en) * 2009-06-26 2011-01-05 北京交通大学 Video keying processing method
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
US8625888B2 (en) * 2010-07-21 2014-01-07 Microsoft Corporation Variable kernel size image matting
US8386964B2 (en) * 2010-07-21 2013-02-26 Microsoft Corporation Interactive image matting
CN102163216B (en) * 2010-11-24 2013-02-13 广州市动景计算机科技有限公司 Picture display method and device thereof
CN102236901B (en) * 2011-06-30 2013-06-05 南京大学 Method for tracking target based on graph theory cluster and color invariant space
US8744123B2 (en) * 2011-08-29 2014-06-03 International Business Machines Corporation Modeling of temporarily static objects in surveillance video data
US9305357B2 (en) * 2011-11-07 2016-04-05 General Electric Company Automatic surveillance video matting using a shape prior
CN102651135B (en) * 2012-04-10 2015-06-17 电子科技大学 Optimized direction sampling-based natural image matting method
US8792718B2 (en) * 2012-06-29 2014-07-29 Adobe Systems Incorporated Temporal matte filter for video matting
AU2013206597A1 (en) * 2013-06-28 2015-01-22 Canon Kabushiki Kaisha Depth constrained superpixel-based depth map refinement
US20150091891A1 (en) * 2013-09-30 2015-04-02 Dumedia, Inc. System and method for non-holographic teleportation
CN104112144A (en) * 2013-12-17 2014-10-22 深圳市华尊科技有限公司 Person and vehicle identification method and device
US10089740B2 (en) * 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
CN104952089B (en) * 2014-03-26 2019-02-15 腾讯科技(深圳)有限公司 A kind of image processing method and system
CN103903230A (en) * 2014-03-28 2014-07-02 哈尔滨工程大学 Video image sea fog removal and clearing method
CN105590307A (en) * 2014-10-22 2016-05-18 华为技术有限公司 Transparency-based matting method and apparatus
CN104573688B (en) * 2015-01-19 2017-08-25 电子科技大学 Mobile platform tobacco laser code intelligent identification Method and device based on deep learning
CN104680482A (en) * 2015-03-09 2015-06-03 华为技术有限公司 Method and device for image processing
CN104935832B (en) * 2015-03-31 2019-07-12 浙江工商大学 For the video keying method with depth information
CN105100646B (en) * 2015-08-31 2018-09-11 北京奇艺世纪科技有限公司 Method for processing video frequency and device
CN105809679B (en) * 2016-03-04 2019-06-18 李云栋 Mountain railway side slope rockfall detection method based on visual analysis
US10275892B2 (en) * 2016-06-09 2019-04-30 Google Llc Multi-view scene segmentation and propagation
CN106204567B (en) * 2016-07-05 2019-01-29 华南理工大学 A kind of natural background video matting method
CN107665326B (en) * 2016-07-29 2024-02-09 奥的斯电梯公司 Monitoring system for passenger conveyor, passenger conveyor and monitoring method thereof
CN107872644B (en) * 2016-09-23 2020-10-09 亿阳信通股份有限公司 Video monitoring method and device
CN106778810A (en) * 2016-11-23 2017-05-31 北京联合大学 Original image layer fusion method and system based on RGB feature Yu depth characteristic
US10198621B2 (en) * 2016-11-28 2019-02-05 Sony Corporation Image-Processing device and method for foreground mask correction for object segmentation
CN106952276A (en) * 2017-03-20 2017-07-14 成都通甲优博科技有限责任公司 A kind of image matting method and device
CN107194867A (en) * 2017-05-14 2017-09-22 北京工业大学 A kind of image matting synthesis method based on CUDA
CN107273905B (en) * 2017-06-14 2020-05-08 电子科技大学 Target active contour tracking method combined with motion information
CN107230182B (en) * 2017-08-03 2021-11-09 腾讯科技(深圳)有限公司 Image processing method and device and storage medium
CN108399361A (en) * 2018-01-23 2018-08-14 南京邮电大学 A kind of pedestrian detection method based on convolutional neural networks CNN and semantic segmentation
CN108320298B (en) * 2018-04-28 2022-01-28 亮风台(北京)信息科技有限公司 Visual target tracking method and equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102456212A (en) * 2010-10-19 2012-05-16 北大方正集团有限公司 Separation method and system for visible watermark in digital image
US8731315B2 (en) * 2011-09-12 2014-05-20 Canon Kabushiki Kaisha Image compression and decompression for image matting
CN102999892A (en) * 2012-12-03 2013-03-27 东华大学 Intelligent fusion method for depth images based on area shades and red green blue (RGB) images
CN103366364A (en) * 2013-06-07 2013-10-23 太仓中科信息技术研究院 Color difference-based image matting method
WO2015048694A2 (en) * 2013-09-27 2015-04-02 Pelican Imaging Corporation Systems and methods for depth-assisted perspective distortion correction
US20170116481A1 (en) * 2015-10-23 2017-04-27 Beihang University Method for video matting via sparse and low-rank representation
CN107516319A (en) * 2017-09-05 2017-12-26 中北大学 A kind of high-accuracy simple interactive image matting method, storage device and terminal
CN108391118A (en) * 2018-03-21 2018-08-10 惠州学院 A kind of display system for realizing 3D rendering based on projection pattern

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NING XU et al.: "Deep Image Matting", Computer Vision Foundation *
SHUTAO LI et al.: "Image matting for fusion of multi-focus images in dynamic scenes", Information Fusion *
ZHAOQUAN CAI et al.: "Improving sampling-based image matting with cooperative coevolution differential evolution algorithm", Soft Computing *
XIE BIN et al.: "Image dehazing algorithm based on fog mask theory", Computer Engineering & Science *

Also Published As

Publication number Publication date
CN110335288A (en) 2019-10-15
CN110659562A (en) 2020-01-07
CN110516534A (en) 2019-11-29
WO2020062899A1 (en) 2020-04-02
WO2020063189A1 (en) 2020-04-02
WO2020062898A1 (en) 2020-04-02
WO2020063321A1 (en) 2020-04-02
WO2020063436A1 (en) 2020-04-02
CN110378867A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
EP3540637B1 (en) Neural network model training method, device and storage medium for image processing
CN111461110B (en) Small target detection method based on multi-scale image and weighted fusion loss
CN110047069B (en) Image detection device
CN108986140B (en) Target scale self-adaptive tracking method based on correlation filtering and color detection
CN110765860B (en) Tumble judging method, tumble judging device, computer equipment and storage medium
CN107169463B (en) Method for detecting human face, device, computer equipment and storage medium
CN111476806B (en) Image processing method, image processing device, computer equipment and storage medium
CN108961158B (en) Image synthesis method and device
CN112330684B (en) Object segmentation method and device, computer equipment and storage medium
CN113378812A (en) Digital dial plate identification method based on Mask R-CNN and CRNN
CN111783779A (en) Image processing method, apparatus and computer-readable storage medium
JP2007128195A (en) Image processing system
CN113870157A (en) SAR image synthesis method based on cycleGAN
CN116645592B (en) Crack detection method based on image processing and storage medium
CN114021704B (en) AI neural network model training method and related device
CN110390259A (en) Recognition methods, device, computer equipment and the storage medium of diagram data
CN116030498A (en) Virtual garment running and showing oriented three-dimensional human body posture estimation method
CN110363788A (en) A kind of video object track extraction method and device
CN111563462A (en) Image element detection method and device
CN114004772B (en) Image processing method, and method, system and equipment for determining image synthesis model
CN111027472A (en) Video identification method based on fusion of video optical flow and image space feature weight
CN117934827A (en) Key point detection model training method, key point detection method and related device
CN113506226B (en) Motion blur restoration method and system
CN110766079B (en) Training data generation method and device for screen abnormal picture detection
CN114648800A (en) Face image detection model training method, face image detection method and device

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20191022)