
CN101533512A - Method for automatically extracting interesting image regions based on human visual attention system - Google Patents

Method for automatically extracting interesting image regions based on human visual attention system Download PDF

Info

Publication number
CN101533512A
Authority
CN
China
Prior art keywords
image
pixel
input image
contrast
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910022191A
Other languages
Chinese (zh)
Other versions
CN101533512B (en)
Inventor
齐飞
吴金建
石光明
刘焱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN2009100221910A priority Critical patent/CN101533512B/en
Publication of CN101533512A publication Critical patent/CN101533512A/en
Application granted granted Critical
Publication of CN101533512B publication Critical patent/CN101533512B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically extracting image regions of interest based on the human visual attention system, which mainly solves the problem that prior extraction methods cannot extract multiple regions of interest or preserve edge information. The method comprises the following steps: computing the local luminance contrast, the global luminance contrast and the edges of an input image; fusing the feature maps corresponding to these three features with a global nonlinear normalization merging algorithm to generate a contrast map; computing the position feature of the input image to build a weighting map; building the saliency map corresponding to the input image from the contrast map and the weighting map; and segmenting the regions of interest of the input image according to the saliency map. The method can effectively extract multiple regions of interest from an input image and can be used in the technical fields of image analysis and image compression.

Description

Method for automatically extracting interesting image regions based on human visual attention system
Technical field
The present invention relates to methods for extracting image regions of interest, and in particular to a method that extracts image regions of interest by simulating the human visual attention system, for use in the fields of image analysis and image compression.
Technical background
With the rapid development of computer network and communication technology, information services on the Internet, and image-based information services in particular, have grown quickly.
The volume of image data is huge, and how to process image information effectively has become a research focus of image information services. For the human visual system, the information provided by an image is not equally important everywhere. Some regions of an image supply the main content needed to understand the image; these are called regions of interest. The remaining regions provide only less important background content. Locating the regions of interest in an image is therefore of great significance for image analysis, image compression and related tasks.
W. Osberger and A. J. Maeder, in "Automatic identification of perceptually important regions in the image", Proc. Int'l Conf. Pattern Recognition, 1998, pp. 17-20, proposed a method for extracting regions of interest based on the human visual attention system. The method analyzes image characteristic factors on the basis of an image segmentation and then determines the regions of interest; its success or failure therefore depends on the segmentation algorithm. Itti, Laurent, et al., in "A model of saliency-based visual attention for rapid scene analysis", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, 1998, proposed a method for extracting regions of interest by analyzing three characteristic factors of each pixel: color, orientation and intensity. The method is easy to analyze and implement and can accurately locate the approximate positions of regions of interest. However, because it analyzes image features with a Gaussian pyramid model, which is a down-sampling scheme, it inevitably loses some detailed information of the image, such as edge information. C. M. Privitera and L. W. Stark, in "Algorithms for defining visual region-of-interest: Comparison with eye fixations", IEEE Trans. Pattern Anal. Machine Intell., vol. 22, no. 9, pp. 970-982, 2000, proposed extracting regions of interest directly from the image with a segmentation algorithm based on a single image feature. The method is simple to operate and easy to implement, and for simple images it can extract the regions of interest effectively; for images with complex backgrounds, however, it performs poorly. SooYeong Kwak et al., in "Automatic salient-object extraction using the contrast map and salient point", Advances in Multimedia Information Processing - PCM 2004, vol. 3332 of LNCS, pp. 138-145, Springer Berlin, and K. B. Chul et al., in "Automatic object-of-interest segmentation from nature images", Proc. Int'l Conf. Pattern Recognition, 2006, pp. 45-48, proposed methods that detect salient objects with an attention-focus window. Because these methods all build the saliency map with the Itti model, they also lose detailed information such as edges during processing, and when multiple objects appear in an image at the same time, the attention-window approach fails to detect all of the salient objects.
Summary of the invention
The objective of the invention is to overcome the defects and deficiencies of the above prior art and to provide a simple method for automatically extracting image regions of interest based on the human visual attention system, so as to effectively extract the regions of interest of images with complex backgrounds as well as multiple target objects appearing in an image.
To achieve the above objective, the present invention simulates the human visual attention system and analyzes the low-level and high-level factors in the image that influence the human visual system. The invention mainly analyzes a contrast map built from three low-level factors that influence the visual system, namely local luminance contrast, global luminance contrast and edges, together with a weighting map built from one high-level factor, namely position, and builds a saliency map by combining the contrast map and the weighting map. The implementation steps are as follows:
(1) compute, for each pixel of the input image, the three low-level feature factors: local luminance contrast, global luminance contrast and edge;
(2) fuse the three low-level feature factors of the input image with a global nonlinear normalization merging algorithm to generate the contrast map;
(3) compute the weight of each pixel of the input image according to the position of the pixel in the image, obtaining the weighting map corresponding to the input image;
(4) generate the saliency map of the input image from the contrast map and the weighting map;
(5) determine the regions of interest of the input image according to the indication of the saliency map.
The calculation of the local luminance contrast computes the luminance difference between each pixel and its small surrounding neighborhood, with the following steps:
(2a) smooth the input image with a Gaussian function:
I(σ_i) = I ⊗ G(σ_i)
where I is the original input image, G(σ_i) is a Gaussian function, and I(σ_i) is the smoothed image;
(2b) compute the absolute luminance difference between each pixel and its small surrounding neighborhood with the difference-of-Gaussian function:
DoG(x, y, σ_1, σ_2) = |I(x, y, σ_1) − I(x, y, σ_2)|
where the value of σ_i determines the degree of association between the central pixel and the pixels in its small surrounding neighborhood; here σ_1 = 1, I(x, y, σ_1) is the pixel of the image smoothed by the Gaussian function with the first variance σ_1, I(x, y, σ_2) is the pixel of the image smoothed by the Gaussian function with the second variance σ_2, and DoG(x, y, σ_1, σ_2) is the local luminance contrast value;
(2c) take two different σ_2 values to obtain two luminance difference maps, DoG1 and DoG2, and merge the two maps with the global nonlinear normalization fusion method to obtain the local luminance contrast map.
The weight of each pixel of the input image is computed from the position of the pixel in the image according to the following formula:
W_center(x, y) = 1, if (x, y) ∈ center; otherwise W_center(x, y) = (1 + cos(πr/R)) / 2
where 'center' denotes the central region of the image, r is the distance from the point (x, y) to the central region of the input image, and R is the distance from the image edge to the image center.
The present invention has the following advantages:
1) Because the invention analyzes the input image according to multiple factors that influence what the human visual system attends to, and thus simulates the human visual system, it can accurately extract the regions of interest of the input image.
2) Because the invention operates on each pixel of the input image, the design process is simple and easy to implement.
3) Because the invention analyzes three features of each pixel of the input image, namely local luminance contrast, global luminance contrast and edge, it can extract the edge information of objects.
4) Because the invention locates regions of interest according to the indication of the saliency map, it can extract multiple regions of interest simultaneously.
Embodiment
With reference to Fig. 1, the specific implementation steps of the present invention are as follows:
Step 1: compute the three low-level feature factors of each pixel of the input image, namely local luminance contrast, global luminance contrast and edge.
(1a) Compute the local luminance contrast
The difference-of-Gaussian (DoG) function can effectively represent the difference between a central pixel and the pixels in its surrounding neighborhood, so the DoG function is adopted to compute the luminance contrast within a local region.
With reference to Fig. 2, the local luminance contrast in the present invention is computed as follows (a code sketch follows these steps):
First, smooth the input image with a Gaussian function:
I(σ_i) = I ⊗ G(σ_i)
where I is the original input image, G(σ_i) is a Gaussian function, I(σ_i) is the smoothed image, and ⊗ is the convolution operator;
Second, compute the absolute luminance difference between each pixel and its small surrounding neighborhood with the difference-of-Gaussian function. As the value of σ_i increases, the smoothed image becomes more blurred, and each central pixel incorporates information from more of its surrounding pixels. For different σ_i values, the difference-of-Gaussian function is:
DoG(x, y, σ_1, σ_2) = |I(x, y, σ_1) − I(x, y, σ_2)|    (A)
where I(x, y, σ_1) is the pixel of the image smoothed with the Gaussian convolution kernel G1 corresponding to the first variance σ_1, and I(x, y, σ_2) is the pixel of the image smoothed with the Gaussian function of the second variance σ_2; here σ_1 = 1 and σ_1 ≠ σ_2, and DoG(x, y, σ_1, σ_2) is the value of the difference-of-Gaussian function at (x, y);
Third, when the difference-of-Gaussian function is used to detect target objects in the image, its effectiveness depends on the value of σ_2 and on the size of the target object: for a small target object, a small σ_2 gives better local contrast detection, while for a large target object, a large σ_2 gives better detection. Without any prior knowledge of the target objects in the image, take the minimum and the maximum σ_2 values, yielding two Gaussian convolution kernels G21 and G22, and compute two luminance difference maps DoG1 and DoG2 according to formula (A);
Finally, merge the two luminance difference maps DoG1 and DoG2 with the nonlinear normalization merging method to obtain the local contrast.
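The following is a minimal Python sketch of this local-contrast step (not part of the patent): it assumes a grayscale image given as a NumPy array, fixes σ_1 = 1 as in the description, and uses two illustrative σ_2 values standing in for the minimum and maximum σ_2 mentioned above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_luminance_contrast_maps(gray, sigma1=1.0, sigma2_small=2.0, sigma2_large=8.0):
    """Difference-of-Gaussian maps DoG1 and DoG2 of formula (A).

    sigma1 is fixed to 1 as in the description; sigma2_small and
    sigma2_large are illustrative stand-ins for the minimum and maximum
    sigma2 values (kernels G21 and G22).
    """
    gray = gray.astype(np.float64)
    base = gaussian_filter(gray, sigma1)                        # I(x, y, sigma1)
    dog1 = np.abs(base - gaussian_filter(gray, sigma2_small))   # DoG1
    dog2 = np.abs(base - gaussian_filter(gray, sigma2_large))   # DoG2
    return dog1, dog2   # to be fused by the nonlinear normalization merge (step 2)
```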
(1b) Compute the global luminance contrast
When the luminance of a region stands out from the rest of the image, it inevitably attracts the observer's attention. The global luminance contrast represents the contrast between the luminance of each pixel and that of the whole image, and is computed as follows:
G_contrast(x, y) = |L_m(x, y) − L_M| / (L_m(x, y) + L_M)
where L_m(x, y) is the average gray value of the 7 × 7 neighborhood centered at the point (x, y), and L_M is the average gray value of the whole image.
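A minimal sketch of this global-contrast formula (not part of the patent; the small eps term only guards against division by zero and is not in the original):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_luminance_contrast(gray, eps=1e-9):
    """G_contrast(x, y) = |L_m(x, y) - L_M| / (L_m(x, y) + L_M)."""
    gray = gray.astype(np.float64)
    l_m = uniform_filter(gray, size=7)   # 7x7 neighborhood mean L_m(x, y)
    l_big = gray.mean()                  # whole-image mean L_M
    return np.abs(l_m - l_big) / (l_m + l_big + eps)
```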
(1c) Compute the edge region
Edge regions are regions of the image to which the human visual system is highly sensitive. The Canny operator is usually adopted to extract the edge features of the image, with the threshold set to 0.5.
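A small sketch of this edge step (not part of the patent). It uses skimage.feature.canny and interprets, as an assumption, the 0.5 threshold as the high hysteresis threshold on an image normalized to [0, 1]; the low threshold of 0.2 is an arbitrary illustrative choice.

```python
import numpy as np
from skimage import feature

def edge_feature_map(gray):
    """Canny edge map returned as a binary (0/1) feature map."""
    gray = gray.astype(np.float64)
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-9)   # normalize to [0, 1]
    edges = feature.canny(gray, sigma=1.0,
                          low_threshold=0.2, high_threshold=0.5)
    return edges.astype(np.float64)
```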
Step 2: build the contrast map of the input image.
The contrast map of the input image is built with the global nonlinear normalization merging algorithm, which globally promotes feature maps that contain only a few strong peaks and globally suppresses feature maps that contain peaks of similar magnitude throughout the image. Its concrete steps are as follows (a code sketch follows these steps):
(2a) normalize the local luminance contrast map, the global luminance contrast map and the edge map computed in steps (1a), (1b) and (1c) to the same dynamic range (0-1);
(2b) for each of the three maps, find its global maximum M and compute the mean m̄ of all of its other local maxima;
(2c) multiply each of the three maps globally by (M − m̄)², obtaining three normalized maps;
(2d) add the three normalized maps together to obtain the contrast map.
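A minimal sketch of this merging step (not part of the patent). The neighborhood size used to detect local maxima is not specified in the description, so local_size below is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def nonlinear_normalize(fmap, local_size=15):
    """Steps (2a)-(2c): rescale to [0, 1], then multiply by (M - m_bar)^2,
    where M is the global maximum and m_bar the mean of the other local maxima."""
    fmap = fmap.astype(np.float64)
    fmap = (fmap - fmap.min()) / (np.ptp(fmap) + 1e-9)              # (2a)
    peaks = (fmap == maximum_filter(fmap, size=local_size)) & (fmap > 0)
    big_m = fmap.max()
    others = fmap[peaks & (fmap < big_m)]
    m_bar = others.mean() if others.size else 0.0
    return fmap * (big_m - m_bar) ** 2                              # (2b)-(2c)

def contrast_map(local_c, global_c, edges):
    """Step (2d): sum of the three normalized feature maps."""
    return (nonlinear_normalize(local_c)
            + nonlinear_normalize(global_c)
            + nonlinear_normalize(edges))
```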
Step 3: build the weighting map of the input image.
The human visual system pays more attention to the central area of the input image, so the weights of pixels in the central area of the input image are large, and the weights of pixels in its edge region are small. The concrete formula is as follows (a code sketch follows the formula):
W_center(x, y) = 1, if (x, y) ∈ center; otherwise W_center(x, y) = (1 + cos(πr/R)) / 2
where 'center' denotes the central region of the image, r is the distance from the point (x, y) to the central region of the input image, and R is the distance from the image edge to the image center.
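A minimal sketch of this weighting map (not part of the patent). The extent of the central region and the exact edge-to-center distance R are not fully specified in the text available here, so center_frac and the corner-based R below are reconstructions and assumptions.

```python
import numpy as np

def center_weight_map(height, width, center_frac=1.0 / 3.0):
    """W_center: weight 1 inside a central rectangle, raised-cosine falloff outside."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    hy, hx = center_frac * height / 2.0, center_frac * width / 2.0
    yy, xx = np.mgrid[0:height, 0:width].astype(np.float64)
    # distance r from each pixel to the central rectangle (0 inside it)
    dy = np.maximum(np.abs(yy - cy) - hy, 0.0)
    dx = np.maximum(np.abs(xx - cx) - hx, 0.0)
    r = np.hypot(dy, dx)
    big_r = np.hypot(cy, cx)                       # assumed edge-to-center distance R
    w = (1.0 + np.cos(np.pi * np.minimum(r / big_r, 1.0))) / 2.0
    w[r == 0] = 1.0                                # (x, y) inside the central region
    return w
```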
Step 4: build the saliency map of the input image.
From the contrast map and the weighting map of the input image obtained in steps 2 and 3 above, the saliency map corresponding to the image is built as:
SM(x, y) = CM(x, y) × W_center(x, y)
where CM is the contrast map of the input image and W_center is the weighting map of the input image.
The brightness value of each point in the saliency map represents the sensitivity of the corresponding image pixel: the larger the brightness value of a point in the saliency map, the higher the saliency of that point in the original image, and the more attention it attracts from the visual system.
Step 5: extract the regions of interest of the input image (a code sketch covering steps 4 and 5 follows).
(5a) Based on the property that brighter locations of the saliency map attract more attention from the visual system, set a threshold to extract the regions of the saliency map with high brightness values:
ROI(x, y) = 1, if SM(x, y) ≥ T; ROI(x, y) = 0, otherwise    (B)
where ROI is the binarized map of the input image, 1 denotes a valid region and 0 denotes an invalid region; T is the segmentation threshold, set here to half of the maximum value of the saliency map SM.
(5b) Apply morphological processing to the binary map obtained from formula (B) to remove small interference regions caused by noise; in the processed binary map, the regions whose value is 1 are the regions of interest.
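A minimal sketch tying steps 4 and 5 together (not part of the patent). T is half the maximum of the saliency map, as in step (5a); the 3 × 3 binary opening is an illustrative choice, since the text only says "morphological processing".

```python
import numpy as np
from scipy.ndimage import binary_opening

def extract_regions_of_interest(cm, w_center):
    """Step 4: SM = CM x W_center; step 5: threshold (B) and morphological cleanup."""
    sm = cm * w_center                                   # saliency map
    roi = sm >= 0.5 * sm.max()                           # formula (B), T = max(SM) / 2
    roi = binary_opening(roi, structure=np.ones((3, 3), dtype=bool))
    return sm, roi                                       # True pixels mark regions of interest
```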

Claims (3)

1. A method for automatically extracting image regions of interest based on the human visual attention system, comprising the following steps:
(1) computing, for each pixel of the input image, the three low-level feature factors: local luminance contrast, global luminance contrast and edge;
(2) fusing the three low-level feature factors of the input image with a global nonlinear normalization merging algorithm to generate a contrast map;
(3) computing the weight of each pixel of the input image according to the position of the pixel in the image, obtaining the weighting map corresponding to the input image;
(4) generating the saliency map of the input image from the contrast map and the weighting map;
(5) determining the regions of interest of the input image according to the indication of the saliency map.
2. The method of claim 1, wherein the calculation of the local luminance contrast in step (1) computes the luminance difference between each pixel and its small surrounding neighborhood, with the following steps:
(2a) smoothing the input image with a Gaussian function:
I(σ_i) = I ⊗ G(σ_i)
where I is the original input image, G(σ_i) is a Gaussian function, and I(σ_i) is the smoothed image;
(2b) computing the absolute luminance difference between each pixel and its small surrounding neighborhood with the difference-of-Gaussian function:
DoG(x, y, σ_1, σ_2) = |I(x, y, σ_1) − I(x, y, σ_2)|
where the value of σ_i determines the degree of association between the central pixel and the pixels in its small surrounding neighborhood; here σ_1 = 1,
I(x, y, σ_1) is the pixel of the image smoothed by the Gaussian function with the first variance σ_1,
I(x, y, σ_2) is the pixel of the image smoothed by the Gaussian function with the second variance σ_2, and DoG(x, y, σ_1, σ_2) is the local luminance contrast value;
(2c) taking two different σ_2 values to obtain two luminance difference maps, DoG1 and DoG2, and merging the two maps with the global nonlinear normalization fusion method to obtain the local luminance contrast map.
3. The method according to claim 1, wherein in step (3) the weight of each pixel of the input image is computed from the position of the pixel in the image according to the following formula:
W_center(x, y) = 1, if (x, y) ∈ center; otherwise W_center(x, y) = (1 + cos(πr/R)) / 2
where 'center' denotes the central region of the image, r is the distance from the point (x, y) to the central region of the input image, and R is the distance from the image edge to the image center.
CN2009100221910A 2009-04-24 2009-04-24 Image region-of-interest automatic extraction method based on human visual attention system Expired - Fee Related CN101533512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100221910A CN101533512B (en) 2009-04-24 2009-04-24 Image region-of-interest automatic extraction method based on human visual attention system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100221910A CN101533512B (en) 2009-04-24 2009-04-24 Image region-of-interest automatic extraction method based on human visual attention system

Publications (2)

Publication Number Publication Date
CN101533512A true CN101533512A (en) 2009-09-16
CN101533512B CN101533512B (en) 2012-05-09

Family

ID=41104091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100221910A Expired - Fee Related CN101533512B (en) 2009-04-24 2009-04-24 Image region-of-interest automatic extraction method based on human visual attention system

Country Status (1)

Country Link
CN (1) CN101533512B (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807300A (en) * 2010-03-05 2010-08-18 北京智安邦科技有限公司 Target fragment region merging method and device
CN101847264A (en) * 2010-05-28 2010-09-29 北京大学 Image interested object automatic retrieving method and system based on complementary significant degree image
CN102005057A (en) * 2010-11-17 2011-04-06 中国科学院声学研究所 Method for detecting region of interest of color image
CN102036073A (en) * 2010-12-21 2011-04-27 西安交通大学 Method for encoding and decoding JPEG2000 image based on vision potential attention target area
CN102063623A (en) * 2010-12-28 2011-05-18 中南大学 Method for extracting image region of interest by combining bottom-up and top-down ways
CN102158712A (en) * 2011-03-22 2011-08-17 宁波大学 Multi-viewpoint video signal coding method based on vision
CN102509099A (en) * 2011-10-21 2012-06-20 清华大学深圳研究生院 Detection method for image salient region
CN102509299A (en) * 2011-11-17 2012-06-20 西安电子科技大学 Image salient area detection method based on visual attention mechanism
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN102568016A (en) * 2012-01-03 2012-07-11 西安电子科技大学 Compressive sensing image target reconstruction method based on visual attention
CN102687140A (en) * 2009-12-30 2012-09-19 诺基亚公司 Methods and apparatuses for facilitating content-based image retrieval
WO2012122682A1 (en) * 2011-03-15 2012-09-20 清华大学 Method for calculating image visual saliency based on color histogram and overall contrast
WO2012162878A1 (en) * 2011-05-30 2012-12-06 Technicolor (China) Technology Co., Ltd. Method and device for determining saliency value of current block of image
CN102855025A (en) * 2011-12-08 2013-01-02 西南科技大学 Optical multi-touch contact detection method based on visual attention model
CN103384848A (en) * 2011-02-21 2013-11-06 埃西勒国际通用光学公司 Method for determining at least one geometric/physiognomic parameter associated with the mounting of an ophthalmic lens in a spectacle frame worn by a user
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system used for obtaining super-resolution images with optimized visual experience
CN104079934A (en) * 2014-07-14 2014-10-01 武汉大学 Method for extracting regions of interest in real-time video communication
CN104105441A (en) * 2012-02-13 2014-10-15 株式会社日立制作所 Region extraction system
CN104658004A (en) * 2013-11-20 2015-05-27 南京中观软件技术有限公司 Video image-based air refueling auxiliary cohesion method
CN106828460A (en) * 2017-03-02 2017-06-13 深圳明创自控技术有限公司 A kind of safe full-automatic pilot for prevention of car collision
CN104156938B (en) * 2013-05-14 2017-08-11 五邑大学 A kind of image connectivity region description method and its application process in image registration
CN108960247A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Image significance detection method, device and electronic equipment
CN109242869A (en) * 2018-09-21 2019-01-18 科大讯飞股份有限公司 A kind of image instance dividing method, device, equipment and storage medium
CN109615635A (en) * 2018-12-06 2019-04-12 厦门理工学院 The method and device of quality sorting is carried out to strawberry based on image recognition
CN110267041A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image encoding method, device, electronic equipment and computer readable storage medium
CN111414904A (en) * 2019-01-08 2020-07-14 北京地平线机器人技术研发有限公司 Method and apparatus for processing region of interest data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088473A (en) * 1998-02-23 2000-07-11 Arch Development Corporation Method and computer readable medium for automated analysis of chest radiograph images using histograms of edge gradients for false positive reduction in lung nodule detection
US7970206B2 (en) * 2006-12-13 2011-06-28 Adobe Systems Incorporated Method and system for dynamic, luminance-based color contrasting in a region of interest in a graphic image
JP4525719B2 (en) * 2007-08-31 2010-08-18 カシオ計算機株式会社 Gradation correction apparatus, gradation correction method, and program
CN101282479B (en) * 2008-05-06 2011-01-19 武汉大学 Method for encoding and decoding airspace with adjustable resolution based on interesting area

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102687140B (en) * 2009-12-30 2016-03-16 诺基亚技术有限公司 For contributing to the method and apparatus of CBIR
CN102687140A (en) * 2009-12-30 2012-09-19 诺基亚公司 Methods and apparatuses for facilitating content-based image retrieval
CN101807300B (en) * 2010-03-05 2012-07-25 北京智安邦科技有限公司 Target fragment region merging method and device
CN101807300A (en) * 2010-03-05 2010-08-18 北京智安邦科技有限公司 Target fragment region merging method and device
CN101847264A (en) * 2010-05-28 2010-09-29 北京大学 Image interested object automatic retrieving method and system based on complementary significant degree image
CN101847264B (en) * 2010-05-28 2012-07-25 北京大学 Image interested object automatic retrieving method and system based on complementary significant degree image
CN102005057B (en) * 2010-11-17 2012-07-25 中国科学院声学研究所 Method for detecting region of interest of color image
CN102005057A (en) * 2010-11-17 2011-04-06 中国科学院声学研究所 Method for detecting region of interest of color image
CN102036073A (en) * 2010-12-21 2011-04-27 西安交通大学 Method for encoding and decoding JPEG2000 image based on vision potential attention target area
CN102036073B (en) * 2010-12-21 2012-11-28 西安交通大学 Method for encoding and decoding JPEG2000 image based on vision potential attention target area
CN102063623A (en) * 2010-12-28 2011-05-18 中南大学 Method for extracting image region of interest by combining bottom-up and top-down ways
CN103384848B (en) * 2011-02-21 2016-02-10 埃西勒国际通用光学公司 For determining the method for at least one geometry/appearance parameter that the installation of the lens in the spectacle frame worn with user is associated
CN103384848A (en) * 2011-02-21 2013-11-06 埃西勒国际通用光学公司 Method for determining at least one geometric/physiognomic parameter associated with the mounting of an ophthalmic lens in a spectacle frame worn by a user
WO2012122682A1 (en) * 2011-03-15 2012-09-20 清华大学 Method for calculating image visual saliency based on color histogram and overall contrast
CN102158712A (en) * 2011-03-22 2011-08-17 宁波大学 Multi-viewpoint video signal coding method based on vision
WO2012162878A1 (en) * 2011-05-30 2012-12-06 Technicolor (China) Technology Co., Ltd. Method and device for determining saliency value of current block of image
CN102509099A (en) * 2011-10-21 2012-06-20 清华大学深圳研究生院 Detection method for image salient region
CN102509099B (en) * 2011-10-21 2013-02-27 清华大学深圳研究生院 Detection method for image salient region
CN102509299B (en) * 2011-11-17 2014-08-06 西安电子科技大学 Image salient area detection method based on visual attention mechanism
CN102509299A (en) * 2011-11-17 2012-06-20 西安电子科技大学 Image salient area detection method based on visual attention mechanism
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN102521595B (en) * 2011-12-07 2014-01-15 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN102855025A (en) * 2011-12-08 2013-01-02 西南科技大学 Optical multi-touch contact detection method based on visual attention model
CN102855025B (en) * 2011-12-08 2015-06-17 西南科技大学 Optical multi-touch contact detection method based on visual attention model
CN102568016A (en) * 2012-01-03 2012-07-11 西安电子科技大学 Compressive sensing image target reconstruction method based on visual attention
CN104105441A (en) * 2012-02-13 2014-10-15 株式会社日立制作所 Region extraction system
CN104156938B (en) * 2013-05-14 2017-08-11 五邑大学 A kind of image connectivity region description method and its application process in image registration
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system used for obtaining super-resolution images with optimized visual experience
CN103514580B (en) * 2013-09-26 2016-06-08 香港应用科技研究院有限公司 For obtaining the method and system of the super-resolution image that visual experience optimizes
CN104658004A (en) * 2013-11-20 2015-05-27 南京中观软件技术有限公司 Video image-based air refueling auxiliary cohesion method
CN104658004B (en) * 2013-11-20 2018-05-15 南京中观软件技术有限公司 A kind of air refuelling auxiliary marching method based on video image
CN104079934A (en) * 2014-07-14 2014-10-01 武汉大学 Method for extracting regions of interest in real-time video communication
CN106828460A (en) * 2017-03-02 2017-06-13 深圳明创自控技术有限公司 A kind of safe full-automatic pilot for prevention of car collision
CN108960247A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Image significance detection method, device and electronic equipment
CN109242869A (en) * 2018-09-21 2019-01-18 科大讯飞股份有限公司 A kind of image instance dividing method, device, equipment and storage medium
CN109242869B (en) * 2018-09-21 2021-02-02 安徽科大讯飞医疗信息技术有限公司 Image instance segmentation method, device, equipment and storage medium
CN109615635A (en) * 2018-12-06 2019-04-12 厦门理工学院 The method and device of quality sorting is carried out to strawberry based on image recognition
CN111414904A (en) * 2019-01-08 2020-07-14 北京地平线机器人技术研发有限公司 Method and apparatus for processing region of interest data
CN111414904B (en) * 2019-01-08 2023-12-01 北京地平线机器人技术研发有限公司 Method and device for processing data of region of interest
CN110267041A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image encoding method, device, electronic equipment and computer readable storage medium
US11095902B2 (en) 2019-06-28 2021-08-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for image coding, electronic device and computer-readable storage medium

Also Published As

Publication number Publication date
CN101533512B (en) 2012-05-09

Similar Documents

Publication Publication Date Title
CN101533512B (en) Image region-of-interest automatic extraction method based on human visual attention system
Beeravolu et al. Preprocessing of breast cancer images to create datasets for deep-CNN
EP3455782B1 (en) System and method for detecting plant diseases
EP1693782B1 (en) Method for facial features detection
Kumar et al. Review of lane detection and tracking algorithms in advanced driver assistance system
Shih et al. Automatic extraction of head and face boundaries and facial features
CN103902977B (en) Face identification method and device based on Gabor binary patterns
WO2019114145A1 (en) Head count detection method and device in surveillance video
CN105518709A (en) Method, system and computer program product for identifying human face
CN109413411B (en) Black screen identification method and device of monitoring line and server
CN110415208A (en) A kind of adaptive targets detection method and its device, equipment, storage medium
CN107977639A (en) A kind of face definition judgment method
WO2019184851A1 (en) Image processing method and apparatus, and training method for neural network model
Liu et al. Infrared ship target segmentation through integration of multiple feature maps
US20210233245A1 (en) Computer-implemented method of detecting foreign object on background object in an image, apparatus for detecting foreign object on background object in an image, and computer-program product
Dhar et al. An efficient real time moving object detection method for video surveillance system
CN104318216A (en) Method for recognizing and matching pedestrian targets across blind area in video surveillance
Gao et al. Agricultural image target segmentation based on fuzzy set
Lin et al. Robust license plate detection using image saliency
CN102156879B (en) Human target matching method based on weighted terrestrial motion distance
CN106446920B (en) A kind of stroke width transform method based on gradient amplitude constraint
CN105654090A (en) Pedestrian contour detection method based on curve volatility description
WenJuan et al. A real-time lip localization and tacking for lip reading
Arévalo et al. Detecting shadows in QuickBird satellite images
Ayoub et al. Visual saliency detection based on color frequency features under Bayesian framework

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120509

Termination date: 20190424

CF01 Termination of patent right due to non-payment of annual fee