
CN105740862A - Pedestrian contour detection method based on macro feature point description - Google Patents

Pedestrian contour detection method based on macro feature point description

Info

Publication number
CN105740862A
CN105740862A
Authority
CN
China
Prior art keywords
image
macro
contour
pedestrian
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410586661.7A
Other languages
Chinese (zh)
Inventor
吕楠
张丽秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Abd Smart Eye Electronics Co Ltd
Original Assignee
Abd Smart Eye Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Abd Smart Eye Electronics Co Ltd filed Critical Abd Smart Eye Electronics Co Ltd
Priority to CN201410586661.7A priority Critical patent/CN105740862A/en
Publication of CN105740862A publication Critical patent/CN105740862A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a pedestrian contour detection method based on macro feature point description. The method comprises: S1, acquiring a video stream image of a monitored area as an input image; S2, performing edge detection on the input image with a Sobel operator to obtain an edge image; S3, obtaining a difference image from the input image by background subtraction, and binarizing the difference image to obtain a moving target region; S4, combining the edge image and the moving target region with an AND operation, and extracting their common part to obtain a moving target contour; S5, extracting a macro shape feature point set from the moving target contour; and S6, connecting all the macro shape feature points in the set in sequence according to their coordinate positions to form a pedestrian contour. By extracting a macro shape feature point set from the moving target contour, the method reduces the computing resources consumed during pedestrian contour detection and improves detection efficiency.

Description

Pedestrian contour detection method based on macro feature point description
Technical Field
The invention belongs to the technical field of video image processing and recognition, and particularly relates to a pedestrian contour detection method based on macro feature point description.
Background
In the management and decision-making of public places such as shopping malls, shopping centers, airports and stations, pedestrian flow is indispensable data. By counting pedestrian flow, i.e. the number of people entering and exiting, operations in public places can be effectively monitored and organized in real time, providing people with a safer environment and higher-quality service. Taking a shopping mall as an example, pedestrian flow is a basic and important index closely related to sales volume; if the accurate, real pedestrian flow is known, it can provide reliable reference information for sales, services and logistics.
For pedestrian detection technology, detection of the pedestrian contour is a very important link. In the prior art, pedestrian contour detection is mainly performed with monocular vision methods: a template is built from the human body edges and texture characteristics of pedestrians, and machine learning methods such as neural networks, support vector machines and cascade detectors are applied. However, the computation involved in prior-art pedestrian contour detection is complex and consumes considerable computing resources, causing delays during detection. There is therefore a need to improve prior-art pedestrian contour detection methods to resolve these drawbacks.
Disclosure of Invention
The invention aims to provide a pedestrian contour detection method based on macro feature point description, which reduces the consumption of computing resources during pedestrian contour extraction and improves pedestrian contour detection efficiency.
In order to achieve the above object, the present invention provides a pedestrian contour detection method based on macro feature point description, which includes the following steps:
S1, acquiring a video stream image of a monitored area as an input image;
S2, performing edge detection on the input image with a Sobel operator to obtain an edge image;
S3, obtaining a difference image from the input image by background subtraction, and binarizing the difference image to obtain a moving target region;
S4, combining the edge image and the moving target region with an AND operation, and extracting their common part to obtain a moving target contour;
S5, extracting the macro shape feature point set of the moving target contour;
S6, connecting all the macro feature points in the macro shape feature point set from step S5 in sequence according to their coordinate positions to form the pedestrian contour.
As a further improvement of the present invention, step S1 specifically comprises: acquiring a video stream image of a monitored area through a camera as the input image, the monitored area being located directly below the camera.
As a further improvement of the present invention, step S2 specifically comprises: performing edge detection on the input image obtained in step S1 with a 3 × 3 Sobel operator to obtain an edge image.
As a further improvement of the present invention, the background subtraction processing in step S3 specifically includes:
according to the input image obtained in step S1, the first frame of the scene, containing no moving objects, is extracted as the background image; a background difference operation is then performed between the current frame image and the background image to obtain a difference image, where the background difference operation is: Dk(x, y) = Fk(x, y) − B(x, y);
where B(x, y) is the gray value of a pixel in the background image, Fk(x, y) is the gray value of the same pixel in the current frame image, and Dk(x, y) is the resulting difference image.
As a further improvement of the present invention, the "macro shape feature point set" in step S5 is specifically: the set of non-noise pixel points on the moving object contour whose curvature changes most within a local range.
As a further improvement of the present invention, the step S5 specifically includes the following steps:
S51, extracting the micro shape of the moving target contour;
S52, extracting the macro shape and macro shape primitives of the moving target contour from the micro shape;
S53, extracting the macro shape feature point set of the moving target contour.
Compared with the prior art, the invention has the beneficial effects that: the macro shape feature point set is extracted from the moving target contour, so that the consumption of computing resources during pedestrian contour detection is reduced and pedestrian contour detection efficiency is improved.
Drawings
FIG. 1 is a schematic flow chart of a pedestrian contour detection method based on macro feature point description according to the present invention;
FIG. 2 is a schematic diagram of the operation of FIG. 1 to obtain video stream images of a monitored area;
FIG. 3a is a schematic diagram of the calculation of gradient values in the x direction by the Sobel operator;
FIG. 3b is a schematic diagram of the calculation of gradient values in the y direction by the Sobel operator;
FIG. 4 is a schematic diagram of the convolution and operation of an input image according to the present invention;
FIG. 5 is a diagram illustrating multi-target macro shape feature point set extraction according to the present invention;
fig. 6 is a schematic diagram of macro shape feature points of the moving object contour obtained by executing step S5;
fig. 7 is a schematic diagram of the pedestrian profile acquired by executing step S6.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a pedestrian contour detection method based on macro feature point description according to the present invention.
In the embodiment, the pedestrian contour detection method based on macro feature point description comprises the following steps:
and S1, acquiring a video stream image of the monitored area as an input image.
Referring to fig. 2, the pedestrian contour detection method based on macro feature point description is based on camera vertical shooting and is suitable for outdoor situations and indoor situations. In the present embodiment, the step S1 specifically includes: video stream images of a monitored area 30 are acquired as input images by the camera 10, the monitored area 30 being located directly below the camera 10.
Specifically, the camera 10 is disposed immediately above the vicinity of the doorway 20, and the pedestrian can walk back and forth in the doorway 20 in the direction of the arrow 201. The monitoring area 30 acquired by the camera 10 may completely cover the entire area of the doorway 20.
In the present embodiment, the monitoring area 30 is rectangular, but may be square, circular, or other shapes. The camera 10 is located directly above the centre point 301 of the monitored area 30, from which we can deduce that the monitored area 30 is located directly below the camera 10.
Next, step S2 is executed: edge detection is performed on the input image with a Sobel operator to obtain an edge image.
An edge is the part of an image where local brightness changes most significantly; edges mainly occur between objects, between an object and the background, and between regions. Edge detection is the most basic operation for detecting significant local changes in an image, and significant changes in gray value can be detected with a discrete approximation of the gradient.
Referring to figs. 3a, 3b and 4, let f(x, y) be the gray value of a pixel in a 256-level grayscale input image. The gradient value at that pixel is computed as shown in formula (1):
M(x, y) = √(Sx² + Sy²)    (1)
where M(x, y) is the gradient value at pixel (x, y), and Sx, Sy are the gradient values in the x and y directions computed with the Sobel operator. Fig. 3a illustrates the Sobel operator computing the gradient of pixel (x, y) along the x direction; fig. 3b illustrates the same along the y direction.
Sx and Sy are obtained by convolving the Sobel operator with the gray levels of the pixel's neighborhood as shown in fig. 4, where Zi (i = 1, 2, …, 9) denotes the gray values of the pixels in the eight-neighborhood of pixel (x, y). Sx and Sy are calculated as shown in formulas (2) and (3):
Sx = |  1   2   1 |     | Z1   Z2    Z3 |
     |  0   0   0 |  *  | Z4  (x,y)  Z6 |
     | -1  -2  -1 |     | Z7   Z8    Z9 |    (2)

Sy = |  1   0  -1 |     | Z1   Z2    Z3 |
     |  2   0  -2 |  *  | Z4  (x,y)  Z6 |
     |  1   0  -1 |     | Z7   Z8    Z9 |    (3)
the gradient values of the image at the pixel point (x, y) in the x and y directions are respectively shown in formula (4) and formula (5):
Sx=(Z1+2Z2+Z3)-(Z7+2Z8+Z9)(4)
Sy=(Z1+2Z4+Z7)-(Z3+2Z6+Z9)(5)
The Sobel operator is a classic operator in image processing, used mainly for edge detection. Technically, it is a discrete difference operator that computes an approximation of the gradient of the image brightness function. The operator comprises two 3 × 3 matrices giving the gradient values of a pixel in the x and y directions respectively; they are convolved with the neighborhood gray levels of each pixel in the input image as shown in fig. 4, and a suitable threshold K is then chosen to extract the edge image.
Specifically, the thresholding operation is shown in formula (6):
f(x, y) = 0, if M(x, y) < K;  f(x, y) = 1, if M(x, y) ≥ K    (6)
where the threshold K = 200.
When f(x, y) = 1, the pixel is an edge point of the input image.
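The edge-detection pipeline of formulas (1)–(6) can be sketched as follows. This is an illustrative Python/NumPy implementation, not code from the patent; the function name `sobel_edges` and the explicit per-pixel loop are our own choices.

```python
import numpy as np

def sobel_edges(img, k=200):
    """Step S2 sketch: 3x3 Sobel gradients (Eqs. 2-5), gradient magnitude
    (Eq. 1), and thresholding at K (Eq. 6). Border pixels stay 0."""
    img = img.astype(np.float64)
    # Kernels matching Eqs. (2) and (3): Sx = top rows minus bottom rows,
    # Sy = left columns minus right columns.
    kx = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=np.float64)
    ky = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=np.float64)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            sx = float(np.sum(kx * patch))
            sy = float(np.sum(ky * patch))
            m = (sx * sx + sy * sy) ** 0.5   # M(x, y), Eq. (1)
            edges[y, x] = 1 if m >= k else 0  # Eq. (6)
    return edges
```

In practice the same result is obtained faster with a vectorized convolution; the loop here simply mirrors the per-pixel description above.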
Next, step S3 is executed: a difference image is obtained from the input image by background subtraction, and the difference image is binarized to obtain the moving target region.
In this embodiment, the background subtraction specifically comprises: extracting the first frame of the scene, containing no moving objects, as the background image, then performing a background difference operation between the current frame image and the background image to obtain a difference image. The calculation formula of the background difference operation is shown in formula (7):
Dk(x, y) = Fk(x, y) − B(x, y)    (7)
where B(x, y) is the gray value of a pixel in the background image, Fk(x, y) is the gray value of the same pixel in the current frame image, and Dk(x, y) is the resulting difference image.
Then, the difference image is binarized; the binarization operation is shown in formula (8):
Rk(x, y) = 0, if Dk(x, y) < M;  Rk(x, y) = 1, if Dk(x, y) ≥ M    (8)
where Dk(x, y) is the difference image, Rk(x, y) is the binary image obtained after thresholding, and M is the segmentation threshold, here M = 40.
When Rk(x, y) = 0, the pixel is a background point; when Rk(x, y) = 1, the pixel is a foreground point, i.e. part of a moving object.
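Formulas (7) and (8) together reduce to a few NumPy lines. The helper name `motion_mask` is ours; note that the patent writes Eq. (7) as a plain signed difference, while many implementations use an absolute difference instead.

```python
import numpy as np

def motion_mask(frame, background, m=40):
    """Step S3 sketch: D_k = F_k - B (Eq. 7), then binarize with the
    segmentation threshold M = 40 (Eq. 8): 1 = foreground, 0 = background."""
    d = frame.astype(np.int32) - background.astype(np.int32)
    return (d >= m).astype(np.uint8)
```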
Then, step S4 is executed: the edge image and the moving target region are combined with an AND operation, and their common part is extracted to obtain the moving target contour.
The AND operation is logical multiplication: the result is 1 only when all participating logical variables are 1 simultaneously.
In the present embodiment, the edge image obtained in step S2 contains both background edges and the edges of moving objects, while the moving target region obtained in step S3 contains only moving object regions, without the background. ANDing the edge image with the moving target region extracts the common part of the two images, yielding the moving target contour.
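Step S4 is then an element-wise logical AND of the two binary images; a minimal sketch (helper name ours):

```python
import numpy as np

def moving_contour(edges, region):
    """Step S4 sketch: keep only edge pixels that also lie inside the
    moving-target region - the common part of the two binary images."""
    return np.logical_and(edges == 1, region == 1).astype(np.uint8)
```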
Next, step S5 is executed: the macro shape feature point set of the moving target contour is extracted.
The macro shape feature point set extraction technique simulates the human visual function: it detects the set of non-noise pixel points on the moving target contour whose curvature changes most within a local range, and this macro shape feature point set can be adopted as the descriptive feature of the moving target contour.
Referring to fig. 5, the extraction process of the macro shape feature point set in step S5 is as follows:
first, step S51 is executed to extract the micro shape of the contour of the moving object.
The micro shape of the moving target contour consists of all the fine link information of the contour; it is extracted as shown in formula (9):
Cr = OC(Rr) = L1 ∞ L2 ∞ … ∞ LMr    (9)
where OC(·) is the micro shape extraction operator, Rr is the moving target contour, and ∞ is a line-segment chaining symbol indicating the end-to-end connection of different line segments. Mr is the total number of micro shape primitives of the target, and Ln (n = 1, 2, 3, …, Mr) is a micro shape primitive with a start point and an end point. Each Ln is the largest set of consecutive, equal-direction boundary pixel points on Cr, and together the primitives cover the whole boundary point set of Cr. The angle θ between two primitives Li and Li+m is defined as shown in formula (10):
θ(Li, Li+m) = arccos[ d(Li) · d(Li+m) / (‖d(Li)‖₂ · ‖d(Li+m)‖₂) ]    (10)
wherein,in the formula (10), a product is a vector inner product operation sign.
Then, step S52 is executed to extract the macro shape and macro shape primitives of the moving target contour from the micro shape.
The macro shape of the moving target contour represents the target's appearance as observed macroscopically; it can be extracted from the micro shape obtained in step S51, as shown in formula (11):
MCr = OM(Cr) = ML1 ∞ ML2 ∞ … ∞ MLHr    (11)
where OM(·) is the macro shape extraction operator, MLn (n = 1, 2, …, Hr) is a macro shape primitive, and Hr is the number of macro shape primitives of the target.
Finally, step S53 is executed to extract a macro shape feature point set of the moving object contour.
The macro shape feature point set TDr of the moving target contour lies on the contour and consists of a series of key points that can describe the target's macro shape; it is detected from the macro shape by a detection algorithm, as expressed in formula (12):
TDr = OD(MCr) = {DPn | n = 1, 2, …, Dr}    (12)
where OD(·) is the macro shape feature point set extraction operator, DPn (n = 1, 2, …, Dr) is a macro feature point, and Dr is the number of macro feature points. Thus, the macro shape feature point sets of the contours of the R moving targets in the k-th frame image can be represented as an R × Dr dimensional feature matrix MatrixF, obtained as shown in formula (13):
MatrixF = [TD1 TD2 … TDR]^T = O(f(k, x, y))    (13)
where T denotes transposition, O(·) is a multi-target macro feature detection operator, and f(k, x, y) is the pixel value of the k-th frame image at coordinates (x, y).
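Formula (13) simply stacks the R per-target feature point sets into one matrix. A minimal NumPy sketch with invented placeholder values, assuming all targets yield the same number Dr of feature points:

```python
import numpy as np

# Hypothetical per-target macro feature point sets TD_r (values invented
# for illustration); R = 2 targets, D_r = 3 feature points each.
td1 = np.array([1, 2, 3])
td2 = np.array([4, 5, 6])
# Matrix_F = [TD_1 TD_2 ... TD_R]^T stacks them row-wise into R x D_r.
matrix_f = np.vstack([td1, td2])
```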
Finally, step S6 is executed: all the macro feature points in the macro shape feature point set from step S5 are connected in sequence according to their coordinate positions to form the pedestrian contour.
Referring to figs. 6 and 7, in the present embodiment, step S6 specifically comprises: connecting all the macro feature points in the obtained macro shape feature point set in sequence according to their coordinate positions to generate a convex closed image contour, which is taken as the detected pedestrian contour.
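Since the embodiment describes the result as a convex closed contour, the connection step behaves like taking the convex hull of the macro feature points. The patent names no algorithm, so the monotone-chain hull below is purely an illustrative stand-in:

```python
def convex_contour(points):
    """Step S6 sketch: order the macro feature points into a convex
    closed contour using Andrew's monotone-chain convex hull.
    `points` is a list of (x, y) tuples; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop each chain's last point (it starts the other chain).
    return lower[:-1] + upper[:-1]
```

Interior points such as noise pixels inside the silhouette are discarded automatically, which matches the "convex closed image contour" described above.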
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (6)

1. A pedestrian contour detection method based on macro feature point description is characterized by comprising the following steps:
S1, acquiring a video stream image of a monitored area as an input image;
S2, performing edge detection on the input image with a Sobel operator to obtain an edge image;
S3, obtaining a difference image from the input image by background subtraction, and binarizing the difference image to obtain a moving target region;
S4, combining the edge image and the moving target region with an AND operation, and extracting their common part to obtain a moving target contour;
S5, extracting the macro shape feature point set of the moving target contour;
S6, connecting all the macro feature points in the macro shape feature point set from step S5 in sequence according to their coordinate positions to form the pedestrian contour.
2. The pedestrian contour detection method according to claim 1, wherein step S1 specifically comprises: acquiring a video stream image of a monitored area through a camera as the input image, the monitored area being located directly below the camera.
3. The pedestrian contour detection method according to claim 1, wherein step S2 specifically comprises: performing edge detection on the input image obtained in step S1 with a 3 × 3 Sobel operator to obtain an edge image.
4. The pedestrian contour detection method according to claim 1, wherein the background subtraction processing in step S3 is specifically:
according to the input image obtained in step S1, the first frame of the scene, containing no moving objects, is extracted as the background image; a background difference operation is then performed between the current frame image and the background image to obtain a difference image, where the background difference operation is: Dk(x, y) = Fk(x, y) − B(x, y);
where B(x, y) is the gray value of a pixel in the background image, Fk(x, y) is the gray value of the same pixel in the current frame image, and Dk(x, y) is the resulting difference image.
5. The pedestrian contour detection method according to claim 1, wherein the "macro shape feature point set" in step S5 is specifically: the set of non-noise pixel points on the moving object contour whose curvature changes most within a local range.
6. The pedestrian contour detection method according to claim 1, wherein the step S5 specifically includes the steps of:
S51, extracting the micro shape of the moving target contour;
S52, extracting the macro shape and macro shape primitives of the moving target contour from the micro shape;
S53, extracting the macro shape feature point set of the moving target contour.
CN201410586661.7A 2014-10-27 2014-10-27 Pedestrian contour detection method based on macro feature point description Pending CN105740862A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410586661.7A CN105740862A (en) 2014-10-27 2014-10-27 Pedestrian contour detection method based on macro feature point description

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410586661.7A CN105740862A (en) 2014-10-27 2014-10-27 Pedestrian contour detection method based on macro feature point description

Publications (1)

Publication Number Publication Date
CN105740862A true CN105740862A (en) 2016-07-06

Family

ID=56236278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410586661.7A Pending CN105740862A (en) 2014-10-27 2014-10-27 Pedestrian contour detection method based on macro feature point description

Country Status (1)

Country Link
CN (1) CN105740862A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544573A (en) * 2017-09-21 2019-03-29 卡西欧计算机株式会社 Contour detecting device, printing equipment, profile testing method and recording medium
CN113538337A (en) * 2021-06-17 2021-10-22 杭州涂鸦信息技术有限公司 Detection method, detection device and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120229643A1 (en) * 2009-12-02 2012-09-13 Tata Consultancy Services Limited Cost-effective system and method for detecting, classifying and tracking the pedestrian using near infrared camera
CN102842045A (en) * 2012-08-03 2012-12-26 华侨大学 Pedestrian detection method based on combined features
CN102867349A (en) * 2012-08-20 2013-01-09 无锡慧眼电子科技有限公司 People counting method based on elliptical ring template matching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120229643A1 (en) * 2009-12-02 2012-09-13 Tata Consultancy Services Limited Cost-effective system and method for detecting, classifying and tracking the pedestrian using near infrared camera
CN102842045A (en) * 2012-08-03 2012-12-26 华侨大学 Pedestrian detection method based on combined features
CN102867349A (en) * 2012-08-20 2013-01-09 无锡慧眼电子科技有限公司 People counting method based on elliptical ring template matching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张文涛 (Zhang Wentao): "Video Multi-object Segmentation, Macro Feature Description and State Detection Technology", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544573A (en) * 2017-09-21 2019-03-29 卡西欧计算机株式会社 Contour detecting device, printing equipment, profile testing method and recording medium
CN109544573B (en) * 2017-09-21 2022-11-25 卡西欧计算机株式会社 Contour detection device, printing device, contour detection method, and recording medium
CN113538337A (en) * 2021-06-17 2021-10-22 杭州涂鸦信息技术有限公司 Detection method, detection device and computer readable storage medium
CN113538337B (en) * 2021-06-17 2024-02-27 杭州涂鸦信息技术有限公司 Detection method, detection device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN102867349B (en) People counting method based on elliptical ring template matching
Kumar et al. Review of lane detection and tracking algorithms in advanced driver assistance system
Santosh et al. Tracking multiple moving objects using gaussian mixture model
CN107330390B (en) People counting method based on image analysis and deep learning
CN107578418A (en) A kind of indoor scene profile testing method of confluent colours and depth information
Fei et al. Visual tracking based on improved foreground detection and perceptual hashing
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN103413303A (en) Infrared target segmentation method based on joint obviousness
CN101533512A (en) Method for automatically extracting interesting image regions based on human visual attention system
CN104574375A (en) Image significance detection method combining color and depth information
CN104392463A (en) Image salient region detection method based on joint sparse multi-scale fusion
CN107301376B (en) Pedestrian detection method based on deep learning multi-layer stimulation
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN105513105A (en) Image background blurring method based on saliency map
CN104966054B (en) Detection method of small target in unmanned plane visible images
CN104794435A (en) Unmanned aerial vehicle to ground moving target detection method based on vision
CN104933738A (en) Visual saliency map generation method based on local structure detection and contrast
CN103886324B (en) Scale adaptive target tracking method based on log likelihood image
CN105654090A (en) Pedestrian contour detection method based on curve volatility description
Károly et al. Optical flow-based segmentation of moving objects for mobile robot navigation using pre-trained deep learning models
CN105426928B (en) A kind of pedestrian detection method based on Haar feature and EOH feature
Ye et al. An improved algorithm for Harris corner detection
CN102156879B (en) Human target matching method based on weighted terrestrial motion distance
CN104504692B (en) The extracting method of notable object in image based on region contrast
CN105740862A (en) Pedestrian contour detection method based on macro feature point description

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160706
