CN107527357A - Face detection, positioning and real-time tracking method in violent scenes - Google Patents
Face detection, positioning and real-time tracking method in violent scenes
- Publication number
- CN107527357A CN107527357A CN201710718630.6A CN201710718630A CN107527357A CN 107527357 A CN107527357 A CN 107527357A CN 201710718630 A CN201710718630 A CN 201710718630A CN 107527357 A CN107527357 A CN 107527357A
- Authority
- CN
- China
- Prior art keywords
- video frame
- face
- violent scene
- violent
- monitoring video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a face detection, positioning and real-time tracking method in violent scenes. The method includes: obtaining the optical flow vector magnitude of each pixel in a monitoring video frame; judging from the optical flow vector magnitudes whether the monitoring video frame is a violent scene; calculating the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame from the skin color mean and skin color variance in YCbCr space; calculating the center position O_{m+1} of the face tracking area in subsequent violent-scene monitoring video frames from O_0; and finally obtaining the face tracking area of the (m+1)-th violent-scene video frame from O_{m+1}, thereby realizing face detection, positioning and real-time tracking in violent scenes.
Description
Technical Field
The invention relates to the field of video and image processing, in particular to a face detection, positioning and real-time tracking method in violent scenes.
Background
Face detection, positioning and real-time tracking in violent scenes has important applications in crime-monitoring settings such as residential access control systems and shopping mall video systems. At present, frontal face detection under near-ideal conditions has achieved satisfactory results. Under complex backgrounds, however, factors such as multiple poses, occlusion and illumination keep the success rate of face detection and real-time tracking low.
Current mainstream recognition methods (such as LGBP, neural networks and PCA) perform face recognition on static images; they cannot be applied to face detection, positioning and tracking in violent scenes in video monitoring, and they lack a mechanism for tracking multiple objects in real time, so their practicability is limited. During violent body motion (such as shoving and swinging), the face vibrates strongly and its pose characteristics change, which reduces detection and tracking accuracy.
Disclosure of Invention
To solve the prior-art problem that faces in violent scenes cannot be recognized, the invention provides a face detection, positioning and real-time tracking method in violent scenes that detects, positions and tracks faces in such scenes accurately and with strong real-time performance.
In order to achieve the above object, the present invention provides a face detection, positioning and real-time tracking method in a violent scene, which comprises the following steps:
step one, acquiring the optical flow vector magnitude of each pixel in a monitoring video frame:

m_{i,j,t} = sqrt(u_{i,j,t}^2 + v_{i,j,t}^2)   (1)

where (u_{i,j,t}, v_{i,j,t}) is the optical flow of pixel p(i,j,t), (i,j) is the position of the pixel within the monitoring video frame, and t is the video frame sequence index;
step two, judging the violent scene: the monitoring video frame is characterized as a violent-scene monitoring video frame when the following condition is met:

(1/N) · Σ_{i,j} m_{i,j,t} > Th   (2)

where Th is the optical flow judgment threshold and N is the number of pixels in one frame of image;
step three, converting the violent-scene monitoring video frame from RGB space to YCbCr color space and establishing an image color histogram;
step four, calculating the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame according to the skin color mean and skin color variance in YCbCr space;
step five, according to the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame, obtaining the center position O_{m+1} of the face tracking area in each subsequent violent-scene monitoring video frame by the following formula:

where k = 1…L, m = 0…t, L is the number of histogram bins, t is the video frame sequence index, O_m is the center position of the face tracking area in the current violent-scene monitoring video frame, O_{m+1} is the center position of the face tracking area in the next violent-scene monitoring video frame, I'(i,j) is the Cb value of the pixels in the violent-scene monitoring video frame, δ is the Dirac function, c(i,j) is the index of the histogram bin containing pixel p(i,j,t), N1 is the number of pixels in the vertical direction of the image, and N2 is the number of pixels in the horizontal direction of the image;
step six, taking O_{m+1} as the center, obtaining the critical pixels of the face tracking area with an edge detection operator, and performing curve fitting on the critical pixels to form the face tracking area of the (m+1)-th violent-scene video frame, until the violent-scene judgment condition is no longer met.
According to one embodiment of the present invention, the optical flow (u_{i,j,t}, v_{i,j,t}) is calculated in a difference manner, with the calculation based on the optical flow constraint equation:

(∂I/∂x)·u_{i,j,t} + (∂I/∂y)·v_{i,j,t} + ∂I/∂t = 0

where I is the image brightness of pixel p(i,j,t), ∂I/∂x is the horizontal brightness gradient, ∂I/∂y is the vertical brightness gradient, and ∂I/∂t is the brightness gradient along the time axis.
According to an embodiment of the invention, the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame is determined by the following steps:
first, the difference S(I(i,j)) between the Cb value of each pixel in the initial violent-scene monitoring video frame and the skin color is calculated:

where μ is the skin color mean and σ is the skin color variance;

secondly, the face area is judged: when S(I(i,j)) > Th_skin, pixel p(i,j,t) is characterized as belonging to the face area, where Th_skin is the skin color difference threshold;

and finally, the face area is extracted: critical pixels are extracted according to the relation between S(I(i,j)) and Th_skin and curve-fitted to form the face tracking area in the initial violent-scene monitoring video frame, giving the center position O_0 of the face tracking area in that frame.
According to an embodiment of the present invention, the following formula is used to convert the surveillance video frame from the RGB space to the YCbCr color space:
Y=0.257*R+0.504*G+0.098*B+16 (7)
Cb=-0.148*R-0.291*G+0.439*B+128 (8)
Cr=0.439*R-0.368*G-0.071*B+128 (9)
where Y represents luminance, Cb reflects the difference between the blue component and luminance of the RGB input, and Cr reflects the difference between the red component and luminance of the RGB input.
According to an embodiment of the present invention, an elliptical curve fitting manner is adopted to perform curve fitting on the critical pixels so as to form a face tracking area.
According to an embodiment of the present invention, in step six, the edge detection operator is any one of a local difference operator, a Sobel operator, or a Canny operator.
In conclusion, the face detection, positioning and real-time tracking method in violent scenes provided by the invention uses an improved optical flow method and curve fitting to detect, position and track faces in real time, solving the problem that human body motion makes faces difficult to position and track accurately. Furthermore, the method simplifies the algorithm as far as possible while still meeting the required detection, positioning and tracking precision. The algorithm greatly improves the real-time performance of positioning and tracking and has a certain robustness to face deformation; it is easy to implement on embedded systems such as ARM microcontrollers or single-chip microcomputers and is well compatible with existing access control monitoring systems.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a flowchart illustrating a method for detecting and locating a face in a violent scene and tracking the face in real time according to an embodiment of the present invention.
Detailed Description
As shown in FIG. 1, the face detection, positioning and real-time tracking method in violent scenes provided by this embodiment uses an improved optical flow method to detect faces in violent scenes. The method starts with step S10, acquiring the optical flow vector magnitude m_{i,j,t} of each pixel in a monitoring video frame.

Here (u_{i,j,t}, v_{i,j,t}) is the optical flow of pixel p(i,j,t), where (i,j) is the position of the pixel within the monitoring video frame and t is the video frame sequence index.
The optical flow (u_{i,j,t}, v_{i,j,t}) is calculated in a difference manner, with the calculation based on the optical flow constraint equation:

(∂I/∂x)·u_{i,j,t} + (∂I/∂y)·v_{i,j,t} + ∂I/∂t = 0

where I is the image brightness of pixel p(i,j,t), ∂I/∂x is the horizontal brightness gradient, ∂I/∂y is the vertical brightness gradient, and ∂I/∂t is the brightness gradient along the time axis.
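As an illustrative sketch only (not part of the original disclosure), the difference-mode gradients could be computed as follows in Python; the exact difference stencil (forward differences here) is an assumption, since the patent does not specify it:

```python
import numpy as np

def brightness_gradients(prev_gray, curr_gray):
    """Finite-difference estimates of the brightness gradients.

    A sketch of the 'difference manner' described above; the exact
    stencil (forward differences here) is an assumption.
    """
    I = curr_gray.astype(np.float32)
    Ix = np.zeros_like(I)
    Ix[:, :-1] = I[:, 1:] - I[:, :-1]          # horizontal gradient dI/dx
    Iy = np.zeros_like(I)
    Iy[:-1, :] = I[1:, :] - I[:-1, :]          # vertical gradient dI/dy
    It = I - prev_gray.astype(np.float32)      # temporal gradient dI/dt
    return Ix, Iy, It
```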
The traditional optical flow method is computed purely by differentiation: a dense optical flow field is obtained from the basic optical flow equation plus additional constraint conditions. The computational cost is very large and the real-time performance poor, so it is difficult to apply to fast-moving or shaking faces in violent scenes and cannot be used for face recognition and tracking there. This embodiment first replaces the differentiation of the traditional optical flow method with finite differences, and then uses the easily computed optical flow vector magnitude m_{i,j,t} as the judgment parameter for a violent scene. This gives high detection precision with a simple calculation, places low demands on the microprocessor of an access control system, and is compatible with the access control systems of residential communities or office buildings. Furthermore, extensive experiments show that violent-scene detection based on the optical flow vector magnitude is robust.
After the optical flow vector magnitude m_{i,j,t} is obtained, step S20 is executed: judging from m_{i,j,t} whether the current monitoring video frame is a violent scene.
When the optical flow vector magnitudes of the monitoring video frame satisfy formula (2), the frame is characterized as a violent-scene monitoring video frame, where Th is the optical flow judgment threshold and N is the number of pixels in one frame of image.
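For illustration, steps S10 and S20 could be sketched as follows. OpenCV's dense Farneback flow stands in for the patent's unspecified difference-based flow, and the threshold value TH_VIOLENCE is a placeholder, not a value from the disclosure:

```python
import cv2
import numpy as np

TH_VIOLENCE = 2.0  # placeholder for Th; the patent gives no concrete value

def is_violent_frame(prev_gray, curr_gray, th=TH_VIOLENCE):
    """Steps S10-S20: mean optical-flow-magnitude test for a violent scene."""
    # Dense optical flow (u, v) per pixel; Farneback is a stand-in here.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    m = np.sqrt(u ** 2 + v ** 2)     # optical flow vector magnitude m_{i,j,t}
    return float(m.mean()) > th      # (1/N) * sum m_{i,j,t} > Th, formula (2)
```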
When the current video frame is judged to be a violent-scene monitoring video frame, the faces in the violent scene need to be recognized. Step S30 is performed to convert the violent-scene monitoring video frame from RGB space to YCbCr color space and create an image color histogram.
The formula for converting the RGB space to the YCbCr color space is as follows:
Y=0.257*R+0.504*G+0.098*B+16 (7)
Cb=-0.148*R-0.291*G+0.439*B+128 (8)
Cr=0.439*R-0.368*G-0.071*B+128 (9)。
y denotes luminance, cb reflects the difference between the blue component and luminance of the RGB input, and Cr reflects the difference between the red component and luminance of the RGB input.
Step S40 is then executed: calculating the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame according to the skin color mean and skin color variance in YCbCr space. The specific calculation is as follows:
first, the difference S(I(i,j)) between the Cb value of each pixel in the initial violent-scene video frame and the skin color is calculated:

where μ is the skin color mean and σ is the skin color variance;

secondly, the face area is judged: when S(I(i,j)) > Th_skin, pixel p(i,j,t) is characterized as belonging to the face area, where Th_skin is the skin color difference threshold;

and finally, the face area is extracted: critical pixels are extracted according to the relation between S(I(i,j)) and Th_skin and curve-fitted to form the face tracking area in the initial violent-scene video frame, giving the center position O_0 of the face tracking area in that frame.
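A sketch of step S40 follows. The explicit formula for S(I(i,j)) was lost in extraction, so a Gaussian similarity on the Cb channel is assumed here, and the model parameters MU_SKIN, SIGMA_SKIN and TH_SKIN are placeholders to be estimated from training data:

```python
import cv2
import numpy as np

MU_SKIN, SIGMA_SKIN, TH_SKIN = 120.0, 10.0, 0.5  # assumed skin-model parameters

def initial_face_center(ycbcr):
    """Step S40: locate O_0 in the initial violent-scene frame from skin color."""
    cb = ycbcr[..., 1]
    # Assumed Gaussian skin similarity; the patent's exact S() is not shown.
    s = np.exp(-((cb - MU_SKIN) ** 2) / (2 * SIGMA_SKIN ** 2))
    mask = ((s > TH_SKIN) * 255).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if len(c) >= 5]  # fitEllipse needs 5+ points
    if not contours:
        return None
    face = max(contours, key=cv2.contourArea)        # largest skin blob
    (cx, cy), axes, angle = cv2.fitEllipse(face)     # elliptical face region
    return (cx, cy)                                  # center position O_0
```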
After the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame is acquired, step S50 is executed: according to O_0, the center position O_{m+1} of the face tracking area in each subsequent violent-scene monitoring video frame is obtained by the following formula:

where k = 1…L, m = 0…t, L is the number of histogram bins, t is the video frame sequence index, O_m is the center position of the face tracking area in the current violent-scene monitoring video frame, O_{m+1} is the center position of the face tracking area in the next violent-scene monitoring video frame, I'(i,j) is the Cb value of the pixels in the violent-scene monitoring video frame, δ is the Dirac function, c(i,j) is the index of the histogram bin containing pixel p(i,j,t), N1 is the number of pixels in the vertical direction of the image, and N2 is the number of pixels in the horizontal direction of the image.
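Since the closed-form update formula was lost in extraction, the sketch below substitutes OpenCV's mean-shift over a Cb back-projection, which follows the same histogram-driven idea of moving O_m to O_{m+1}; the histogram bin count and window size are assumptions:

```python
import cv2
import numpy as np

def cb_histogram(ycbcr, mask, bins=32):
    """Image color histogram of the Cb channel over the initial face mask."""
    cb = ycbcr[..., 1].astype(np.uint8)
    hist = cv2.calcHist([cb], [0], mask, [bins], [0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_center(ycbcr, center, hist, win=(80, 80)):
    """Step S50: shift O_m to O_{m+1} via the Cb histogram (mean-shift stand-in)."""
    cb = ycbcr[..., 1].astype(np.uint8)
    backproj = cv2.calcBackProject([cb], [0], hist, [0, 256], 1)
    x0 = int(center[0] - win[0] / 2)
    y0 = int(center[1] - win[1] / 2)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, (x, y, w, h) = cv2.meanShift(backproj, (x0, y0, win[0], win[1]), crit)
    return (x + w / 2.0, y + h / 2.0)                # center position O_{m+1}
```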
In step S60, taking O_{m+1} as the center, the critical pixels of the face tracking area are obtained with an edge detection operator and curve-fitted to form the face tracking area of the (m+1)-th violent-scene video frame. This is repeated until the violent-scene judgment condition, formula (2), is no longer met, completing the recognition of the faces in all violent-scene monitoring video frames.
The method provided by this embodiment obtains the center position of the face tracking area in the next violent-scene monitoring video frame from the center position in the previous one. This point-to-point calculation involves little computation, so the real-time performance of face tracking is greatly improved. The face tracking area is then formed with an edge detection operator according to the skin color of the face (skin color being an important facial feature), which gives good detection precision.
In this embodiment, the edge detection operator is a local difference operator. However, the present invention is not limited thereto. In other embodiments, other edge detection operators such as the Sobel operator or the Canny operator may be used.
The face is roughly elliptical and the parameters of an ellipse fit are relatively simple, so in this embodiment ellipse fitting is used to form the face tracking area in the violent-scene monitoring video frame. However, the present invention is not limited thereto; in other embodiments, other curves that match the face contour more accurately can be used. Higher curve-fitting accuracy also raises the required computation, and this embodiment uses ellipse fitting to balance fitting accuracy against computational load, so that the method suits access control systems with limited computing power.
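As a sketch of step S60, the following uses the Canny operator (one of the operators named in this embodiment) to find the critical pixels around O_{m+1} and fits an ellipse to them; the search window size is an assumption:

```python
import cv2
import numpy as np

def refine_face_region(gray, center, win=80):
    """Step S60: edge detection around O_{m+1}, then ellipse fitting."""
    x, y = int(center[0]), int(center[1])
    x0, y0 = max(x - win, 0), max(y - win, 0)
    roi = gray[y0:y + win, x0:x + win]
    edges = cv2.Canny(roi, 50, 150)                  # critical (edge) pixels
    pts = cv2.findNonZero(edges)
    if pts is None or len(pts) < 5:                  # fitEllipse needs 5+ points
        return None
    (cx, cy), axes, angle = cv2.fitEllipse(pts)
    # Map the fitted ellipse back to full-image coordinates.
    return (cx + x0, cy + y0), axes, angle
```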
In conclusion, the face detection, positioning and real-time tracking method in violent scenes provided by the invention uses an improved optical flow method and curve fitting to detect, position and track faces in real time, solving the problem that human body motion makes faces difficult to position and track accurately. Furthermore, the method simplifies the algorithm as far as possible while still meeting the required detection, positioning and tracking precision. The algorithm greatly improves the real-time performance of positioning and tracking and has a certain robustness to face deformation; it is easy to implement on embedded systems such as ARM microcontrollers or single-chip microcomputers and is well compatible with existing access control monitoring systems.
Although the present invention has been described with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
Claims (6)
1. A face detection, positioning and real-time tracking method in a violent scene, characterized by comprising the following steps:
step one, acquiring the optical flow vector magnitude of each pixel in a monitoring video frame:

m_{i,j,t} = sqrt(u_{i,j,t}^2 + v_{i,j,t}^2)   (1)

where (u_{i,j,t}, v_{i,j,t}) is the optical flow of pixel p(i,j,t), (i,j) is the position of the pixel within the monitoring video frame, and t is the video frame sequence index;
step two, judging the violent scene: the monitoring video frame is characterized as a violent-scene monitoring video frame when the following condition is met:

(1/N) · Σ_{i,j} m_{i,j,t} > Th   (2)

where Th is the optical flow judgment threshold and N is the number of pixels in one frame of image;
step three, converting the violent-scene monitoring video frame from RGB space to YCbCr color space and establishing an image color histogram;
step four, calculating the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame according to the skin color mean and skin color variance in YCbCr space;
step five, according to the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame, obtaining the center position O_{m+1} of the face tracking area in each subsequent violent-scene monitoring video frame by the following formula:

where k = 1…L, m = 0…t, L is the number of histogram bins, t is the video frame sequence index, O_m is the center position of the face tracking area in the current violent-scene monitoring video frame, O_{m+1} is the center position of the face tracking area in the next violent-scene monitoring video frame, I'(i,j) is the Cb value of the pixels in the violent-scene monitoring video frame, δ is the Dirac function, c(i,j) is the index of the histogram bin containing pixel p(i,j,t), N1 is the number of pixels in the vertical direction of the image, and N2 is the number of pixels in the horizontal direction of the image;
step six, taking O_{m+1} as the center, obtaining the critical pixels of the face tracking area with an edge detection operator, and performing curve fitting on the critical pixels to form the face tracking area of the (m+1)-th violent-scene video frame, until the violent-scene judgment condition is no longer met.
2. The face detection, positioning and real-time tracking method in a violent scene according to claim 1, characterized in that the optical flow (u_{i,j,t}, v_{i,j,t}) is calculated in a difference manner, with the calculation based on the optical flow constraint equation:

(∂I/∂x)·u_{i,j,t} + (∂I/∂y)·v_{i,j,t} + ∂I/∂t = 0

where I is the image brightness of pixel p(i,j,t), ∂I/∂x is the horizontal brightness gradient, ∂I/∂y is the vertical brightness gradient, and ∂I/∂t is the brightness gradient along the time axis.
3. The method of claim 1, wherein the center position O_0 of the face tracking area in the initial violent-scene monitoring video frame is determined by the following steps:
first, the difference S(I(i,j)) between the Cb value of each pixel in the initial violent-scene monitoring video frame and the skin color is calculated:

where μ is the skin color mean and σ is the skin color variance;

secondly, the face area is judged: when S(I(i,j)) > Th_skin, pixel p(i,j,t) is characterized as belonging to the face area, where Th_skin is the skin color difference threshold;

and finally, the face area is extracted: critical pixels are extracted according to the relation between S(I(i,j)) and Th_skin and curve-fitted to form the face tracking area in the initial violent-scene monitoring video frame, giving the center position O_0 of the face tracking area in that frame.
4. The method of claim 1, wherein the surveillance video frame is converted from RGB space to YCbCr color space using the following formula:
Y=0.257*R+0.504*G+0.098*B+16 (7)
Cb=-0.148*R-0.291*G+0.439*B+128 (8)
Cr=0.439*R-0.368*G-0.071*B+128 (9)
where Y represents luminance, cb reflects the difference between the blue component of the RGB input and the luminance, and Cr reflects the difference between the red component of the RGB input and the luminance.
5. The method for detecting, locating and tracking a face in a violent scene according to claim 1 or 3, wherein the curve fitting is performed on the critical pixels by means of ellipse fitting so as to form the face tracking area.
6. The method for detecting, locating and tracking the human face in the violent scene according to claim 1, wherein in the sixth step, the edge detection operator is any one of a local difference operator, a Sobel operator or a Canny operator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710718630.6A CN107527357B (en) | 2017-08-21 | 2017-08-21 | Face detection, positioning and real-time tracking method in violent scenes
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710718630.6A CN107527357B (en) | 2017-08-21 | 2017-08-21 | Face detection, positioning and real-time tracking method in violent scenes
Publications (2)
Publication Number | Publication Date |
---|---|
CN107527357A true CN107527357A (en) | 2017-12-29 |
CN107527357B CN107527357B (en) | 2019-11-22 |
Family
ID=60681585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710718630.6A Active CN107527357B (en) | 2017-08-21 | 2017-08-21 | Face datection positioning and method for real time tracking in Violent scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107527357B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109359551A (en) * | 2018-09-21 | 2019-02-19 | Nude picture detection method and system based on machine learning
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750527A (en) * | 2012-06-26 | 2012-10-24 | 浙江捷尚视觉科技有限公司 | Long-time stable human face detection and tracking method in bank scene and long-time stable human face detection and tracking device in bank scene |
US20150003686A1 (en) * | 2013-06-28 | 2015-01-01 | Hulu, LLC | Local Binary Pattern-based Optical Flow |
- 2017-08-21: Application CN201710718630.6A filed; granted as patent CN107527357B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750527A (en) * | 2012-06-26 | 2012-10-24 | 浙江捷尚视觉科技有限公司 | Long-time stable human face detection and tracking method in bank scene and long-time stable human face detection and tracking device in bank scene |
US20150003686A1 (en) * | 2013-06-28 | 2015-01-01 | Hulu, LLC | Local Binary Pattern-based Optical Flow |
Non-Patent Citations (2)
Title |
---|
REN C. LUO ET AL: "Alignment and tracking of facial features with component-based active appearance models and optical flow", 2011 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) *
CHENG Yuanhang: "Video face feature point tracking method based on optical flow", Computer and Modernization (计算机与现代化) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109359551A (en) * | 2018-09-21 | 2019-02-19 | Nude picture detection method and system based on machine learning |
Also Published As
Publication number | Publication date |
---|---|
CN107527357B (en) | 2019-11-22 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |