CN108830240A - Fatigue driving state detection method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN108830240A (application number CN201810649253.XA; also published as CN 108830240 A)
- Authority
- CN
- China
- Prior art keywords
- driving state
- fatigue driving
- driver
- key point
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
Abstract
This application relates to a fatigue driving state detection method, apparatus, computer device, and storage medium. The fatigue driving state detection method includes: acquiring a monitoring image of the driving process and segmenting the image to obtain a corresponding region set; obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying a face region in the target candidate region; and identifying mouth key points and eye key points in the face region, and determining whether the driver is in a fatigue driving state according to how the positions of the mouth key points and eye key points change over time. The method can improve the accuracy of determining whether a driver is in a fatigue driving state.
Description
Technical Field
The present application relates to the field of image detection technologies, and in particular, to a method and an apparatus for detecting a fatigue driving state, a computer device, and a storage medium.
Background
With economic and social development, the transportation industry has expanded and the number of motor vehicles grows day by day. Road traffic accidents caused by driver fatigue are also increasing. Detecting whether the driver is in a fatigue driving state can help prevent such accidents.

When detecting whether the driver is in a fatigue driving state, it is possible to determine whether the driver's behavior is abnormal from various driving data collected while the vehicle is in motion, and thereby recognize driving fatigue. However, because many environmental disturbance factors affect the driving data, methods that determine whether the driver is in a fatigue driving state from driving data alone have low accuracy.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a fatigue driving state detection method, apparatus, computer device, and storage medium capable of improving the accuracy of determining whether a driver is in a fatigue driving state.
A fatigue driving state detection method, the method comprising:
acquiring a monitoring image of a driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying a face region in the target candidate region;
identifying a mouth key point and an eye key point in the face area, and judging whether a driver is in a fatigue driving state according to the change of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
In one embodiment of the fatigue driving state detection method, obtaining the target candidate region in the monitored image by comparing the similarities of two adjacent regions in the region set includes:
obtaining each similar area in the monitoring image by calculating the similarity of two adjacent areas in the area set;
and determining the similar regions with the areas exceeding a specified threshold value in each similar region as target candidate regions.
In one embodiment of the fatigue driving state detection method, obtaining each similar region in the monitored image by calculating the similarity between two adjacent regions in the region set includes:
calculating the similarity of two adjacent regions in the region set;
and merging the two adjacent regions with the similarity meeting the preset condition, and setting the merged regions as similar regions.
In one embodiment of the fatigue driving state detection method, the similarity includes texture similarity;
the step of comparing the similarity of the textures of two adjacent regions in the region set comprises:
carrying out graying processing on the monitoring image to obtain a grayscale image;
calculating the binary pattern characteristics of each area in the two adjacent areas, and acquiring vectors corresponding to the binary pattern characteristics;
and calculating the texture similarity of the two adjacent regions according to the vector.
In one embodiment of the fatigue driving state detection method, identifying the mouth key points and the eye key points in the face region includes:
and matching the face area with a template, and acquiring the key points of the mouth and the key points of the eyes in the face area according to the key points of the mouth and the key points of the eyes in the template.
In one embodiment of the fatigue driving state detection method, determining whether the driver is in the fatigue driving state according to the changes of the positions of the mouth key point and the eye key point over time includes:
judging whether the driver yawns according to the change of the position of the key point of the mouth along with the time;
judging whether the driver closes the eyes or not according to the change of the positions of the key points of the eyes along with the time;
and judging whether the driver is in a fatigue driving state by judging whether the driver is yawning and whether the driver's eyes are closed.
In one embodiment of the fatigue driving state detection method, determining whether the driver is in the fatigue driving state by determining whether the driver is yawning and whether the driver's eyes are closed includes:
acquiring the frequency of yawning of a driver;
acquiring the ratio of the time of the pupil being shielded by the eyelid to the specific time when the driver closes the eye;
and judging whether the driver is in a fatigue driving state or not according to the frequency of the yawning and the ratio of the time of the pupil being shielded by the eyelid in the eye closing process to the specific time.
A fatigue driving state detection device comprising:
the acquisition module is used for acquiring a monitoring image of the driving process and segmenting the image to obtain a corresponding area set;
the identification module is used for obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set and identifying a face region in the target candidate region;
the judging module is used for identifying a mouth key point and an eye key point in the face area and judging whether a driver is in a fatigue driving state or not according to the change of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a monitoring image of a driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying a face region in the target candidate region;
identifying a mouth key point and an eye key point in the face area, and judging whether a driver is in a fatigue driving state according to the change of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a monitoring image of a driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying a face region in the target candidate region;
identifying a mouth key point and an eye key point in the face area, and judging whether a driver is in a fatigue driving state according to the change of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
According to the fatigue driving state detection method, apparatus, computer device, and storage medium, the monitoring image of the driving process is segmented to obtain a corresponding region set; the target candidate region is obtained by comparing the similarity of two adjacent regions in the region set; the face region in the target candidate region is identified; and whether the driver is in the fatigue driving state is determined according to how the positions of the mouth key points and eye key points in the face region change over time. This can improve the accuracy of determining whether the driver is in a fatigue driving state.
Drawings
FIG. 1 is a diagram illustrating an exemplary embodiment of a fatigue driving state detection method;
FIG. 2 is a flowchart illustrating a method for detecting fatigue driving status according to an embodiment;
FIG. 3 is a diagram illustrating binary pattern feature extraction from an image according to an embodiment;
FIG. 4 is a block diagram showing the structure of a fatigue driving state detecting apparatus according to an embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The fatigue driving state detection method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 and the server 104 communicate via a network. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, a fatigue driving state detection method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step 202, acquiring a monitoring image of the driving process, and segmenting the image to obtain a corresponding region set.
In this step, the monitoring image may be a video frame of the monitored driver, and the monitoring image may be segmented using a graph-based image segmentation method (Efficient Graph-Based Image Segmentation) into an original region set R = {r1, ..., rn}, where n is an integer representing the number of regions into which the image is divided.
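As a minimal sketch of this step (assuming Python with OpenCV and scikit-image; the library choice and parameter values are assumptions, not specified by the patent), the initial region set can be produced as follows:

```python
import cv2
import numpy as np
from skimage.segmentation import felzenszwalb

def initial_region_set(frame_bgr):
    """Segment a monitoring frame into an initial region set R = {r1, ..., rn}
    using Felzenszwalb's efficient graph-based segmentation."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # Every pixel receives an integer region label; labels 0..n-1 index the regions.
    labels = felzenszwalb(rgb, scale=100, sigma=0.8, min_size=50)
    n = labels.max() + 1
    # Represent each region by the coordinates of its pixels.
    regions = [np.argwhere(labels == k) for k in range(n)]
    return labels, regions
```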
And 204, acquiring a target candidate region in the monitored image by comparing the similarity of two adjacent regions in the region set, and identifying the face region in the target candidate region.
In the above step, the similarity may include color similarity, texture similarity, size similarity, and fill (matching) similarity, and the final similarity between two adjacent regions may be a combination of these four. Identifying the face region in the target candidate region may involve the following steps: describing the common attributes of human faces with Haar features; building a so-called integral image, from which several different rectangular features can be obtained rapidly; training with an iterative (boosting) algorithm; and building a hierarchical (cascade) classifier.
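A minimal sketch of the face-detection step (assuming OpenCV's pretrained Haar cascade, which implements the integral-image and boosted-cascade approach outlined above; the specific model file is an assumption):

```python
import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV (assumed model choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(candidate_region_bgr):
    """Return face bounding boxes (x, y, w, h) inside a target candidate region."""
    gray = cv2.cvtColor(candidate_region_bgr, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```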
And step 206, identifying a mouth key point and an eye key point in the face area, and judging whether the driver is in a fatigue driving state according to the changes of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of mouth.
Wherein, the detection of key points of human eyes and mouth can be carried out through a human face detection library.
In the embodiment, the monitoring image of the driving process is segmented to obtain the corresponding area set, the target candidate area is obtained by comparing the similarity of two adjacent areas in the area set, the face area in the target candidate area is identified, whether the driver is in the fatigue driving state is judged according to the change of the positions of the key points of the mouth and the key points of the eyes in the face area along with the time, and the accuracy of judging whether the driver is in the fatigue driving state can be improved.
In one embodiment, the target candidate region in the monitored image may be acquired by the steps comprising: obtaining each similar area in the monitoring image by calculating the similarity of two adjacent areas in the area set; and determining the similar regions with the areas exceeding a specified threshold value in the similar regions as target candidate regions.
In the above embodiment, the final similarity between two adjacent regions may be a combination of color similarity, texture similarity, size similarity, and fill similarity. For example, let S_colour(ri, rj) denote the color similarity of two adjacent regions, S_texture(ri, rj) the texture similarity, S_size(ri, rj) the size similarity, and S_fill(ri, rj) the fill similarity. The final similarity can then be expressed by the following formula:

S(ri, rj) = a1·S_colour(ri, rj) + a2·S_texture(ri, rj) + a3·S_size(ri, rj) + a4·S_fill(ri, rj)

where each ai is a constant with ai ∈ {0, 1}.
In the embodiment, each similar region in the monitored image is obtained by calculating the similarity of two adjacent regions in the region set, and the similar region with the area exceeding the specified threshold in each similar region is determined as the target candidate region, so that the target candidate region can be accurately positioned, and the accuracy of judging whether the driver is in the fatigue driving state can be further improved.
In one embodiment, the similarity includes a texture similarity S_texture(ri, rj). The texture similarity of two adjacent regions in the region set may be compared as follows: graying the monitoring image to obtain a grayscale image; calculating the binary pattern features of each of the two adjacent regions and obtaining the vectors corresponding to those features; and calculating the texture similarity of the two adjacent regions from the vectors.
In the above embodiment, the image may be converted into a grayscale map, and the Local Binary Pattern (LBP) features of the image are then calculated by the following formula:

LBP(xc, yc) = Σ (p = 0 to 7) 2^p · s(ip − ic)

where (xc, yc) are the coordinates of the central pixel, p indexes the p-th pixel of the neighbourhood (eight pixels in total), ip is the gray value of the neighbourhood pixel, and ic is the gray value of the central pixel. The sign function s(x) is defined as s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise.

As shown in fig. 3, the number in each square is the gray value of a pixel point, and the left side is the original image. The central square of the 3×3 block is used to threshold its neighbours: a pixel greater than or equal to the centre is marked 1, and a pixel smaller than the centre is marked 0. The binary digits around the central pixel are then converted into a decimal number, giving the LBP value. The LBP features can be represented by a 26-bin histogram, from which a 26-dimensional vector is obtained.
the correlation of the texture can be calculated by the following formula:
the color histogram is obtained by calculating the pixels of the color in each small interval, and the more bins, the stronger the resolution of the histogram to the color is.
The color similarity S_colour(ri, rj) can be calculated according to the following formula:

S_colour(ri, rj) = Σ (k = 1 to n) min(ci^k, cj^k)

where Ci = (ci^1, ..., ci^n) is the multidimensional color-histogram vector corresponding to each region and n is the dimension of Ci.
The size similarity S_size(ri, rj) can be calculated according to the following formula:

S_size(ri, rj) = 1 − (size(ri) + size(rj)) / size(im)

where size(ri) is the number of pixels in region ri and size(im) is the number of pixels of the whole picture.
The fill similarity S_fill(ri, rj), which measures how well two regions fit together, can be calculated according to the following formula:

S_fill(ri, rj) = 1 − (size(BBij) − size(ri) − size(rj)) / size(im)

where BBij is the smallest bounding box that contains both region i and region j.
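Taken together, the four terms and their weighted combination can be sketched as follows (a sketch that assumes each region carries a normalized color histogram, a normalized LBP histogram, a pixel count, and a bounding box; this data layout is an assumption for illustration):

```python
import numpy as np

def hist_intersection(h1, h2):
    """Sum of element-wise minima of two normalized histogram vectors."""
    return float(np.minimum(h1, h2).sum())

def combined_similarity(ri, rj, im_size, a=(1, 1, 1, 1)):
    """S(ri, rj) = a1*S_colour + a2*S_texture + a3*S_size + a4*S_fill.
    Each region is a dict with keys 'color_hist', 'lbp_hist', 'size',
    and 'bbox' = (x0, y0, x1, y1)."""
    s_colour = hist_intersection(ri["color_hist"], rj["color_hist"])
    s_texture = hist_intersection(ri["lbp_hist"], rj["lbp_hist"])
    s_size = 1.0 - (ri["size"] + rj["size"]) / im_size
    # Smallest bounding box BBij enclosing both regions.
    x0 = min(ri["bbox"][0], rj["bbox"][0]); y0 = min(ri["bbox"][1], rj["bbox"][1])
    x1 = max(ri["bbox"][2], rj["bbox"][2]); y1 = max(ri["bbox"][3], rj["bbox"][3])
    bb_size = (x1 - x0) * (y1 - y0)
    s_fill = 1.0 - (bb_size - ri["size"] - rj["size"]) / im_size
    return a[0]*s_colour + a[1]*s_texture + a[2]*s_size + a[3]*s_fill
```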
In the embodiment, when the texture similarity is calculated, the local binary pattern features are extracted, and the calculation amount of feature extraction is greatly reduced under the condition of keeping the accuracy.
In one embodiment, obtaining each similar region in the monitored image by calculating the similarity between two adjacent regions in the region set includes: calculating the similarity of two adjacent regions in the region set; and merging two adjacent regions whose similarity meets a preset condition, the merged region being set as a similar region.
In the above embodiment, the similarity S = {s1, ..., sn} of each pair of adjacent regions in the original region set R = {r1, ..., rn} may be calculated first, and the following region-merging step is then executed: find the two regions ri and rj with the highest similarity and merge them into a new region rt, which is added to R; then remove from S all entries related to ri and rj, calculate the similarity s(rt, r*) between the new region rt and all of its neighbours, and repeat the region-merging step until the set S is empty. The regions obtained by merging are the similar regions, and a monitored image may contain several similar regions.
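A sketch of this greedy merging loop (the bookkeeping structures — a region dictionary, a set of adjacent id pairs, and merge/similarity callables — are assumptions for illustration):

```python
def merge_similar_regions(regions, neighbors, similarity, merge):
    """Repeatedly merge the most similar adjacent pair until S is empty.

    regions:    dict region_id -> region data
    neighbors:  set of frozenset({i, j}) pairs of adjacent region ids
    similarity: callable(region, region) -> float
    merge:      callable(region, region) -> merged region
    """
    S = {p: similarity(regions[min(p)], regions[max(p)]) for p in neighbors}
    next_id = max(regions) + 1
    while S:
        pair = max(S, key=S.get)              # pair (ri, rj) with highest similarity
        i, j = tuple(pair)
        rt = merge(regions[i], regions[j])    # merge into a new region rt
        regions[next_id] = rt
        # Remove all similarities related to ri and rj ...
        affected = {p for p in S if i in p or j in p}
        touching = {k for p in affected for k in p} - {i, j}
        for p in affected:
            del S[p]
        # ... and compute s(rt, r*) for every neighbour of the new region.
        for k in touching:
            S[frozenset((next_id, k))] = similarity(rt, regions[k])
        next_id += 1
    return regions
```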
In the embodiment, the monitoring image of the driving process is segmented to obtain the corresponding area set, the target candidate area is obtained by comparing the similarity of two adjacent areas in the area set, the face area in the target candidate area is identified, whether the driver is in the fatigue driving state is judged according to the change of the positions of the key points of the mouth and the key points of the eyes in the face area along with the time, and the accuracy of judging whether the driver is in the fatigue driving state can be improved.
In one embodiment, the mouth and eye keypoints in the face region may be identified by: and matching the face area with the template, and acquiring the key points of the mouth and the eye in the face area according to the key points of the mouth and the eye in the template.
In the above embodiment, the template may be the Dlib library, with which 68 facial key points can be detected: points 36 to 41 are the key points corresponding to the right eye, points 42 to 47 are the key points corresponding to the left eye, and points 48 to 68 are the key points corresponding to the mouth.
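A minimal sketch of this landmark step (assuming dlib and its publicly distributed 68-point shape-predictor model; the model file name is an assumption):

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Publicly distributed 68-point landmark model (assumed file path).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_keypoints(gray):
    """Return the 68 (x, y) facial key points of the first detected face,
    or None when no face is found."""
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```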
In the embodiment, the monitoring image of the driving process is segmented to obtain the corresponding area set, the target candidate area is obtained by comparing the similarity of two adjacent areas in the area set, the face area in the target candidate area is identified, whether the driver is in the fatigue driving state is judged according to the change of the positions of the key points of the mouth and the key points of the eyes in the face area along with the time, and the accuracy of judging whether the driver is in the fatigue driving state can be improved.
In one embodiment, it may be determined whether the driver is in a fatigue driving state by: judging whether the driver yawns according to the change of the position of the mouth key points over time; judging whether the driver's eyes are closed according to the change of the positions of the eye key points over time; and judging whether the driver is in a fatigue driving state based on whether the driver is yawning and whether the driver's eyes are closed.
In the above embodiment, whether the driver is yawning may be determined in the following manner. The detected mouth key points are converted into relative coordinates, taking point 48 of the mouth as the origin:

x′ = x − xo
y′ = y − yo

where (xo, yo) are the coordinates of the origin.

From the relative coordinates, the difference between each point and the same point in the previous frame is computed, and the displacement distance is calculated:

Dt = sqrt((x′t − x′t−1)² + (y′t − y′t−1)²)

where Dt is the displacement distance of a key point at time t, x′t is its relative X-axis coordinate at time t, and x′t−1 is its relative X-axis coordinate one frame earlier (the Y-axis terms are analogous). The displacement distance is computed for each key point i in the region (e.g. mouth key points 48–68), and t is the time. The displacement distances of all points are accumulated over 5 frames, and when the accumulated value exceeds the corresponding threshold, the state is set to a yawning state, as sketched below.
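A sketch of the yawn check (the 5-frame window follows the description above; the threshold value itself is an illustrative assumption):

```python
import numpy as np

def is_yawning(mouth_seq, threshold=40.0):
    """mouth_seq: six consecutive frames, each an array of (x, y) mouth key
    points (points 48-68), giving five frame-to-frame displacement steps.
    Coordinates are made relative to point 48 of each frame, per-point
    displacements are summed over the window, and the total is compared
    with a threshold (the value 40.0 is illustrative, not from the patent)."""
    rel = [pts - pts[0] for pts in map(np.asarray, mouth_seq)]  # point 48 as origin
    total = 0.0
    for t in range(1, len(rel)):
        d = np.linalg.norm(rel[t] - rel[t - 1], axis=1)  # Dt for every key point
        total += d.sum()
    return total > threshold
```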
The closed state of the eyes can be judged by calculating the Y-axis differences of the corresponding eye key points:

for the right eye: E = (y37 − y41) + (y38 − y40);
for the left eye: E = (y43 − y47) + (y44 − y46).

When E is smaller than the threshold value, the eye is set to a blinking (closed) state, and when both eyes are closed the driver is regarded as being in an eye-closed state; a sketch of this check follows below.
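A sketch of the eye-closure check (the landmark indices follow the formulas above; the threshold value and the use of magnitudes to absorb the image-coordinate sign convention are assumptions):

```python
def eyes_closed(keypoints, threshold=3.0):
    """keypoints: list of 68 (x, y) facial key points.
    Returns True when both eyes are judged closed. In image coordinates the
    upper-lid y values are smaller than the lower-lid ones, so an open eye
    yields a large |E| and a closed eye a near-zero |E|."""
    y = [p[1] for p in keypoints]
    e_right = (y[37] - y[41]) + (y[38] - y[40])
    e_left = (y[43] - y[47]) + (y[44] - y[46])
    return abs(e_right) < threshold and abs(e_left) < threshold
```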
In the embodiment, the monitoring image of the driving process is segmented to obtain the corresponding area set, the target candidate area is obtained by comparing the similarity of two adjacent areas in the area set, the face area in the target candidate area is identified, whether the driver is in the fatigue driving state is judged according to the change of the positions of the key points of the mouth and the key points of the eyes in the face area along with the time, and the accuracy of judging whether the driver is in the fatigue driving state can be improved.
In one embodiment, it may be determined whether the driver is in a fatigue driving state by: acquiring the frequency at which the driver yawns; acquiring the ratio of the time during which the pupil is occluded by the eyelid, while the driver's eyes are closed, to a specified time window; and judging whether the driver is in a fatigue driving state according to the yawning frequency and that ratio.
The ratio of the time during which the pupil is occluded by the eyelid when the driver closes his eyes to the specified time window (PERCLOS, percentage of eyelid closure over time) can be calculated by the following equation:

PERCLOS = (eye-closed time within the window / total length of the window) × 100%
and when the PERCLOS value exceeds a threshold value and yawning behavior does not occur at all, determining that the driver is in a fatigue driving state.
In the embodiment, the monitoring image of the driving process is segmented to obtain the corresponding area set, the target candidate area is obtained by comparing the similarity of two adjacent areas in the area set, the face area in the target candidate area is identified, whether the driver is in the fatigue driving state is judged according to the change of the positions of the key points of the mouth and the key points of the eyes in the face area along with the time, and the accuracy of judging whether the driver is in the fatigue driving state can be improved.
It should be understood that, although the steps in the flowchart of fig. 2 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a fatigue driving state detecting device including:
an obtaining module 402, configured to obtain a monitoring image of a driving process, and segment the image to obtain a corresponding region set;
the identifying module 404 is configured to obtain a target candidate region in the monitored image by comparing similarities of two adjacent regions in the region set, and identify a face region in the target candidate region;
the determining module 406 is configured to identify a key point of the mouth and a key point of the eyes in the face area, and determine whether the driver is in a fatigue driving state according to changes of positions of the key point of the mouth and the key point of the eyes over time, where the key point of the eyes is a coordinate point representing positions of the eyes, and the key point of the mouth is a coordinate point representing positions of the mouth.
For specific limitations of the fatigue driving state detection device, reference may be made to the above limitations of the fatigue driving state detection method, which are not described herein again. The above-mentioned fatigue driving state detection device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The terms "comprises" and "comprising," and any variations thereof, of embodiments of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or (module) elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing fatigue driving state detection data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a fatigue driving state detection method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a monitoring image of a driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying the face region in the target candidate region;
identifying a mouth key point and an eye key point in the face area, and judging whether the driver is in a fatigue driving state according to the changes of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
In one embodiment, the processor, when executing the computer program, further performs the steps of: obtaining each similar area in the monitoring image by calculating the similarity of two adjacent areas in the area set; and determining the similar regions with the areas exceeding a specified threshold value in the similar regions as target candidate regions.
In one embodiment, the processor, when executing the computer program, further performs the steps of: calculating the similarity of two adjacent regions in the region set; and merging the two adjacent regions with the similarity meeting the preset condition, and setting the merged regions as similar regions.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the similarity comprises texture similarity; the step of comparing the similarity of the textures of two adjacent regions in the region set comprises the following steps: carrying out graying processing on the monitoring image to obtain a grayscale image; calculating the binary pattern characteristics of each area in the two adjacent areas, and acquiring vectors corresponding to the binary pattern characteristics; and calculating the texture similarity of the two adjacent regions according to the vectors.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and matching the face area with the template, and acquiring the key points of the mouth and the eye in the face area according to the key points of the mouth and the eye in the template.
In one embodiment, the processor, when executing the computer program, further performs the steps of: judging whether the driver yawns according to the change of the position of the mouth key points over time; judging whether the driver's eyes are closed according to the change of the positions of the eye key points over time; and judging whether the driver is in a fatigue driving state based on whether the driver is yawning and whether the driver's eyes are closed.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the frequency of yawning of a driver; acquiring the ratio of the time of the pupil being shielded by the eyelid to the specific time when the driver closes the eye; and judging whether the driver is in a fatigue driving state or not according to the frequency of yawning and the ratio of the time of covering the pupil by the eyelid in the eye closing process to the specific time.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a monitoring image of a driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying the face region in the target candidate region;
identifying a mouth key point and an eye key point in the face area, and judging whether the driver is in a fatigue driving state according to the changes of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining each similar area in the monitoring image by calculating the similarity of two adjacent areas in the area set; and determining the similar regions with the areas exceeding a specified threshold value in the similar regions as target candidate regions.
In one embodiment, the computer program when executed by the processor further performs the steps of: calculating the similarity of two adjacent regions in the region set; and merging the two adjacent regions with the similarity meeting the preset condition, and setting the merged regions as similar regions.
In one embodiment, the computer program when executed by the processor further performs the steps of: the similarity comprises texture similarity; the step of comparing the similarity of the textures of two adjacent regions in the region set comprises the following steps: carrying out graying processing on the monitoring image to obtain a grayscale image; calculating the binary pattern characteristics of each area in the two adjacent areas, and acquiring vectors corresponding to the binary pattern characteristics; and calculating the texture similarity of the two adjacent regions according to the vectors.
In one embodiment, the computer program when executed by the processor further performs the steps of: and matching the face area with the template, and acquiring the key points of the mouth and the eye in the face area according to the key points of the mouth and the eye in the template.
In one embodiment, the computer program when executed by the processor further performs the steps of: judging whether the driver yawns according to the change of the position of the mouth key points over time; judging whether the driver's eyes are closed according to the change of the positions of the eye key points over time; and judging whether the driver is in a fatigue driving state based on whether the driver is yawning and whether the driver's eyes are closed.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring the frequency of yawning of a driver; acquiring the ratio of the time of the pupil being shielded by the eyelid to the specific time when the driver closes the eye; and judging whether the driver is in a fatigue driving state or not according to the frequency of yawning and the ratio of the time of covering the pupil by the eyelid in the eye closing process to the specific time.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A fatigue driving state detection method, characterized by comprising:
acquiring a monitoring image of a driving process, and segmenting the image to obtain a corresponding region set;
obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set, and identifying a face region in the target candidate region;
identifying a mouth key point and an eye key point in the face area, and judging whether a driver is in a fatigue driving state according to the change of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
2. The fatigue driving state detection method according to claim 1, wherein the obtaining of the target candidate region in the monitored image by comparing the similarity of two adjacent regions in the region set comprises:
obtaining each similar area in the monitoring image by calculating the similarity of two adjacent areas in the area set;
and determining the similar regions with the areas exceeding a specified threshold value in each similar region as target candidate regions.
3. The fatigue driving state detection method according to claim 2, wherein the obtaining of each similar region in the monitored image by calculating a similarity between two adjacent regions in the region set comprises:
calculating the similarity of two adjacent regions in the region set;
and merging the two adjacent regions with the similarity meeting the preset condition, and setting the merged regions as similar regions.
4. The fatigue driving state detecting method according to claim 1, wherein the similarity includes a texture similarity;
the step of comparing the similarity of the textures of two adjacent regions in the region set comprises:
carrying out graying processing on the monitoring image to obtain a grayscale image;
calculating the binary pattern characteristics of each area in the two adjacent areas, and acquiring vectors corresponding to the binary pattern characteristics;
and calculating the texture similarity of the two adjacent regions according to the vector.
5. The fatigue driving state detection method according to any one of claims 1 to 4, wherein the identifying of the mouth key points and the eye key points in the face region includes:
and matching the face area with a template, and acquiring the key points of the mouth and the key points of the eyes in the face area according to the key points of the mouth and the key points of the eyes in the template.
6. The fatigue driving state detecting method according to any one of claims 1 to 4, wherein the determining whether the driver is in the fatigue driving state based on the changes over time in the positions of the mouth key point and the eye key point, respectively, comprises:
judging whether the driver yawns according to the change of the position of the key point of the mouth along with the time;
judging whether the driver closes the eyes or not according to the change of the positions of the key points of the eyes along with the time;
and judging whether the driver is in a fatigue driving state by judging whether the driver is yawning and whether the driver's eyes are closed.
7. The fatigue driving state detecting method according to claim 6, wherein the determining whether the driver is in the fatigue driving state by determining whether the driver is yawning and eyes are closed includes:
acquiring the frequency of yawning of a driver;
acquiring the ratio of the time of the pupil being shielded by the eyelid to the specific time when the driver closes the eye;
and judging whether the driver is in a fatigue driving state or not according to the frequency of the yawning and the ratio of the time of the pupil being shielded by the eyelid in the eye closing process to the specific time.
8. A fatigue driving state detection device, characterized by comprising:
the acquisition module is used for acquiring a monitoring image of the driving process and segmenting the image to obtain a corresponding area set;
the identification module is used for obtaining a target candidate region in the monitoring image by comparing the similarity of two adjacent regions in the region set and identifying a face region in the target candidate region;
the judging module is used for identifying a mouth key point and an eye key point in the face area and judging whether a driver is in a fatigue driving state or not according to the change of the positions of the mouth key point and the eye key point along with time, wherein the eye key point is a coordinate point representing the position of eyes, and the mouth key point is a coordinate point representing the position of a mouth.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the fatigue driving state detection method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the fatigue driving state detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810649253.XA CN108830240A (en) | 2018-06-22 | 2018-06-22 | Fatigue driving state detection method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108830240A true CN108830240A (en) | 2018-11-16 |
Family
ID=64143257
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810649253.XA Pending CN108830240A (en) | 2018-06-22 | 2018-06-22 | Fatigue driving state detection method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830240A (en) |
- 2018-06-22: CN CN201810649253.XA patent/CN108830240A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101950355A (en) * | 2010-09-08 | 2011-01-19 | 中国人民解放军国防科学技术大学 | Method for detecting fatigue state of driver based on digital video |
CN102254151A (en) * | 2011-06-16 | 2011-11-23 | 清华大学 | Driver fatigue detection method based on face video analysis |
CN102436715A (en) * | 2011-11-25 | 2012-05-02 | 大连海创高科信息技术有限公司 | Detection method for fatigue driving |
CN104574820A (en) * | 2015-01-09 | 2015-04-29 | 安徽清新互联信息科技有限公司 | Fatigue drive detecting method based on eye features |
CN104732251A (en) * | 2015-04-23 | 2015-06-24 | 郑州畅想高科股份有限公司 | Video-based method of detecting driving state of locomotive driver |
CN105844248A (en) * | 2016-03-29 | 2016-08-10 | 北京京东尚科信息技术有限公司 | Human face detection method and human face detection device |
CN105844252A (en) * | 2016-04-01 | 2016-08-10 | 南昌大学 | Face key part fatigue detection method |
CN106372621A (en) * | 2016-09-30 | 2017-02-01 | 防城港市港口区高创信息技术有限公司 | Face recognition-based fatigue driving detection method |
CN106778633A (en) * | 2016-12-19 | 2017-05-31 | 江苏慧眼数据科技股份有限公司 | A kind of pedestrian recognition method based on region segmentation |
Non-Patent Citations (4)
Title |
---|
KING NGI NGAN et al.: "Video Segmentation and Its Applications" (《视频分割及其应用》), 30 April 2014 *
曾龙龙 (Zeng Longlong): "Research on Real-Time Face Detection and Tracking Algorithms Based on Video Surveillance", China Masters' Theses Full-Text Database, Information Science & Technology *
王琳琳 (Wang Linlin): "Research on Face Detection Based on a Skin-Color Model and the AdaBoost Algorithm", China Masters' Theses Full-Text Database, Information Science & Technology *
雷万军 (Lei Wanjun) et al.: "Experiment Guide for the Biomedical Engineering Major" (《生物医学工程专业实验指导》), 30 September 2012 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111241874A (en) * | 2018-11-28 | 2020-06-05 | 中国移动通信集团有限公司 | Behavior monitoring method and device and computer readable storage medium |
CN109948434A (en) * | 2019-01-31 | 2019-06-28 | 平安科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium for demographics of going on board |
CN109948434B (en) * | 2019-01-31 | 2023-07-21 | 平安科技(深圳)有限公司 | Method, device, computer equipment and storage medium for boarding number statistics |
CN111797654A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Driver fatigue state detection method and device, storage medium and mobile terminal |
CN110723072A (en) * | 2019-10-09 | 2020-01-24 | 卓尔智联(武汉)研究院有限公司 | Driving assistance method and device, computer equipment and storage medium |
CN110723072B (en) * | 2019-10-09 | 2021-06-01 | 卓尔智联(武汉)研究院有限公司 | Driving assistance method and device, computer equipment and storage medium |
CN112052770A (en) * | 2020-08-31 | 2020-12-08 | 北京地平线信息技术有限公司 | Method, apparatus, medium, and electronic device for fatigue detection |
CN112699768A (en) * | 2020-12-25 | 2021-04-23 | 哈尔滨工业大学(威海) | Fatigue driving detection method and device based on face information and readable storage medium |
CN117935231A (en) * | 2024-03-20 | 2024-04-26 | 杭州臻稀生物科技有限公司 | Non-inductive fatigue driving monitoring and intervention method |
CN117935231B (en) * | 2024-03-20 | 2024-06-07 | 杭州臻稀生物科技有限公司 | Non-inductive fatigue driving monitoring and intervention method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108830240A (en) | Fatigue driving state detection method, device, computer equipment and storage medium | |
CN110852285B (en) | Object detection method and device, computer equipment and storage medium | |
CN111178245B (en) | Lane line detection method, lane line detection device, computer equipment and storage medium | |
CN109359602B (en) | Lane line detection method and device | |
CN110390666B (en) | Road damage detection method, device, computer equipment and storage medium | |
TWI686774B (en) | Human face live detection method and device | |
CN111860670A (en) | Domain adaptive model training method, image detection method, device, equipment and medium | |
US20180204057A1 (en) | Object detection method and object detection apparatus | |
CN108229297B (en) | Face recognition method and device, electronic equipment and computer storage medium | |
CN109325412B (en) | Pedestrian recognition method, device, computer equipment and storage medium | |
US10275677B2 (en) | Image processing apparatus, image processing method and program | |
JP4479478B2 (en) | Pattern recognition method and apparatus | |
CN109035295B (en) | Multi-target tracking method, device, computer equipment and storage medium | |
Bedruz et al. | Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach | |
CN111539986B (en) | Target tracking method, device, computer equipment and storage medium | |
CN112560796A (en) | Human body posture real-time detection method and device, computer equipment and storage medium | |
CN111191533A (en) | Pedestrian re-identification processing method and device, computer equipment and storage medium | |
CN112241952B (en) | Brain midline identification method, device, computer equipment and storage medium | |
CN110046577B (en) | Pedestrian attribute prediction method, device, computer equipment and storage medium | |
CN111401196A (en) | Method, computer device and computer readable storage medium for self-adaptive face clustering in limited space | |
CN111582077A (en) | Safety belt wearing detection method and device based on artificial intelligence software technology | |
CN111914668A (en) | Pedestrian re-identification method, device and system based on image enhancement technology | |
CN112348116A (en) | Target detection method and device using spatial context and computer equipment | |
CN111860582B (en) | Image classification model construction method and device, computer equipment and storage medium | |
EP3726421A2 (en) | Recognition method and apparatus for false detection of an abandoned object and image processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181116 |