
CN114463779A - Smoking identification method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114463779A
CN114463779A (application CN202011143609.6A)
Authority
CN
China
Prior art keywords
human body
image
mouth
smoking
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011143609.6A
Other languages
Chinese (zh)
Inventor
谢春宇 (Xie Chunyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hongxiang Technical Service Co Ltd
Original Assignee
Beijing Hongxiang Technical Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hongxiang Technical Service Co Ltd filed Critical Beijing Hongxiang Technical Service Co Ltd
Priority to CN202011143609.6A priority Critical patent/CN114463779A/en
Publication of CN114463779A publication Critical patent/CN114463779A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Compared with prior-art approaches that identify smoking by detecting smoke, the invention extracts a human body region from an image to be identified and performs human body posture analysis on it, obtains key point information corresponding to each part of the human body to determine the relative distance between the hand and the mouth, and then determines a target region comprising the hand and the mouth; smoking identification is then performed according to the target region and a preset image classification model.

Description

Smoking identification method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a smoking recognition method, a device, equipment and a storage medium.
Background
Existing smoking identification mainly relies on smoke alarms, but an alarm is not easily triggered below a certain smoke concentration. Another existing method identifies smoking by detecting smoke with a camera, but smoke in the camera's environment is strongly affected by illumination, background, and the like, so the overall smoke recognition rate is low. Therefore, the accuracy of existing smoking identification methods is not high, and a good smoking identification effect cannot be achieved.
The above is only intended to assist understanding of the technical solution of the present invention, and does not constitute an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a smoking identification method, a device, equipment and a storage medium, and aims to solve the technical problems that in the prior art, the accuracy of smoking identification is not high, and a good smoking identification effect cannot be achieved.
In order to achieve the above object, the present invention provides a smoking identification method, including the steps of:
extracting a human body region from an image to be recognized;
analyzing the human body posture of the human body region to obtain key point information corresponding to each part of the human body;
determining the relative distance between the hand and the mouth of the human body according to the key point information;
when the relative distance is smaller than a preset distance threshold value, extracting a target area containing a hand and a mouth from the human body area;
and smoking identification is carried out according to the target area and a preset image classification model.
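The five steps above can be sketched end to end. This is a minimal illustration only: the detector, pose estimator, and classifier below are hypothetical stubs standing in for the patent's "preset" models, and the distance threshold is an assumed value.

```python
import math

# Hypothetical stand-ins for the patent's preset models; names and
# return values are illustrative only, not from the patent.
def detect_human_regions(image):
    """Stub human-shape detector: returns bounding boxes (x, y, w, h)."""
    return [(10, 10, 100, 200)]

def estimate_keypoints(region):
    """Stub pose estimator: returns part name -> (x, y) key points."""
    return {"right_hand": (40, 60), "left_hand": (90, 150), "mouth": (45, 55)}

def classify_target(target_region):
    """Stub binary classifier: returns 'smoking' or 'not_smoking'."""
    return "smoking"

def identify_smoking(image, distance_threshold=20.0):
    """Pipeline of the claimed method: detect bodies, estimate key points,
    gate on hand-mouth distance, then classify the hand-and-mouth region."""
    results = []
    for region in detect_human_regions(image):
        keypoints = estimate_keypoints(region)
        mouth = keypoints["mouth"]
        for hand in ("left_hand", "right_hand"):
            if math.dist(keypoints[hand], mouth) < distance_threshold:
                # A target area containing hand and mouth would be cropped here.
                results.append(classify_target(region))
                break
    return results

print(identify_smoking(None))  # → ['smoking'] with these stubs
```

With the stub key points, the right hand is about 7 pixels from the mouth, so the distance gate passes and the classifier runs.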
Optionally, the extracting a human body region from the image to be recognized includes:
detecting the image to be recognized based on a preset human shape detection model, and determining the position of a human body according to a detection result;
and extracting a human body region from the image to be recognized according to the human body position.
Optionally, the analyzing the human body posture of the human body region to obtain the key point information corresponding to each part of the human body includes:
intercepting a human body image from the image to be identified according to the human body area;
and analyzing the human body posture of the human body image based on a preset human body posture estimation model to obtain key point information corresponding to each part of the human body.
Optionally, the analyzing the human body posture of the human body image based on the preset human body posture estimation model to obtain the key point information corresponding to each part of the human body includes:
identifying the human body image based on the preset human body posture estimation model to obtain part position information corresponding to each part of the human body;
and determining key point information corresponding to each part of the human body according to the part position information.
Optionally, the determining a relative distance between the hand and the mouth of the human body according to the key point information includes:
extracting hand key point information corresponding to a hand of a human body and mouth key point information corresponding to a mouth of the human body from the key point information, respectively;
determining hand key points according to the hand key point information, and determining mouth key points according to the mouth key point information;
and determining the relative distance between the hand and the mouth of the human body according to the hand key points and the mouth key points.
Optionally, the determining a relative distance between the hand and the mouth of the human body according to the hand key point and the mouth key point includes:
selecting a point in the human body image as a coordinate origin, and establishing a coordinate system according to the coordinate origin;
determining hand coordinates of the hand keypoints in the coordinate system, and determining mouth coordinates of the mouth keypoints in the coordinate system;
and determining the relative distance between the hand and the mouth of the human body according to the hand coordinates and the mouth coordinates.
Optionally, the determining a relative distance between the hand and the mouth of the human body from the hand coordinates and the mouth coordinates comprises:
acquiring image size information of the human body image, and determining the image width and the image height according to the image size information;
normalizing the hand coordinates and the mouth coordinates according to the image width and the image height to obtain target hand coordinates and target mouth coordinates;
and determining the relative distance between the hand and the mouth of the human body according to the target hand coordinates and the target mouth coordinates.
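The normalization step above can be written directly: divide each coordinate by the image width and height so that the relative distance, and hence the preset threshold, is independent of image resolution. A sketch (function name is illustrative):

```python
import math

def normalized_distance(hand_xy, mouth_xy, img_w, img_h):
    """Normalize pixel coordinates by image size, then take the
    Euclidean distance; this makes one distance threshold usable
    across images of different resolutions."""
    hx, hy = hand_xy[0] / img_w, hand_xy[1] / img_h
    mx, my = mouth_xy[0] / img_w, mouth_xy[1] / img_h
    return math.hypot(hx - mx, hy - my)

d = normalized_distance((120, 300), (150, 90), img_w=600, img_h=600)
```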
Optionally, the intercepting a human body image from the image to be recognized according to the human body region includes:
determining the number of human bodies in the image to be recognized according to the human body area;
judging whether the number of the human bodies is a preset number or not;
and when the number of the human bodies is a preset number, intercepting a human body image from the image to be identified according to the human body area.
Optionally, after determining whether the number of the human bodies is a preset number, the method further includes:
when the number of the human bodies is not a preset number, determining the number of the images according to the number of the human bodies;
and intercepting a plurality of human body images from the image to be identified according to the human body area and the number of the images.
Optionally, before performing smoking recognition according to the target area and a preset image classification model, the method further includes:
acquiring smoking sample images and non-smoking sample images;
and training a preset neural network model according to the smoking sample image and the non-smoking sample image to obtain a preset image classification model.
Optionally, the training a preset neural network model according to the smoking sample image and the non-smoking sample image to obtain a preset image classification model includes:
dividing the smoking sample image into a smoking training image and a smoking test image, and dividing the non-smoking sample image into a non-smoking training image and a non-smoking test image;
generating a training data set according to the smoking training image and the non-smoking training image, and generating a test data set according to the smoking test image and the non-smoking test image;
training a preset neural network model according to the training data set to obtain an initial image classification model;
testing the initial image classification model through the test data set;
and when the test is passed, taking the initial image classification model as a preset image classification model.
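The dataset-construction steps above can be sketched as follows. The 80/20 split ratio and the label convention (1 = smoking, 0 = non-smoking) are assumptions for illustration; the patent does not specify them.

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Split sample image identifiers into training and test subsets
    (the 80/20 ratio is an assumption)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def build_datasets(smoking_samples, non_smoking_samples):
    """Build labeled training and test sets from the two sample pools."""
    s_train, s_test = split_samples(smoking_samples)
    n_train, n_test = split_samples(non_smoking_samples)
    train = [(p, 1) for p in s_train] + [(p, 0) for p in n_train]
    test = [(p, 1) for p in s_test] + [(p, 0) for p in n_test]
    return train, test
```

The resulting `train` set would feed the preset neural network model, and `test` would be used for the pass/fail check before accepting the model.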
Optionally, the smoking recognition according to the target area and a preset image classification model includes:
intercepting a target image containing a hand and a mouth according to the target area;
performing secondary classification processing on the target image through a preset image classification model to obtain a classification result;
and determining a smoking identification result according to the classification result.
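The final decision step, turning the two-class ("secondary classification") output into a smoking identification result, reduces to a threshold on the model's score. The 0.5 decision threshold here is an assumption, not from the patent:

```python
def smoking_result(prob_smoking, decision_threshold=0.5):
    """Map the classifier's smoking-class probability to a result label;
    the 0.5 threshold is an assumed default."""
    return "smoking" if prob_smoking >= decision_threshold else "not smoking"
```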
Optionally, after determining the smoking identification result according to the classification result, the method further includes:
when the smoking identification result indicates that smoking behavior exists, searching position information corresponding to a camera, wherein the camera is used for collecting the image to be identified;
determining a target location where the camera is located according to the position information;
searching the alarm equipment corresponding to the target location, and giving smoking alarm through the alarm equipment.
Optionally, after the capturing the target image including the hand and the mouth according to the target area, the method further includes:
searching an infrared detector corresponding to a camera, and acquiring a thermal image acquired by the infrared detector;
determining temperature distribution data in areas corresponding to the hand and the mouth of the human body according to the thermal image, and determining a target temperature according to the temperature distribution data;
comparing the target temperature with a preset temperature;
and when the target temperature is higher than the preset temperature, executing the step of carrying out secondary classification processing on the target image through a preset image classification model to obtain a classification result.
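The infrared gating above can be sketched as a simple pre-filter: take a target temperature from the thermal readings near the hand and mouth and only run the classifier when it exceeds the preset. Using the maximum as the target temperature and 50 °C as the preset are both assumptions (a lit cigarette tip is far hotter than skin):

```python
def passes_temperature_gate(region_temperatures, preset_temperature=50.0):
    """Gate the classifier on thermal data: take the max temperature in the
    hand/mouth area as the target temperature (an assumed choice) and
    require it to exceed the preset before classifying."""
    target = max(region_temperatures)
    return target > preset_temperature
```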
Optionally, before extracting the human body region from the image to be recognized, the method further includes:
acquiring a monitoring video acquired by a camera, and extracting a video clip containing a human body from the monitoring video;
and extracting a single-frame image from the video clip, and taking the single-frame image as an image to be identified.
Optionally, the extracting a single frame image from the video segment includes:
and sampling the video clip at intervals of preset image frame numbers to obtain a single-frame image.
In addition, in order to achieve the above object, the present invention further provides a smoking recognition device, including:
the human shape detection module is used for extracting a human body region from the image to be identified;
the gesture analysis module is used for analyzing the human body gesture of the human body region to obtain key point information corresponding to each part of the human body;
the distance determining module is used for determining the relative distance between the hand and the mouth of the human body according to the key point information;
the region selection module is used for extracting a target region including a hand and a mouth from the human body region when the relative distance is smaller than a preset distance threshold;
and the smoking identification module is used for identifying smoking according to the target area and a preset image classification model.
Optionally, the human shape detection module is further configured to detect the image to be recognized based on a preset human shape detection model, and determine a human body position according to a detection result; and extracting a human body region from the image to be recognized according to the human body position.
Optionally, the gesture analysis module is further configured to intercept a human body image from the image to be recognized according to the human body region; and analyzing the human body posture of the human body image based on a preset human body posture estimation model to obtain key point information corresponding to each part of the human body.
In addition, in order to achieve the above object, the present invention also provides a smoking recognition apparatus, including: a memory, a processor, and a smoking identification program stored on the memory and executable on the processor, wherein the smoking identification program, when executed by the processor, implements the steps of the smoking identification method described above.
In addition, to achieve the above object, the present invention further provides a storage medium having a smoking identification program stored thereon, wherein the smoking identification program, when executed by a processor, implements the steps of the smoking identification method as described above.
The invention provides a smoking identification method: a human body region is extracted from an image to be identified; human body posture analysis is performed on the human body region to obtain key point information corresponding to each part of the human body; the relative distance between the hand and the mouth of the human body is determined according to the key point information; when the relative distance is smaller than a preset distance threshold, a target area containing the hand and the mouth is extracted from the human body region; and smoking identification is performed according to the target area and a preset image classification model. Compared with prior-art approaches that identify smoking by detecting smoke, the method extracts the human body region in the image to be identified for human body posture analysis, obtains the key point information corresponding to each part of the human body to determine the relative distance between the hand and the mouth, determines the target region comprising the hand and the mouth, and performs smoking identification according to the target region and the preset image classification model. Smoking identification is thus performed through human behavior analysis, which overcomes the inaccuracy of smoke detection, improves the accuracy of smoking identification, and achieves a better smoking identification effect.
Drawings
Fig. 1 is a schematic structural diagram of a smoking identification device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the smoking identification method of the present invention;
FIG. 3 is a schematic diagram of an image to be identified according to an embodiment of the smoking identification method of the present invention;
FIG. 4 is a schematic flow chart of a second embodiment of the smoking identification method of the present invention;
FIG. 5 is a key point diagram of an embodiment of the smoking identification method of the present invention;
FIG. 6 is a schematic flow chart of a third embodiment of the smoking identification method of the present invention;
FIG. 7 is a schematic diagram of a coordinate system according to an embodiment of the smoking identification method of the present invention;
FIG. 8 is a schematic flow chart of a fourth embodiment of the smoking identification method of the present invention;
FIG. 9 is a schematic flow chart of a fifth embodiment of the smoking identification method of the present invention;
FIG. 10 is a schematic diagram of a target image according to an embodiment of the smoking identification method of the present invention;
fig. 11 is a functional block diagram of a first embodiment of the smoking identification device of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a smoking identification device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the smoking recognition device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may comprise a display screen (Display) and an input unit such as keys, and optionally may also comprise a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a Random Access Memory (RAM) or a non-volatile memory, such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the device configuration shown in fig. 1 does not constitute a limitation of the smoking recognition device, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a smoking identification program.
In the smoking identification device shown in fig. 1, the network interface 1004 is mainly used for connecting an external network and performing data communication with other network devices; the user interface 1003 is mainly used for connecting to a user equipment and performing data communication with the user equipment; the apparatus of the present invention calls the smoking identification program stored in the memory 1005 through the processor 1001, and executes the smoking identification method provided by the embodiment of the present invention.
Based on the hardware structure, the embodiment of the smoking identification method is provided.
Referring to fig. 2, fig. 2 is a flow chart of a first embodiment of the smoking identification method of the present invention.
In a first embodiment, the method of identifying smoking comprises the steps of:
in step S10, a human body region is extracted from the image to be recognized.
It should be noted that, the execution subject of this embodiment may be a smoking identification device, the smoking identification device may be a computer device, or may also be other devices that can achieve the same or similar functions.
It should be understood that the image to be recognized may be detected to extract a human body region from the image to be recognized, where the human body region may be a rectangular region including a human body, or may also be a circular region or a diamond-shaped region, and the present embodiment does not limit this, and the rectangular region is taken as an example in the present embodiment for description.
It is to be understood that when extracting the human body region from the image to be recognized, one or more human body regions may be extracted according to the number of human bodies, for example, when one human body exists in the image to be recognized, one human body region is extracted, and when a plurality of human bodies exist in the image to be recognized, a plurality of human body regions are extracted.
In a specific implementation, as shown in fig. 3, which is a schematic diagram of an image to be recognized, there are two human bodies in the figure, and two human body regions, namely frame A and frame B, can be extracted from the image to be recognized.
Further, in order to improve efficiency and accuracy of extracting the human body region, the step S10 includes:
detecting the image to be recognized based on a preset human shape detection model, and determining the position of a human body according to a detection result; and extracting a human body region from the image to be recognized according to the human body position.
It should be noted that the preset human shape detection model may be a model using an OpenCV human shape detection technology, and may also be another model capable of implementing the same function, which is not limited in this embodiment.
It should be understood that the image to be recognized may be detected based on a preset human shape detection model, the human body position of the human body in the image to be recognized is determined according to the detection result, and after the human body position is determined, the human body region is extracted from the image to be recognized according to the human body position and a preset selection frame, where the preset selection frame may be a rectangular selection frame.
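The "extract the human body region according to the human body position and a preset selection frame" step amounts to clamping the detected rectangle to the image bounds before cropping. A minimal sketch (the clamping behavior is an assumption; the patent does not specify how out-of-bounds boxes are handled):

```python
def extract_region(image_w, image_h, box):
    """Clamp a detected body box (x, y, w, h) to the image bounds and
    return the rectangular crop corners (x0, y0, x1, y1)."""
    x, y, w, h = box
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(image_w, x + w), min(image_h, y + h)
    return x0, y0, x1, y1
```

With an actual detector (e.g. OpenCV's built-in HOG people detector), each returned box would be passed through this clamp before cropping the human body image.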
Further, in order to more conveniently obtain the image to be recognized for image recognition and improve the image recognition effect, before the step S10, the method further includes:
acquiring a monitoring video acquired by a camera, and extracting a video clip containing a human body from the monitoring video; and extracting a single-frame image from the video clip, and taking the single-frame image as an image to be identified.
It should be noted that the usage scenario of the present embodiment may be places such as shopping malls, office buildings, schools, and residential buildings, and a plurality of cameras are provided in these places, and video monitoring is performed through these cameras. Moreover, since the cameras are all fixedly arranged at a certain place, the place corresponding to each camera can be recorded in advance.
It should be understood that, in the embodiment, since the images are identified, and the cameras generally collect videos, the surveillance videos collected by the cameras may be obtained, and a video clip including a human body is extracted from the surveillance videos, and then a single-frame image is extracted from the video clip, and the single-frame image is used as the image to be identified.
In this embodiment, the manner of extracting a single frame image from a video clip may be as follows: and sampling the video clip at intervals of preset image frame numbers to obtain a single-frame image. The way of extracting a single frame image from a video clip can also be: and sampling the video clip at preset time intervals to obtain a single-frame image. Other extraction methods are also possible, and this embodiment is not limited to this.
It should be understood that the preset number of image frames may be 30 frames, i.e., the video segment is sampled every 30 frames to obtain a single-frame image. The preset time may be 1 second, i.e., the video segment is sampled at 1-second intervals to obtain a single-frame image. The preset number of image frames and the preset time may also take other values and may be set according to actual conditions, which is not limited in this embodiment.
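Frame-interval sampling as described above is a single slice over the decoded frame sequence; at 30 fps, an interval of 30 frames is equivalent to the 1-second variant:

```python
def sample_frames(frames, interval=30):
    """Take every `interval`-th frame from a video clip's frame list
    (e.g. one frame per second for 30 fps footage)."""
    return frames[::interval]
```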
And step S20, analyzing the human body posture of the human body area to obtain key point information corresponding to each part of the human body.
It should be understood that each part of the human body may include a foot, a knee, an elbow, a hand, a shoulder, a crotch, an ear, a nose, an eye, a mouth, a chest, and other parts of the human body, and may also be other more parts of the human body.
And step S30, determining the relative distance between the hand and the mouth of the human body according to the key point information.
It can be understood that the hand key point information and the mouth key point information of the human body can be determined according to the key point information, so as to determine the hand key point and the mouth key point, and the relative distance between the hand and the mouth of the human body can be determined according to the hand key point and the mouth key point.
And step S40, when the relative distance is smaller than a preset distance threshold value, extracting a target area including a hand and a mouth from the human body area.
It should be understood that the preset distance threshold may be a preset value, and after determining the relative distance between the hand and the mouth of the human body, the relative distance may be compared with the preset distance threshold, and when the relative distance is smaller than the preset distance threshold, the target area including the hand and the mouth may be extracted from the human body area.
The target area may be a rectangular area including a hand and a mouth, or may also be an area with another shape, such as a circular area or a diamond-shaped area.
It will be appreciated that since a person has two hands, a left hand and a right hand, the relative distances may comprise a first relative distance and a second relative distance. The left-hand key point and the right-hand key point may be determined from the hand key points; the first relative distance between the left hand and the mouth is calculated from the left-hand key point and the mouth key point, and the second relative distance between the right hand and the mouth is calculated from the right-hand key point and the mouth key point. The two relative distances are then compared with the preset distance threshold: when the first relative distance is smaller than the preset distance threshold, a target area containing the left hand and the mouth is extracted from the human body area; when the second relative distance is smaller than the preset distance threshold, a target area containing the right hand and the mouth is extracted from the human body area.
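The two-hand comparison above is a pair of Euclidean distances checked independently against the threshold. A sketch (function name and return shape are illustrative):

```python
import math

def hands_near_mouth(left_hand, right_hand, mouth, threshold):
    """Compute the first (left-hand) and second (right-hand) relative
    distances to the mouth; report which hands are within the threshold."""
    d_left = math.dist(left_hand, mouth)
    d_right = math.dist(right_hand, mouth)
    return {"left": d_left < threshold, "right": d_right < threshold}
```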
And step S50, carrying out smoking identification according to the target area and a preset image classification model.
It should be understood that, because the target area includes the hand and the mouth, the target image including the hand and the mouth may be intercepted according to the target area, and the target area may be classified by the preset image classification model for smoking identification. The preset image classification model can be a neural network model, and specifically can be a deep learning image classification model.
In this embodiment, a human body region is extracted from the image to be identified; human body posture analysis is performed on the human body region to obtain key point information corresponding to each part of the human body; the relative distance between the hand and the mouth of the human body is determined according to the key point information; when the relative distance is smaller than a preset distance threshold, a target area containing the hand and the mouth is extracted from the human body region; and smoking identification is performed according to the target area and a preset image classification model. Compared with the prior-art mode of identifying smoking by detecting smoke, this embodiment extracts the human body region in the image to be identified for human body posture analysis, obtains the key point information corresponding to each part of the human body to determine the relative distance between the hand and the mouth, determines the target region comprising the hand and the mouth, and performs smoking identification according to the target region and the preset image classification model. Smoking identification is thus performed through human behavior analysis, which overcomes the inaccuracy of smoke detection, improves the accuracy of smoking identification, and achieves a better smoking identification effect.
In an embodiment, as shown in fig. 4, a second embodiment of the smoking identification method of the present invention is proposed based on the first embodiment, and the step S20 includes:
step S201, intercepting a human body image from the image to be identified according to the human body area.
It should be understood that after the human body region is determined, the human body image may be cut out from the image to be recognized according to the human body region. For example, fig. 3 contains two human body regions, frame A and frame B, and two corresponding human body images may be cut out from the image to be recognized; the human body image corresponding to frame A may be referred to as image A, and the human body image corresponding to frame B as image B.
Further, in order to more conveniently intercept the human body image, the intercepting the human body image from the image to be recognized according to the human body region includes:
determining the number of human bodies in the image to be recognized according to the human body area; judging whether the number of the human bodies is a preset number or not; and when the number of the human bodies is a preset number, intercepting a human body image from the image to be identified according to the human body area. When the number of the human bodies is not a preset number, determining the number of the images according to the number of the human bodies; and intercepting a plurality of human body images from the image to be identified according to the human body area and the number of the images.
It should be understood that the number of human bodies in the image to be recognized may be determined according to the human body region, and when there is one human body region, the number of human bodies in the image to be recognized is 1; when there are two human body regions, the number of human bodies in the image to be recognized is 2 or the like, and the number of human bodies in the image to be recognized is the same as the number of human body regions.
In specific implementation, a preset number can be set in advance, for example 1; after the number of human bodies in the image to be recognized is determined, whether the number of human bodies is 1 can be judged, and when the number of human bodies is 1, one human body image is captured from the image to be recognized. When the number of human bodies is not 1, the number of images is determined according to the number of human bodies; for example, when the number of human bodies is 2, the number of images is determined to be 2, and 2 human body images are intercepted from the image to be identified according to the human body area and the number of images.
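The counting-and-cropping logic described above can be sketched as follows. This is a minimal illustration assuming each human body region is given as an (x, y, w, h) box over a row-major image; the function and variable names are hypothetical, not from the patent.

```python
def crop_human_images(frame, body_regions):
    """Crop one human body image per detected human body region.

    frame: a row-major image (list of rows); body_regions: list of
    (x, y, w, h) boxes. The box format and all names are illustrative.
    """
    # the number of human bodies equals the number of human body regions
    num_bodies = len(body_regions)
    # whether num_bodies matches the preset number (e.g. 1) or not, the
    # number of cropped images simply follows the number of bodies
    crops = [[row[x:x + w] for row in frame[y:y + h]]
             for (x, y, w, h) in body_regions]
    assert len(crops) == num_bodies
    return crops
```

For two detected regions (as with the A and B frames above), this returns two cropped human body images.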
And S202, analyzing the human body posture of the human body image based on a preset human body posture estimation model to obtain key point information corresponding to each part of the human body.
It should be noted that the preset human body posture estimation model may be a DensePose model, an OpenPose model, an AlphaPose model, a Human Body Pose Estimation model, a DeepPose model, or other models that can implement the same function, which is not limited in this embodiment.
It should be understood that, the human body image may be subjected to human body posture analysis based on the preset human body posture estimation model to obtain key point information corresponding to each part of the human body, where the key point information records key points corresponding to each part of the human body.
In a specific implementation, taking the human body image a as an example, as shown in fig. 5, 14 key points can be obtained, where point 1 in fig. 5 is a key point corresponding to the right hand, point 2 is a key point corresponding to the right elbow, point 3 is a key point corresponding to the chest, point 4 is a key point corresponding to the left elbow, point 5 is a key point corresponding to the left hand, point 6 is a key point corresponding to the mouth, point 7 is a key point corresponding to the nose, point 8 is a key point corresponding to the right eye, point 9 is a key point corresponding to the left eye, point 10 is a key point corresponding to the crotch, point 11 is a key point corresponding to the right knee, point 12 is a key point corresponding to the right foot, point 13 is a key point corresponding to the left knee, and point 14 is a key point corresponding to the left foot. Besides the above key points, key points corresponding to other human body parts can be determined, which is not limited in this embodiment.
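The 14-point layout described above can be captured in a simple lookup table. The index-to-part mapping follows fig. 5 as described in the text, while the function and variable names are illustrative:

```python
# Index map for the 14 keypoints described above; the numbering
# (1 = right hand, ..., 14 = left foot) follows fig. 5 of the text.
KEYPOINT_NAMES = {
    1: "right_hand", 2: "right_elbow", 3: "chest", 4: "left_elbow",
    5: "left_hand", 6: "mouth", 7: "nose", 8: "right_eye", 9: "left_eye",
    10: "crotch", 11: "right_knee", 12: "right_foot", 13: "left_knee",
    14: "left_foot",
}

def keypoints_by_name(keypoint_info):
    """Re-key a {index: (x, y)} keypoint dict by part name for easier lookup."""
    return {KEYPOINT_NAMES[i]: xy for i, xy in keypoint_info.items()}
```

With such a table, later steps can address, for example, `"mouth"` instead of raw index 6.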
Further, in order to determine the key point information more accurately, the step S202 includes:
identifying the human body image based on the preset human body posture estimation model to obtain part position information corresponding to each part of the human body; and determining key point information corresponding to each part of the human body according to the part position information.
It can be understood that the human body image may be recognized based on the preset human body posture estimation model to obtain part position information corresponding to each part of the human body, for example, mouth position information corresponding to the mouth, hand position information corresponding to the hand, and the like may be determined; then, key points corresponding to each part may be determined according to the part position information, and key point information corresponding to each part of the human body may be generated according to the key points.
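One simple way to derive a key point from part position information is to take the center of the part's position box. This is an illustrative choice for the sketch below, not a method prescribed by the patent:

```python
def keypoint_from_part_box(part_box):
    """Derive a part key point as the center of its position box.

    part_box: (x, y, w, h) part position information for a body part.
    Using the box center as the key point is an illustrative assumption.
    """
    x, y, w, h = part_box
    return (x + w / 2, y + h / 2)
```

Applying this to each part's position box yields one key point per part, which can then be assembled into the key point information.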
In the embodiment, a human body image is intercepted from the image to be identified according to the human body area; and analyzing the human body posture of the human body image based on a preset human body posture estimation model to obtain key point information corresponding to each part of the human body, and accurately determining the key point information corresponding to each part of the human body in a human body posture analysis mode for subsequent smoking identification, thereby further improving the accuracy of smoking identification.
In an embodiment, as shown in fig. 6, a third embodiment of the smoking identification method of the present invention is provided based on the first embodiment or the second embodiment, and in this embodiment, the step S30 includes:
Step S301, extracting hand key point information corresponding to the hand of the human body and mouth key point information corresponding to the mouth of the human body from the key point information respectively.
It should be understood that hand keypoint information corresponding to the hand of the human body may be extracted from the keypoint information, and mouth keypoint information corresponding to the mouth of the human body may be extracted from the keypoint information.
Step S302, determining the hand key points according to the hand key point information, and determining the mouth key points according to the mouth key point information.
It is to be understood that after determining the hand keypoint information and the mouth keypoint information, the hand keypoints may be determined from the hand keypoint information and the mouth keypoints may be determined from the mouth keypoint information.
And step S303, determining the relative distance between the hand and the mouth of the human body according to the hand key points and the mouth key points.
It should be understood that as shown in FIG. 5, the hand keypoints may be determined to be point 1 and point 5, where point 1 is the right hand keypoint, point 5 is the left hand keypoint, and the mouth keypoint is point 6. The relative distance L1 between points 1 and 6, and the relative distance L2 between points 5 and 6, respectively, can then be calculated.
It is understood that the relative distance may be compared with a preset distance threshold, and when the relative distance is smaller than the preset distance threshold, a target region including the hand and the mouth is extracted from the human body region.
In a specific implementation, the preset distance threshold may be set to L0, and L1 and L2 are compared with L0 respectively; if L1 is larger than L0 and L2 is smaller than L0, the target area including the left hand and the mouth can be extracted from the human body area. As shown in fig. 3, the C frame in fig. 3 is the target area.
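The distance comparison above can be sketched as follows, assuming key points are given as (x, y) pairs keyed by hypothetical part names and using the Euclidean distance; the threshold L0 is application-tuned:

```python
import math

def hand_mouth_candidates(keypoints, threshold):
    """Return the hands whose distance to the mouth is below the threshold.

    keypoints: dict mapping 'right_hand', 'left_hand', 'mouth' -> (x, y).
    The key names and threshold value are illustrative assumptions.
    """
    mouth = keypoints["mouth"]
    close = []
    for hand in ("right_hand", "left_hand"):
        x, y = keypoints[hand]
        dist = math.hypot(x - mouth[0], y - mouth[1])  # Euclidean distance
        if dist < threshold:                           # only near hands qualify
            close.append(hand)
    return close
```

In the example above, only the left hand (distance L2 < L0) would be returned, so only one target area containing the left hand and the mouth is extracted.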
Further, in order to improve efficiency of calculating a relative distance between the hand and the mouth of the human body, the step S303 includes:
selecting a point in the human body image as a coordinate origin, and establishing a coordinate system according to the coordinate origin; determining hand coordinates of the hand keypoints in the coordinate system, and determining mouth coordinates of the mouth keypoints in the coordinate system; and determining the relative distance between the hand and the mouth of the human body according to the hand coordinates and the mouth coordinates.
It is understood that a point may be selected from the human body image as a coordinate origin, and a coordinate system may be established according to the coordinate origin, so that the hand coordinates of the hand key point and the mouth coordinates of the mouth key point in the coordinate system may be determined, and the relative distance between the hand and the mouth of the human body may be determined according to the hand coordinates and the mouth coordinates.
In a specific implementation, as shown in fig. 7, the top left vertex of the human body image may be used as the origin of coordinates O, and a coordinate system may be established as shown in fig. 7. The right-hand coordinates of point 1 in the coordinate system, the left-hand coordinates of point 5 in the coordinate system, and the mouth coordinates of point 6 in the coordinate system may be determined, respectively, and the relative distance between the hand and the mouth calculated from the above coordinates.
Further, in order to improve the accuracy of the data, the determining the relative distance between the hand and the mouth of the human body according to the hand coordinates and the mouth coordinates includes:
acquiring image size information of the human body image, and determining the image width and the image height according to the image size information; normalizing the hand coordinates and the mouth coordinates according to the image width and the image height to obtain target hand coordinates and target mouth coordinates; and determining the relative distance between the hand and the mouth of the human body according to the target hand coordinates and the target mouth coordinates.
In a specific implementation, image size information of a human body image may be acquired, an image width H1 and an image height H2 may be determined according to the image size information, and normalization processing may be performed on hand coordinates and mouth coordinates according to the image width and the image height, for example, when an original coordinate of a point 1 is (1, 4), an original coordinate of a point 5 is (6, 2), an original coordinate of a point 6 is (5, 3), and both H1 and H2 are 2, normalization processing may be performed on the original coordinates of the point 1, the point 5, and the point 6 according to H1 and H2, respectively, to obtain target coordinates of the point 1 as (0.5, 2), a target coordinate of the point 5 as (3, 1), and a target coordinate of the point 6 as (2.5, 1.5).
It can be understood that, after the target hand coordinates and the target mouth coordinates are obtained, the relative distance between the hand and the mouth of the human body can be determined according to the target hand coordinates and the target mouth coordinates.
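The normalization and distance computation can be sketched as below. Dividing by the image width and height is one common normalization (an assumption here), and the function names are illustrative:

```python
import math

def normalize(point, width, height):
    """Scale a pixel coordinate by the image width and height."""
    x, y = point
    return (x / width, y / height)

def relative_distance(hand, mouth, width, height):
    """Euclidean distance between normalized hand and mouth coordinates."""
    hx, hy = normalize(hand, width, height)
    mx, my = normalize(mouth, width, height)
    return math.hypot(hx - mx, hy - my)
```

For the worked example in the text (H1 = H2 = 2), `normalize((1, 4), 2, 2)` yields (0.5, 2.0), matching the target coordinates of point 1.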
In the embodiment, hand key point information corresponding to a hand of a human body and mouth key point information corresponding to a mouth of the human body are extracted from the key point information respectively; determining hand key points according to the hand key point information, and determining mouth key points according to the mouth key point information; the relative distance between the hand and the mouth of the human body is determined according to the hand key points and the mouth key points, so that the hand key point information and the mouth key point information are extracted to determine the hand key points and the mouth key points, the relative distance between the hand and the mouth of the human body can be accurately determined according to the hand key points and the mouth key points, the subsequent smoking identification is used, and the accuracy of smoking identification is further improved.
In an embodiment, as shown in fig. 8, a fourth embodiment of the smoking identification method according to the present invention is provided based on the above embodiment, and in this embodiment, as explained based on the first embodiment, before the step S50, the method further includes:
in step S01, a smoking sample image and a non-smoking sample image are acquired.
It should be understood that a plurality of smoking sample images and non-smoking sample images may be acquired in advance before smoking identification is performed. In this embodiment, the number of the smoking sample images and the number of the non-smoking sample images are not limited.
And step S02, training a preset neural network model according to the smoking sample image and the non-smoking sample image to obtain a preset image classification model.
It can be understood that the preset neural network model can be trained through a large number of smoking sample images and non-smoking sample images, and a preset image classification model for smoking recognition is obtained.
Further, the step S02 includes:
dividing the smoking sample image into a smoking training image and a smoking test image, and dividing the non-smoking sample image into a non-smoking training image and a non-smoking test image; generating a training data set according to the smoking training image and the non-smoking training image, and generating a test data set according to the smoking test image and the non-smoking test image; training a preset neural network model according to the training data set to obtain an initial image classification model; testing the initial image classification model through the test data set; and when the test is passed, taking the initial image classification model as a preset image classification model.
It should be noted that the preset neural network model may be an untrained image classification model, and in order to apply the image classification model to the smoking identification scheme of the present application, the image classification model needs to be trained and tested first to obtain the preset image classification model for subsequent smoking identification.
It should be understood that the sample image may be divided into a training data set and a testing data set, the preset neural network model is trained through the training data set to obtain an initial image classification model, and then the initial image classification model is tested to obtain the preset image classification model.
In a specific implementation, the smoking sample image may be divided into a smoking training image and a smoking test image, the non-smoking sample image may be divided into a non-smoking training image and a non-smoking test image, a training data set may be generated according to the smoking training image and the non-smoking training image, and a test data set may be generated according to the smoking test image and the non-smoking test image. The method for dividing the training image and the test image may be to use 70% of the sample image as the training image and 30% of the sample image as the test image, or may be other dividing methods, which is not limited in this embodiment.
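The 70/30 split above can be sketched per class, so that both smoking and non-smoking images appear in both the training and test data sets; the ratio, seed, and names are illustrative defaults:

```python
import random

def split_dataset(smoking, non_smoking, train_ratio=0.7, seed=0):
    """Split each class train/test by ratio, then merge with labels.

    `smoking` / `non_smoking` are lists of sample images (e.g. file paths).
    Splitting per class keeps both classes represented in both sets.
    """
    rng = random.Random(seed)
    train, test = [], []
    for label, samples in ((1, list(smoking)), (0, list(non_smoking))):
        rng.shuffle(samples)                       # randomize before splitting
        cut = int(len(samples) * train_ratio)      # e.g. 70% for training
        train += [(s, label) for s in samples[:cut]]
        test += [(s, label) for s in samples[cut:]]
    return train, test
```

The training set then drives model training, and the held-out test set checks whether the initial image classification model passes before it is taken as the preset image classification model.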
It can be understood that, by means of the training model, the initial image classification model for classifying smoking and non-smoking can be obtained by training the preset neural network model according to the training data set, in order to further improve the classification accuracy, the initial image classification model can be tested according to the test data set, and when the test is passed, the initial image classification model is used as the preset image classification model.
In the embodiment, the smoking sample image and the non-smoking sample image are obtained, the preset neural network model is trained according to the smoking sample image and the non-smoking sample image to obtain the preset image classification model, so that the preset neural network model is trained according to the training data set in a mode of training the model to obtain the initial image classification model for smoking and non-smoking classification, the initial image classification model is tested according to the testing data set, and when the test is passed, the initial image classification model is used as the preset image classification model, so that the classification accuracy of the model is improved.
In an embodiment, as shown in fig. 9, a fifth embodiment of the smoking identification method of the present invention is provided based on the above embodiment, and in this embodiment, the description is made based on the first embodiment, and the step S50 includes:
and step S501, intercepting a target image containing a hand and a mouth according to the target area.
It will be appreciated that after the target area is determined, a target image containing the hand and mouth may be captured based on the target area.
In a specific implementation, the target image including the left hand and the mouth may be cut out according to the C frame in fig. 3; specifically, as shown in fig. 10, fig. 10 is the cut-out target image including the left hand and the mouth.
Step S502, performing binary classification processing on the target image through a preset image classification model to obtain a classification result.
It should be understood that the preset image classification model has two output results, namely smoking and non-smoking, and the target image can be input into the preset image classification model for binary classification processing to obtain the classification result.
And S503, determining a smoking identification result according to the classification result.
It is to be understood that after determining the classification result, a smoking identification result may be determined based on the classification result. In a specific implementation, when the classification result is smoking, the smoking identification result is that smoking behavior exists; and when the classification result is no smoking, the smoking identification result is that no smoking behavior exists.
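The mapping from classification result to smoking identification result can be sketched as a small function; the two-score output layout and the result strings are assumptions for illustration:

```python
def smoking_result(scores):
    """Map a two-way classifier output to a smoking identification result.

    scores: (smoking_score, non_smoking_score) from a hypothetical model;
    the larger score decides the class, which decides the result string.
    """
    classification = "smoking" if scores[0] > scores[1] else "non-smoking"
    if classification == "smoking":
        return "smoking behavior exists"
    return "no smoking behavior exists"
```

The result string can then drive downstream actions such as the smoking alarm described next.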
Further, in order to give a targeted smoking alarm and prompt the user when smoking behavior is identified, after step S503, the method further includes:
when the smoking identification result indicates that smoking behavior exists, searching position information corresponding to a camera, wherein the camera is used for collecting the image to be identified; determining a target location where the camera is located according to the position information; searching the alarm equipment corresponding to the target location, and giving smoking alarm through the alarm equipment.
It should be understood that since a plurality of cameras are provided, each camera is disposed in a different area, for example, camera a is disposed in area a, camera B is disposed in area B, camera C is disposed in area C, and so on. When the smoking identification result indicates that smoking behavior exists, the position information corresponding to the camera for collecting the image to be identified can be searched, for example, when the camera for collecting the image to be identified is the camera A, the target location can be determined to be the area A according to the position information of the camera A.
It can be understood that, in order to achieve a better alarm effect, an alarm device may be provided in each area, wherein the alarm device may be an audible alarm device that emits an alarm sound, for example, a buzzer sound; it is also possible to use a light alarm device for light display, for example flashing a red light, when alarming. Other types of alarm devices besides the above-mentioned alarm devices are also possible, and the present embodiment is not limited thereto.
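The camera-to-location-to-alarm lookup described above can be sketched with two tables; all identifiers are hypothetical:

```python
# Illustrative lookup tables: camera id -> area, area -> alarm device.
CAMERA_LOCATIONS = {"camera_a": "area_a", "camera_b": "area_b", "camera_c": "area_c"}
ALARM_DEVICES = {"area_a": "buzzer_a", "area_b": "buzzer_b", "area_c": "buzzer_c"}

def trigger_alarm(camera_id, smoking_detected):
    """Find the alarm device for the camera's area; return which one fired."""
    if not smoking_detected:
        return None
    location = CAMERA_LOCATIONS[camera_id]   # position info for the camera
    return ALARM_DEVICES[location]           # alarm device at that location
```

In a deployment, the returned device identifier would be used to drive the sound or light alarm in the matching area.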
Further, since the hand and the mouth are also very close together when a person covers the mouth, wipes the mouth, or eats and drinks, image detection may be inaccurate and too many target images may be generated. To avoid these situations and improve the efficiency of smoking identification, after step S501, the method further includes:
searching an infrared detector corresponding to a camera, and acquiring a thermal image acquired by the infrared detector; determining temperature distribution data in areas corresponding to the hand and the mouth of the human body according to the thermal image, and determining a target temperature according to the temperature distribution data; comparing the target temperature with a preset temperature; and when the target temperature is higher than the preset temperature, executing the step of performing binary classification processing on the target image through a preset image classification model to obtain a classification result.
It should be noted that, because a lit cigarette burns at a temperature much higher than that of the human body and the surrounding air, an infrared detector can be combined to perform temperature detection in order to improve the accuracy of the detection.
It will be appreciated that an infrared detector may be provided adjacent to the camera in each area, with the temperature of the surrounding environment being detected by the infrared detector and a thermal image being generated. For example, an infrared detector A may be disposed beside the camera A in area A, an infrared detector B may be disposed beside the camera B in area B, and an infrared detector C may be disposed beside the camera C in area C.
In specific implementation, when the camera A is used for collecting the image to be identified, the infrared detector A corresponding to the camera A can be searched, the thermal image collected by the infrared detector A is obtained, the temperature distribution data in the area corresponding to the hand and the mouth of the human body can be determined according to the thermal image, and the target temperature can be determined according to the temperature distribution data. The target temperature may be an average temperature in the region, or may be a maximum temperature in the region, which is not limited in this embodiment.
It is understood that the preset temperature may be a preset cigarette combustion temperature, and the target temperature may be compared with the preset temperature; when the target temperature is higher than the preset temperature, it indicates that smoking behavior is highly likely, and the step of performing binary classification processing on the target image through a preset image classification model to obtain a classification result may be performed.
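The temperature gate can be sketched as follows; whether the target temperature is the maximum or the mean of the region is configurable, as the text notes, and the threshold value in the usage below is an assumption:

```python
def should_classify(thermal_region, preset_temperature, use_max=True):
    """Gate the image classifier on thermal data from the hand-mouth region.

    thermal_region: temperature readings inside the target area; the
    target temperature may be the maximum or the mean of the region.
    """
    if use_max:
        target = max(thermal_region)
    else:
        target = sum(thermal_region) / len(thermal_region)
    # only run the (more expensive) image classifier when the region is
    # hotter than the preset cigarette combustion threshold
    return target > preset_temperature
```

A lit cigarette tip produces a strong local maximum, so the max-based gate fires even when the region's average stays near body temperature.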
In the embodiment, a target image containing a hand and a mouth is intercepted according to the target area; performing secondary classification processing on the target image through a preset image classification model to obtain a classification result; and determining a smoking identification result according to the classification result, so as to intercept a target image which comprises a hand and a mouth and corresponds to the target area, accurately classify the target image through a preset image classification model, obtain a classification result, and further determine the smoking identification result according to the classification result, thereby further improving the accuracy of the smoking identification result.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium stores a smoking identification program, and the smoking identification program, when executed by a processor, implements the steps of the smoking identification method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
In addition, referring to fig. 11, an embodiment of the present invention further provides a smoking identification device, where the smoking identification device includes:
and the human shape detection module 10 is used for extracting a human body region from the image to be identified.
And the posture analysis module 20 is configured to perform human posture analysis on the human body region to obtain key point information corresponding to each part of the human body.
And a distance determining module 30 for determining a relative distance between the hand and the mouth of the human body according to the key point information.
And the region selection module 40 is configured to extract a target region including a hand and a mouth from the human body region when the relative distance is smaller than a preset distance threshold.
And the smoking identification module 50 is used for identifying smoking according to the target area and a preset image classification model.
The human body region is extracted from the image to be identified in the embodiment; analyzing the human body posture of the human body region to obtain key point information corresponding to each part of the human body; determining the relative distance between the hand and the mouth of the human body according to the key point information; when the relative distance is smaller than a preset distance threshold value, extracting a target area containing a hand and a mouth from the human body area; and smoking identification is carried out according to the target area and a preset image classification model. Compared with the mode of smoking identification by detecting smoke in the prior art, in the embodiment, the human body region in the image to be identified is extracted to perform human body posture analysis, the key point information corresponding to each part of the human body is obtained to determine the relative distance between the hand and the mouth, and then the target region comprising the hand and the mouth is determined, and smoking identification is performed according to the target region and the preset image classification model, so that smoking identification is performed by a human body behavior analysis mode, the defect of inaccurate smoke detection is overcome, the accuracy of smoking identification is improved, and a better smoking identification effect is achieved.
Other embodiments or specific implementation methods of the smoking identification device of the present invention may refer to the above embodiments, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is a better implementation. Based on such understanding, the technical solution of the present invention, or the portions contributing to the prior art, may be embodied in the form of a software product, which is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, or optical disc) as described above and includes instructions for enabling an intelligent device (such as a mobile phone, a computer, a smoking recognition device, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
The invention discloses A1, a smoking identification method, which comprises the following steps:
extracting a human body region from an image to be recognized;
analyzing the human body posture of the human body region to obtain key point information corresponding to each part of the human body;
determining the relative distance between the hand and the mouth of the human body according to the key point information;
when the relative distance is smaller than a preset distance threshold value, extracting a target area containing a hand and a mouth from the human body area;
and smoking identification is carried out according to the target area and a preset image classification model.
A2, the smoking identification method according to A1, wherein the extracting of the human body region from the image to be identified comprises the following steps:
detecting the image to be recognized based on a preset human shape detection model, and determining the position of a human body according to a detection result;
and extracting a human body region from the image to be recognized according to the human body position.
A3, the smoking identification method according to a1, wherein the analyzing the human body posture of the human body area to obtain the key point information corresponding to each part of the human body includes:
intercepting a human body image from the image to be identified according to the human body area;
and analyzing the human body posture of the human body image based on a preset human body posture estimation model to obtain key point information corresponding to each part of the human body.
A4, the smoking identification method according to A3, wherein the analyzing the human body posture based on the preset human body posture estimation model to obtain the key point information corresponding to each part of the human body, comprises:
identifying the human body image based on the preset human body posture estimation model to obtain part position information corresponding to each part of the human body;
and determining key point information corresponding to each part of the human body according to the part position information.
A5, the smoking identification method according to A1, wherein the determining the relative distance between the hand and the mouth of the human body according to the key point information comprises:
extracting hand key point information corresponding to a hand of a human body and mouth key point information corresponding to a mouth of the human body from the key point information, respectively;
determining hand key points according to the hand key point information, and determining mouth key points according to the mouth key point information;
and determining the relative distance between the hand and the mouth of the human body according to the hand key points and the mouth key points.
A6, the method for identifying smoking according to A5, wherein the determining the relative distance between the hand and the mouth of the human body according to the hand key point and the mouth key point comprises:
selecting a point in the human body image as a coordinate origin, and establishing a coordinate system according to the coordinate origin;
determining hand coordinates of the hand keypoints in the coordinate system, and determining mouth coordinates of the mouth keypoints in the coordinate system;
and determining the relative distance between the hand and the mouth of the human body according to the hand coordinates and the mouth coordinates.
A7, the method for identifying smoking according to A6, wherein the determining the relative distance between the hand and the mouth of the human body according to the hand coordinate and the mouth coordinate comprises:
acquiring image size information of the human body image, and determining the image width and the image height according to the image size information;
normalizing the hand coordinates and the mouth coordinates according to the image width and the image height to obtain target hand coordinates and target mouth coordinates;
and determining the relative distance between the hand and the mouth of the human body according to the target hand coordinates and the target mouth coordinates.
A8, the smoking identification method according to A3, wherein the intercepting of the human body image from the image to be identified according to the human body area comprises:
determining the number of human bodies in the image to be recognized according to the human body area;
judging whether the number of the human bodies is a preset number or not;
and when the number of the human bodies is a preset number, intercepting a human body image from the image to be identified according to the human body area.
A9, the method for identifying smoking according to A8, further comprising, after the judging whether the number of human bodies is a preset number:
when the number of the human bodies is not a preset number, determining the number of the images according to the number of the human bodies;
and intercepting a plurality of human body images from the image to be identified according to the human body area and the number of the images.
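The multi-person case in A9 can be sketched as cropping one sub-image per detected human region, so the number of images follows the number of bodies; the `(x1, y1, x2, y2)` box format is an assumption:

```python
import numpy as np

def crop_human_images(frame, boxes):
    """Crop one sub-image per detected human region.
    Each box is (x1, y1, x2, y2) in pixel coordinates."""
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append(frame[y1:y2, x1:x2])
    return crops

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in image
boxes = [(10, 20, 110, 220), (300, 50, 420, 300)]
crops = crop_human_images(frame, boxes)
```

Each crop is then analyzed independently by the pose estimation step.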
A10, the method for identifying smoking according to any one of A1 to A9, further comprising, before the identifying smoking according to the target area and a preset image classification model:
acquiring smoking sample images and non-smoking sample images;
and training a preset neural network model according to the smoking sample image and the non-smoking sample image to obtain a preset image classification model.
A11, the method for identifying smoking according to A10, wherein the training a preset neural network model according to the smoking sample image and the non-smoking sample image to obtain a preset image classification model comprises:
dividing the smoking sample image into a smoking training image and a smoking test image, and dividing the non-smoking sample image into a non-smoking training image and a non-smoking test image;
generating a training data set according to the smoking training image and the non-smoking training image, and generating a test data set according to the smoking test image and the non-smoking test image;
training a preset neural network model according to the training data set to obtain an initial image classification model;
testing the initial image classification model through the test data set;
and when the test is passed, taking the initial image classification model as a preset image classification model.
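A sketch of the dataset preparation in A11, assuming an 80/20 train/test ratio (the ratio is not specified in the text) and file paths as stand-ins for sample images:

```python
import random

def split_samples(images, train_ratio=0.8, seed=0):
    """Shuffle and split one class's sample images into
    training and test subsets."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

smoking = [f"smoke_{i}.jpg" for i in range(100)]      # hypothetical paths
non_smoking = [f"clean_{i}.jpg" for i in range(100)]  # hypothetical paths

smoke_train, smoke_test = split_samples(smoking)
clean_train, clean_test = split_samples(non_smoking)

# Labelled training and test sets for the binary classifier (1 = smoking).
train_set = [(p, 1) for p in smoke_train] + [(p, 0) for p in clean_train]
test_set = [(p, 1) for p in smoke_test] + [(p, 0) for p in clean_test]
```

The training set trains the neural network into an initial classification model; the held-out test set then decides whether it is accepted as the preset model.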
A12, the method for identifying smoking according to any one of A1 to A9, wherein the identifying smoking according to the target area and a preset image classification model comprises:
intercepting a target image containing a hand and a mouth according to the target area;
performing binary classification processing on the target image through a preset image classification model to obtain a classification result;
and determining a smoking identification result according to the classification result.
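The binary classification in A12 can be sketched as a two-way softmax over the classifier's output logits followed by a threshold; the `(not_smoking, smoking)` logit layout and the 0.5 threshold are assumptions:

```python
import math

def classify(logits, threshold=0.5):
    """Two-way softmax over (not_smoking, smoking) logits, then
    threshold the smoking probability to get the result."""
    e0, e1 = math.exp(logits[0]), math.exp(logits[1])
    p_smoking = e1 / (e0 + e1)
    label = "smoking" if p_smoking >= threshold else "not_smoking"
    return label, p_smoking

# Hypothetical model output for one target image.
result, p = classify((-1.0, 2.0))
```

The smoking identification result is then just this label, optionally with its confidence.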
A13, the method for identifying smoking according to A12, further comprising the following steps:
when the smoking identification result indicates that smoking behavior exists, searching position information corresponding to a camera, wherein the camera is used for collecting the image to be identified;
determining a target location where the camera is located according to the position information;
searching the alarm equipment corresponding to the target location, and giving smoking alarm through the alarm equipment.
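The alarm path in A13 amounts to two lookups, camera to location and location to alarm device; the registry names and IDs below are hypothetical:

```python
# Hypothetical camera/location/alarm registries.
CAMERA_LOCATIONS = {"cam_01": "warehouse_a", "cam_02": "lobby"}
ALARM_DEVICES = {"warehouse_a": "alarm_07", "lobby": "alarm_02"}

def alarm_for(camera_id, smoking_detected):
    """Look up the camera's location and the alarm device at that
    location; returns the device to trigger, or None."""
    if not smoking_detected:
        return None
    location = CAMERA_LOCATIONS.get(camera_id)
    return ALARM_DEVICES.get(location)

device = alarm_for("cam_01", True)
```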
A14, the method for identifying smoking according to A12, further comprising, after the intercepting a target image containing a hand and a mouth according to the target area:
searching an infrared detector corresponding to a camera, and acquiring a thermal image acquired by the infrared detector;
determining temperature distribution data in areas corresponding to the hand and the mouth of the human body according to the thermal image, and determining a target temperature according to the temperature distribution data;
comparing the target temperature with a preset temperature;
and when the target temperature is higher than the preset temperature, executing the step of performing binary classification processing on the target image through a preset image classification model to obtain a classification result.
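The temperature gate in A14 can be sketched as taking the hottest reading inside the hand/mouth region of the thermal image and comparing it with the preset temperature (a lit cigarette tip is a localized hot spot); the 50 °C default and the region format are assumptions:

```python
import numpy as np

def passes_temperature_gate(thermal, region, preset_temp=50.0):
    """Return True when the hottest point inside the hand/mouth
    region of the thermal image exceeds the preset temperature."""
    x1, y1, x2, y2 = region
    target_temp = thermal[y1:y2, x1:x2].max()
    return bool(target_temp > preset_temp)

thermal = np.full((120, 160), 25.0)  # ambient ~25 °C
thermal[40, 80] = 300.0              # hot spot (e.g. cigarette tip)
gate = passes_temperature_gate(thermal, (60, 20, 100, 60))
```

Only frames that pass this gate are sent to the (more expensive) image classifier, which is the point of the pre-check.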
A15, the method for identifying smoking according to any one of A1 to A9, further comprising, before the extracting a human body region from the image to be identified:
acquiring a monitoring video acquired by a camera, and extracting a video clip containing a human body from the monitoring video;
and extracting a single-frame image from the video clip, and taking the single-frame image as an image to be identified.
A16, the method for identifying smoking according to A15, wherein the extracting a single frame image from the video segment includes:
and sampling the video clip at intervals of preset image frame numbers to obtain a single-frame image.
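The interval sampling in A16 reduces to taking every Nth frame of the clip; a 25-frame interval (roughly one sample per second at 25 fps) is an illustrative choice, not a value given in the text:

```python
def sample_frames(frames, interval):
    """Take one frame every `interval` frames from a video clip."""
    return frames[::interval]

frames = list(range(100))  # stand-in for 100 decoded video frames
samples = sample_frames(frames, 25)
```

Each sampled single-frame image then serves as an image to be identified.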
The invention also discloses B17, a smoking identification device, the smoking identification device comprising:
the human shape detection module is used for extracting a human body region from the image to be identified;
the posture analysis module is used for analyzing the human body posture of the human body region to obtain key point information corresponding to each part of the human body;
the distance determining module is used for determining the relative distance between the hand and the mouth of the human body according to the key point information;
the region selection module is used for extracting a target region including a hand and a mouth from the human body region when the relative distance is smaller than a preset distance threshold;
and the smoking identification module is used for identifying smoking according to the target area and a preset image classification model.
B18, the smoking identification device according to B17, wherein the human shape detection module is further configured to detect the image to be identified based on a preset human shape detection model, determine the position of the human body according to the detection result, and extract a human body region from the image to be recognized according to the human body position.
The invention also discloses C19, a smoking identification device, the smoking identification device comprising: a memory, a processor, and a smoking identification program stored on the memory and executable on the processor, wherein the smoking identification program, when executed by the processor, implements the steps of the smoking identification method described above.
The invention also discloses D20, a storage medium, wherein a smoking identification program is stored on the storage medium, and the smoking identification program, when executed by a processor, implements the steps of the smoking identification method described above.

Claims (10)

1. A smoking identification method is characterized by comprising the following steps:
extracting a human body region from an image to be recognized;
analyzing the human body posture of the human body region to obtain key point information corresponding to each part of the human body;
determining the relative distance between the hand and the mouth of the human body according to the key point information;
when the relative distance is smaller than a preset distance threshold value, extracting a target area containing a hand and a mouth from the human body area;
and smoking identification is carried out according to the target area and a preset image classification model.
2. The method for identifying smoking according to claim 1, wherein the extracting a human body region from the image to be identified comprises:
detecting the image to be recognized based on a preset human shape detection model, and determining the position of a human body according to a detection result;
and extracting a human body region from the image to be recognized according to the human body position.
3. The method for identifying smoking according to claim 1, wherein the analyzing the human body posture of the human body region to obtain the key point information corresponding to each part of the human body comprises:
intercepting a human body image from the image to be identified according to the human body area;
and analyzing the human body posture of the human body image based on a preset human body posture estimation model to obtain key point information corresponding to each part of the human body.
4. The smoking identification method of claim 3, wherein the analyzing the body pose of the body image based on a preset body pose estimation model to obtain the key point information corresponding to each part of the body comprises:
identifying the human body image based on the preset human body posture estimation model to obtain part position information corresponding to each part of the human body;
and determining key point information corresponding to each part of the human body according to the part position information.
5. The method of identifying smoking according to claim 1, wherein said determining a relative distance between a hand and a mouth of a human body from said keypoint information comprises:
extracting hand key point information corresponding to a hand of a human body and mouth key point information corresponding to a mouth of the human body from the key point information, respectively;
determining hand key points according to the hand key point information, and determining mouth key points according to the mouth key point information;
and determining the relative distance between the hand and the mouth of the human body according to the hand key points and the mouth key points.
6. The method of identifying smoking according to claim 5, wherein said determining a relative distance between a hand and a mouth of a human body from the hand keypoints and the mouth keypoints comprises:
selecting a point in the human body image as a coordinate origin, and establishing a coordinate system according to the coordinate origin;
determining hand coordinates of the hand keypoints in the coordinate system, and determining mouth coordinates of the mouth keypoints in the coordinate system;
and determining the relative distance between the hand and the mouth of the human body according to the hand coordinates and the mouth coordinates.
7. The method of identifying smoking according to claim 6, wherein said determining a relative distance between the hand and the mouth of the human body from the hand coordinates and the mouth coordinates comprises:
acquiring image size information of the human body image, and determining the image width and the image height according to the image size information;
normalizing the hand coordinates and the mouth coordinates according to the image width and the image height to obtain target hand coordinates and target mouth coordinates;
and determining the relative distance between the hand and the mouth of the human body according to the target hand coordinates and the target mouth coordinates.
8. A smoking identification device, comprising:
the human shape detection module is used for extracting a human body region from the image to be identified;
the posture analysis module is used for analyzing the human body posture of the human body region to obtain key point information corresponding to each part of the human body;
the distance determining module is used for determining the relative distance between the hand and the mouth of the human body according to the key point information;
the region selection module is used for extracting a target region including a hand and a mouth from the human body region when the relative distance is smaller than a preset distance threshold;
and the smoking identification module is used for identifying smoking according to the target area and a preset image classification model.
9. A smoking identification device, characterized in that the smoking identification device comprises: memory, a processor and a smoking identification program stored on the memory and executable on the processor, the smoking identification program when executed by the processor implementing the steps of the smoking identification method according to any one of claims 1 to 7.
10. A storage medium having a smoking identification program stored thereon, the smoking identification program when executed by a processor implementing the steps of the smoking identification method according to any one of claims 1 to 7.
CN202011143609.6A 2020-10-22 2020-10-22 Smoking identification method, device, equipment and storage medium Pending CN114463779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011143609.6A CN114463779A (en) 2020-10-22 2020-10-22 Smoking identification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011143609.6A CN114463779A (en) 2020-10-22 2020-10-22 Smoking identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114463779A true CN114463779A (en) 2022-05-10

Family

ID=81404076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011143609.6A Pending CN114463779A (en) 2020-10-22 2020-10-22 Smoking identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114463779A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115440015A (en) * 2022-08-25 2022-12-06 深圳泰豪信息技术有限公司 Video analysis method and system capable of being intelligently and safely controlled
CN115440015B (en) * 2022-08-25 2023-08-11 深圳泰豪信息技术有限公司 Video analysis method and system capable of being intelligently and safely controlled
CN116563673A (en) * 2023-07-10 2023-08-08 浙江华诺康科技有限公司 Smoke training data generation method and device and computer equipment
CN116563673B (en) * 2023-07-10 2023-12-12 浙江华诺康科技有限公司 Smoke training data generation method and device and computer equipment

Similar Documents

Publication Publication Date Title
CN108734125B (en) Smoking behavior identification method for open space
Lestari et al. Fire hotspots detection system on CCTV videos using you only look once (YOLO) method and tiny YOLO model for high buildings evacuation
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN110610127B (en) Face recognition method and device, storage medium and electronic equipment
CN111126153B (en) Safety monitoring method, system, server and storage medium based on deep learning
CN109887234B (en) Method and device for preventing children from getting lost, electronic equipment and storage medium
CN110674680B (en) Living body identification method, living body identification device and storage medium
CN109766755A (en) Face identification method and Related product
CN107948585A (en) Video recording labeling method, device and computer-readable recording medium
CN114463776A (en) Fall identification method, device, equipment and storage medium
CN111539358A (en) Working state determination method and device, computer equipment and storage medium
WO2020167155A1 (en) Method and system for detecting troubling events during interaction with a self-service device
CN114463779A (en) Smoking identification method, device, equipment and storage medium
CN111753587B (en) Ground falling detection method and device
CN114445768A (en) Target identification method and device, electronic equipment and storage medium
CN111832450B (en) Knife holding detection method based on image recognition
JP5552946B2 (en) Face image sample collection device, face image sample collection method, program
JP7075460B2 (en) Information recognition system and its method
CN113627321A (en) Image identification method and device based on artificial intelligence and computer equipment
KR102316799B1 (en) Method and system for recognizing object and environment
CN112149527A (en) Wearable device detection method and device, electronic device and storage medium
CN111191575B (en) Naked flame detection method and system based on flame jumping modeling
CN114387557A (en) Deep learning-based method and system for detecting smoking and calling of gas station
CN113469138A (en) Object detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 100020 1765, 15th floor, 17th floor, building 3, No.10, Jiuxianqiao Road, Chaoyang District, Beijing

Applicant after: Beijing 360 Zhiling Technology Co.,Ltd.

Address before: 100020 1765, 15th floor, 17th floor, building 3, No.10, Jiuxianqiao Road, Chaoyang District, Beijing

Applicant before: Beijing Hongxiang Technical Service Co.,Ltd.

Country or region before: China
