
CN115862059A - Enterprise employee fatigue monitoring method and device based on machine learning - Google Patents

Enterprise employee fatigue monitoring method and device based on machine learning

Info

Publication number
CN115862059A
Authority
CN
China
Prior art keywords
image
fatigue
information
key points
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211465943.2A
Other languages
Chinese (zh)
Inventor
孙浩鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd
Priority to CN202211465943.2A
Publication of CN115862059A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a machine-learning-based enterprise employee fatigue monitoring method, which comprises: collecting employee office image information, extracting frames from the office image data at a preset frame interval, and applying image enhancement to the image information to obtain a fatigue detection data frame set; inputting the fatigue detection data frame set into a YOLOv5 network model for inference to obtain the contour information of at least one person, marking a contour identification rectangular frame, and determining a plurality of human body key points based on the contour information; determining the association states and the durations of at least three target key points based on the plurality of human body key points; and when the association states of the at least three target key points meet a given condition for longer than a preset duration, marking the fatigue detection data frame to be processed as a fatigue working behavior data frame. This strengthens the monitoring and early-warning measures for detecting whether an employee may have suffered sudden death and shortens the time needed to discover an employee suspected of such an event.

Description

Enterprise employee fatigue monitoring method and device based on machine learning
Technical Field
The application relates to the technical field of enterprise health management, in particular to a method and a device for monitoring fatigue of enterprise employees based on machine learning.
Background
As living standards improve, most people no longer perform prolonged physical labor and instead do non-physical work in office environments. At the same time, excessive work pressure erodes the psychological and physical health of workers: after sitting for an entire day, employees are often in a state of prolonged fatigue, and many young people develop conditions caused by long-term sitting, such as cervical spondylosis, lumbar muscle strain, lower-limb venous embolism, and even sudden death. For example, a clerk on duty at a bank counter sits at the counter almost the entire shift to serve customers and must remain at the post even when the body is already in an extremely uncomfortable state.
Research shows that excessively high work intensity negatively affects employee efficiency, readily causes occupational fatigue within a team, raises the rate of sudden death among employees, and poses serious illness risks to physically weaker employees. When an employee falls seriously ill at work, the enterprise may face a series of consequences such as loss of personnel, higher recruitment costs, and compensation payments. In addition, at a time when personal health is increasingly valued, discovering a sudden illness at the earliest possible moment can win the most precious time for the individual.
Existing fatigue monitoring measures on the market all rely on portable wearable devices for human health monitoring. This approach requires every person to wear such a device, so it is unsuitable for large-scale scenarios and lacks generality. Providing an employee-oriented fatigue monitoring method therefore helps an enterprise understand the physical and mental state of its staff with less time and cost, reduces employment risk, adds a health-oriented dimension to human-resource management, and supports targeted data analysis and adjustment. Helping enterprises build a distinctive health culture, keep employees in a healthy physical and mental state, and improve production efficiency, the employer brand, and employee well-being has thus become a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
The application provides a method and a device for monitoring fatigue of enterprise employees based on machine learning, which improve employee fatigue monitoring and help raise the response efficiency in emergency situations.
In a first aspect, an embodiment of the present application provides a method for monitoring fatigue of employees of an enterprise based on machine learning, which is characterized by including:
acquiring employee office image information through an image acquisition and operation device, extracting frames from the office image data at a preset frame interval, and applying image enhancement to the image information, including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise, to obtain a fatigue detection data frame set;
inputting the fatigue detection data frame set into the YOLOv5 network model for reasoning to obtain the outline information of at least one person, marking an outline identification rectangular frame, and determining a plurality of human body key point information based on the outline information;
determining the association states and the duration of at least three target key points based on the plurality of human body key point information;
and when the correlation states of the at least three target key points meet a first condition and a second condition and exceed a preset duration, marking the fatigue detection data frame to be processed as a data frame with fatigue working behaviors.
Optionally, the image information is subjected to image enhancement processing of at least rotating, stretching, shearing, translating, flipping, increasing or decreasing brightness, contrast, sharpness, and noise, and specifically includes:
and newly establishing a frame buffer area to receive the special effect data, and establishing an image sharpener to perform image enhancement processing on the special effect data to obtain special effect image data.
Optionally, the frame extraction processing and image enhancement based on the preset frame number of the office image data interval to obtain the fatigue detection data frame set specifically includes:
adjusting the image in the outline recognition rectangular frame to the standard size [H_v, W_v], and then carrying out normalized extraction;
determining a candidate based on an embedded sparsity feature selection strategy, and extracting feature parameters corresponding to the image in the outline recognition rectangular frame and the candidate, wherein the width and the height of the candidate are respectively 1/2 of the standard size;
and inputting the adjusted rectangle into a residual error neural network to obtain and mark the contour information of at least one person.
Optionally, after the candidates are determined based on the embedded sparsity feature selection strategy and the feature parameters corresponding to the image in the outline recognition rectangular frame and the candidates are extracted, where the candidate width and height are each 1/2 of the standard size, the method further includes:
selecting several candidates of size [H_p, W_p] = [H_v/2, W_v/2] based on the embedded sparsity feature selection strategy, and extracting the corresponding features v_p from the features uniformly extracted from the employee office image information.
Optionally, the determining the association states and the durations of the at least three target key points based on the plurality of pieces of human body key point information specifically includes:
selecting a plurality of designated key points, wherein the designated key points at least comprise shoulder key points, elbow key points and wrist key points, or waist key points, knee joint key points and ankle joint key points, and determining that the state of the moving target in the image is abnormal when the angle between the position of the designated key points in the image and a preset position or an associated key point exceeds a preset confidence level.
Optionally, the first condition for determining whether the angle between the position of the specified key point in the image and the preset position or the angle between the specified key point and the associated key point exceeds the preset reliability is as follows:
if ((y1_location < 100) and (y2_location < 100))    (1)
optionally, the second condition for determining whether the angle between the position of the specified key point in the image and the preset position or the angle between the specified key point and the associated key point exceeds the preset reliability is as follows:
l_angle = calculate_angle(l_hip, l_knee, l_heel)    (2)
and if the first condition and the second condition are both met, judging that the employee is in a lying-down state, starting a timer, and calculating the duration for which the employee has been lying down.
In a second aspect, the present application provides an image acquisition and operation device, including an image acquisition module, a first processing module, a second processing module, a coding module, and a marking module, where the image acquisition and operation device is connected to a display of an office hall; wherein,
the image acquisition module is used for collecting an image data stream of the user and outputting the collected image data stream to the first processing module and the second processing module respectively;
the first processing module is used for performing image enhancement on the received image data stream and transmitting the processed image data stream to the encoding module;
the second processing module is used for processing the received image data stream so as to identify a moving target in the image, determining position information of the moving target, outputting the position information of the moving target to the marking module, and marking the target position and transmitting the marked target position to the display for display.
And the encoding module is used for encoding the image data received from the first processing module and transmitting the image data to a display to present preview data.
In a third aspect, the present application provides a device for monitoring fatigue of employees of an enterprise based on machine learning, including:
the training image processing module is used for acquiring employee office image information through an image acquisition and operation device, extracting frames from the office image data at a preset frame interval, and applying image enhancement to the image information, including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise, to obtain a fatigue detection data frame set;
the contour marking module is used for inputting the fatigue detection data frame set into the YOLOv5 network model for reasoning to obtain contour information of at least one person, marking a contour identification rectangular frame and determining a plurality of human body key point information based on the contour information;
the state monitoring module is used for determining the correlation states and the duration of at least three target key points based on the plurality of human body key point information;
and the fatigue work reminding module is used for marking the fatigue detection data frame to be processed as a data frame with fatigue work behavior when the correlation states of the at least three target key points meet a first condition and a second condition and exceed a preset duration.
Optionally, the training image processing module includes a frame buffer processing module, configured to create a frame buffer to receive special effect data, and to create an image sharpener to perform image enhancement processing on the special effect data to obtain special effect image data.
In this embodiment, the health state of the staff is monitored by deploying the machine-learning-based enterprise employee fatigue monitoring device in the employee office area. Employee office image information is acquired, frames are extracted from the office image data at a preset frame interval, and image enhancement including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise is applied to the image information to obtain a fatigue detection data frame set. The fatigue detection data frame set is input into the YOLOv5 network model for inference to obtain the contour information of at least one person, a contour identification rectangular frame is marked, and a plurality of human body key points are determined based on the contour information. The association states and the durations of at least three target key points are determined based on the plurality of human body key points, and when the association states of the at least three target key points satisfy a first condition and a second condition for longer than a preset duration, the fatigue detection data frame to be processed is marked as a fatigue working behavior data frame. This strengthens the monitoring and early-warning measures for detecting whether an employee may have suffered sudden death, shortens the time needed to discover an employee suspected of such an event, greatly increases the employee's chance of survival, and reduces losses for both the enterprise and the employee.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a schematic diagram of a process for monitoring employee fatigue of an enterprise based on machine learning according to an embodiment of the present application;
FIG. 2 is a schematic diagram of key points of a human body according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an image acquisition and operation device according to an embodiment of the present disclosure;
fig. 4 is a schematic view of an enterprise employee fatigue monitoring device based on machine learning according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances, in other words, the described embodiments may be practiced other than as illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, may also include other things, such as processes, methods, systems, articles, or apparatus that comprise a list of steps or elements is not necessarily limited to only those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such processes, methods, articles, or apparatus.
It should be noted that the descriptions in this application referring to "first", "second", etc. are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In addition, technical solutions between the embodiments may be combined with each other, but must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory to each other or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope claimed in the present application.
Please refer to fig. 1, which is a method for monitoring fatigue of employees of an enterprise based on machine learning according to an embodiment of the present application, and the method includes:
S1, acquiring employee office image information through an image acquisition and operation device, extracting frames from the office image data at a preset frame interval, and applying image enhancement to the image information, including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise, to obtain a fatigue detection data frame set.
In some embodiments, the device that acquires the employee images is an image acquisition computing device running a Linux system, and it may specifically include an image acquisition module, a first processing module, a second processing module, an encoding module, and a marking module. In addition, the image acquisition and operation device is connected to a display in the office hall.
After the operation is started, the image acquisition module starts to acquire image data streams and respectively outputs the acquired image data streams to the first processing module and the second processing module. The first processing module performs image enhancement on the received image data stream, transmits the image data stream to the encoding module, and displays preview data on a display after the image data stream is encoded by the encoding module; the second processing module processes the received image data stream to identify a moving target in the image, determines position information of the moving target, outputs the position information of the moving target to the marking module, and marks the target position and transmits the target position to the display for display.
In some embodiments, a machine learning related neural network model is stored in the second processing module to perform model training and image depth analysis on the image transmitted by the image acquisition module.
The image acquisition device outputs raw image data whose frame rate may be high, whereas image analysis only needs a certain number of extracted frames in order to reduce computation time. In some embodiments, the video is captured at a refresh rate of 30 Hz or 60 Hz; although a high refresh rate makes playback smooth, adjacent frames are nearly identical for analysis purposes and waste computing resources. In some examples, extracting 16 frames or fewer is sufficient for detecting employee fatigue.
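As a minimal sketch of this frame-extraction step (the sampling interval, the 16-frame cap, and the use of OpenCV are assumptions; the patent only states that frames are extracted at a preset interval), the processing could look like the following:

import cv2

def extract_frames(video_path: str, interval: int = 30, max_frames: int = 16):
    """Keep one frame per `interval` frames, up to `max_frames` frames in total."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while cap.isOpened() and len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index % interval == 0:  # sample at the preset frame interval
            frames.append(frame)
        index += 1
    cap.release()
    return frames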
In addition, in order to increase the generalization of the model in the second processing module, the first processing module applies necessary data enhancement to the image data, including random geometric transformations such as rotation, stretching, shearing, translation, and flipping, and random adjustments of brightness, contrast, sharpness, and noise. The purpose is to generate as many noise-rich training images as possible from a limited data set and to increase the robustness of the network by simulating different parameter settings of the acquisition device.
Illustratively, a Frame Buffer Object (FBO) is newly created to receive special effect data, and an image sharpener (Shock Filter) is created to perform image enhancement processing on the special effect data, so as to obtain special effect image data.
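The FBO and Shock-filter pipeline described above is GPU-oriented. Purely as an illustrative CPU-side stand-in (an assumption, not the patent's implementation), the same family of random augmentations could be expressed with the albumentations library; the probabilities and parameter ranges below are illustrative defaults, not values taken from the patent:

import albumentations as A

# Random geometric and photometric augmentations mirroring the operations listed above.
augment = A.Compose([
    A.Rotate(limit=15, p=0.5),                      # rotation
    A.Affine(scale=(0.9, 1.1), translate_percent=0.05,
             shear=(-10, 10), p=0.5),               # stretching / translation / shearing
    A.HorizontalFlip(p=0.5),                        # flipping
    A.RandomBrightnessContrast(p=0.5),              # brightness / contrast
    A.Sharpen(p=0.3),                               # sharpness
    A.GaussNoise(p=0.3),                            # noise
])

# `frames` is assumed to come from the frame-extraction sketch above.
augmented = [augment(image=f)["image"] for f in frames]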
In some embodiments, the second processing module includes a pre-trained neural network model, the image data with uniform size is used as the input of the neural network model, the neural network model is operated by the DSP to process the input image data, and the position information of the moving object in the image is output. It should be appreciated that training the initial model using sample sets corresponding to different moving objects may result in a neural network model for identifying different moving objects.
It is noted that, in some embodiments, the second processing module may include one or more processing units, which respectively correspond to one or more neural network models. For example, the second processing module may comprise a contour recognition unit, which first recognizes the user's limb contour from the image data.
S2, inputting the fatigue detection data frame set into the YOLOv5 network model for reasoning to obtain and mark the outline information of at least one person, and determining a plurality of human body key point information based on the outline information.
In some embodiments, a residual neural network such as ResNet18 is employed as the backbone network for obtaining high-resolution feature maps, achieving a good balance between accuracy and speed. Pedestrian detection is performed by a YOLOX detector; to improve the efficiency of subsequent computation, only detections of staff are retained, and the prediction parameters related to other categories are pruned.
A residual network is easy to optimize and can gain accuracy from added depth. Its internal residual blocks use skip connections, which alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
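As an illustrative sketch of the person-detection step in S2 (the model size, the torch.hub loading path, and the person-class filter are assumptions; the patent only names a YOLOv5 network, and some embodiments use YOLOX instead), inference could be run as follows:

import cv2
import torch

# Load a pretrained YOLOv5 model and keep only the "person" class (COCO class 0),
# mirroring the step of retaining staff detections and discarding other categories.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.classes = [0]

# `frames` from the extraction step are BGR (OpenCV); the hub model expects RGB.
rgb_frames = [cv2.cvtColor(f, cv2.COLOR_BGR2RGB) for f in frames]
results = model(rgb_frames)
boxes = results.xyxy  # one tensor per frame: [x1, y1, x2, y2, confidence, class]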
In an implementation scenario where the moving target is the user, in order to express the position of the user in the image accurately, the positions of the key points of the user's limbs, or of the limb bounding boxes, in the image are used as the position information of the user in the image. Key points are a series of points in the human body image that can represent human body features, such as the eyes, ears, nose, neck, shoulders, elbows, wrists, waist, knees, and ankles. A plurality of key points can be defined, and all or some of them need to be located in one recognition pass so as to determine the outer frame region enclosing the limbs. For example, as shown in Fig. 2, there may be 33 key points, including 10 facial key points, 2 shoulder key points, 2 elbow key points, 2 wrist key points, 2 waist (or hip) key points, 2 knee key points, and 2 ankle key points. Clearly, when the user's position or posture changes, the positions of some key points change accordingly, and the relative position of the human body in the image captured by the camera changes as well.
In some embodiments, the keypoints for representing the user are referred to as designated keypoints, such as keypoints determined based on user hand or leg features, such as shoulder keypoints, elbow keypoints, wrist keypoints, or waist keypoints (or hip keypoints), knee keypoints, and ankle keypoints.
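The patent does not name a specific key point detector. Purely as an illustrative stand-in (an assumption; MediaPipe Pose happens to output 33 body landmarks, matching the count shown in Fig. 2), pixel coordinates for the designated key points could be obtained as follows:

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(bgr_image):
    """Return 33 (x, y) landmark coordinates in pixels, or None if no person is found."""
    h, w = bgr_image.shape[:2]
    with mp_pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return None
    return [(lm.x * w, lm.y * h) for lm in result.pose_landmarks.landmark]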
In some embodiments, when marking the contour information, the following steps are specifically included:
s201, adjusting the image in the outline recognition rectangular frame to be in a standard size [ H ] v ,W v ]And then, carrying out normalized extraction.
S202, determining a candidate based on an embedded sparsity feature selection strategy, and extracting feature parameters corresponding to the image in the outline recognition rectangular frame and the candidate, wherein the width and the height of the candidate are respectively 1/2 of the standard size.
In the calculation process, based on the embedded sparsity feature selection strategy, several candidates of size [H_p, W_p] = [H_v/2, W_v/2] are selected, and the corresponding features v_p are extracted from the features uniformly extracted from the employee office image information. For each valid i-th tracking target, the feature template combines the features extracted from the detected rectangle over the current n frames, v_i = {v_i,1, v_i,2, ..., v_i,n}. To improve computational efficiency, n is set to 16. To keep the feature size consistent, the detected rectangle is also resized to [H_t, W_t] = [H_p, W_p] before feature extraction.
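A minimal sketch of this resizing and feature-template bookkeeping follows (the concrete standard size [H_v, W_v] and the choice of feature extractor are assumptions; the patent leaves both unspecified):

import cv2
import numpy as np
from collections import deque

H_V, W_V = 256, 128            # assumed standard size [H_v, W_v]
H_P, W_P = H_V // 2, W_V // 2  # candidate size is half the standard size
N_FRAMES = 16                  # length n of the feature template

def normalize_crop(crop):
    """Resize a detected rectangle to the standard size and scale pixel values to [0, 1]."""
    resized = cv2.resize(crop, (W_V, H_V))
    return resized.astype(np.float32) / 255.0

# One rolling feature template per tracked target: v_i = {v_i,1, ..., v_i,n}.
feature_template = deque(maxlen=N_FRAMES)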
And S203, inputting the adjusted rectangle into a residual error neural network to obtain and mark the contour information of at least one person.
In some embodiments, the first input feature map has size H_p1 x W_p1 x C_1, the second feature map has size H_p2 x W_p2 x C_2, and the third feature map has size H_p3 x W_p3 x C_3, so that the output result contains C_4 one-dimensional values. Here, H_p1, W_p1, and C_1 denote the height, width, and number of input channels of the first input feature map; H_p2, W_p2, and C_2 denote the height, width, and number of output channels of the second feature map; H_p3, W_p3, and C_3 denote the height, width, and number of output channels of the third feature map; and C_4 denotes the number of one-dimensional values in the output result.
And S3, determining the association states and the duration of at least three target key points based on the plurality of human body key point information.
In some embodiments, the location of the moving object in the image may be represented by specifying the location of the keypoints in the image. In these embodiments, when the position of the specified key point in the image does not conform to the preset position or the angle, distance, orientation, etc. of the associated key point and/or exceeds the preset confidence level, it is determined that the state of the moving object in the image is abnormal.
In some embodiments, the user position may be represented by key point coordinates, such as a waist key point (x1, y1), a knee key point (x2, y2), and an ankle key point (x3, y3) on one side of the body. Illustratively, the vertical coordinate of the waist key point is examined: when the corrected image has a resolution of (1920, 1080), the x-coordinate of the image center is 960.
When the vertical coordinate y1 of the waist key point is found to be significantly lower than 960, the user's waist is positioned low in the image, close to the ground. To avoid mistaking a squatting employee for one who is lying down, the vertical coordinate of the knee key point must also be checked, i.e., whether y2 is likewise significantly below 480. The specific judgment condition is:
if ((y1_location < 100) and (y2_location < 100))    (1)
At this point, the angle formed at the knee joint by the waist and ankle key points also needs to be monitored. The angle can be calculated with a built-in function:
l_angle = calculate_angle(l_hip, l_knee, l_heel)    (2)
Whether the employee is in a lying-down state is then judged from the angle value.
And S4, when the correlation states of the at least three target key points meet a first condition and a second condition and exceed a preset duration, marking the fatigue detection data frame to be processed as a fatigue working behavior data frame.
If the first condition and the second condition are both met, it is judged that the employee is in a lying-down state, a timer is started, and the duration for which the employee has been lying down is calculated.
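A minimal sketch of this duration check (the alert threshold is an assumption; the patent only refers to a preset duration):

import time

class LyingDownMonitor:
    """Track how long the lying-down state persists and flag it once it exceeds a preset duration."""

    def __init__(self, preset_duration_s: float = 60.0):
        self.preset_duration_s = preset_duration_s
        self.started_at = None

    def update(self, is_lying_down: bool) -> bool:
        """Return True once the lying-down state has lasted longer than the preset duration."""
        if not is_lying_down:
            self.started_at = None  # reset the timer when the posture changes
            return False
        if self.started_at is None:
            self.started_at = time.monotonic()
        return time.monotonic() - self.started_at > self.preset_duration_s

Frames flagged by such a monitor correspond to the fatigue working behavior data frames marked in step S4.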
In the application, image acquisition and operation equipment is installed in the employees' office areas to monitor their health state. Employee office image information is acquired, frames are extracted from the office image data at a preset frame interval, and image enhancement including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise is applied to the image information to obtain a fatigue detection data frame set. The fatigue detection data frame set is input into the YOLOv5 network model for inference to obtain the contour information of at least one person, a contour identification rectangular frame is marked, and a plurality of human body key points are determined based on the contour information. The association states and the durations of at least three target key points are determined based on the plurality of human body key points, and when the association states of the at least three target key points satisfy a first condition and a second condition for longer than a preset duration, the fatigue detection data frame to be processed is marked as a fatigue working behavior data frame. This strengthens the monitoring and early-warning measures for detecting whether an employee may have suffered sudden death, shortens the time needed to discover an employee suspected of such an event, greatly increases the employee's chance of survival, and reduces losses for both the enterprise and the employee.
Please refer to fig. 3, which is an image capturing operation device according to an embodiment of the present application, including an image capturing module 31, a first processing module 32, a second processing module 33, and a coding module 34, wherein the image capturing operation device is connected to a display of an office hall,
the image acquisition module 31 collects an image data stream of the user and outputs the collected image data stream to the first processing module and the second processing module respectively;
the first processing module 32 is configured to perform image enhancement on the received image data stream, and transmit the processed image data stream to the encoding module;
the second processing module 33 is configured to process the received image data stream to identify a moving target in the image, determine position information of the moving target, output the position information of the moving target to the marking module, and mark the target position and transmit the marked target position to the display for display.
The encoding module 34 is configured to encode the image data received from the first processing module and transmit the encoded image data to a display to present preview data.
in some embodiments, the second processing module stores a machine learning related neural network model for performing model training and image depth analysis on the image transmitted from the image acquisition module.
In some embodiments, the first processing module performs data enhancement on the image data, including applying random geometric transformations such as rotation, stretching, shearing, translation, and flipping, and random adjustments of brightness, contrast, sharpness, and noise. The purpose is to generate as many noise-rich training images as possible from a limited data set and to increase the robustness of the network by simulating different parameter settings of the acquisition device.
In some embodiments, the second processing module includes a pre-trained neural network model, the image data with uniform size is used as the input of the neural network model, the neural network model is operated by the DSP to process the input image data, and the position information of the moving object in the image is output. It should be appreciated that training the initial model using sample sets corresponding to different moving objects may result in a neural network model for identifying different moving objects.
It is noted that, in some embodiments, the second processing module may include one or more processing units, which respectively correspond to one or more neural network models. For example, the second processing module may comprise a contour recognition unit, which first recognizes the user's limb contour from the image data.
Please refer to fig. 4, which is a device for monitoring fatigue of employees of an enterprise based on machine learning according to an embodiment of the present application, including:
the training image processing module 41 is used for acquiring employee office image information through an image acquisition and operation device, extracting frames from the office image data at a preset frame interval, and applying image enhancement to the image information, including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise, to obtain a fatigue detection data frame set;
the contour marking module 42 is used for inputting the fatigue detection data frame set into the YOLOv5 network model for reasoning to obtain contour information of at least one person, marking a contour identification rectangular frame, and determining a plurality of human body key point information based on the contour information;
a state monitoring module 43, which determines the correlation state and the duration of at least three target key points based on the plurality of human body key point information;
and the fatigue work reminding module 44 is configured to mark the fatigue detection data frame to be processed as a data frame with fatigue work behavior when the correlation states of the at least three target key points meet the first condition and the second condition and exceed a preset duration.
In some embodiments, the training image processing module includes a Frame Buffer processing module, configured to create a Frame Buffer (FBO) to receive the special effect data, and create an image sharpener (Shock Filter) to perform image enhancement processing on the Frame Buffer to obtain image data with special effect.
In this embodiment, the health state of the staff is monitored by deploying the machine-learning-based enterprise employee fatigue monitoring device in the employee office area. Employee office image information is acquired, frames are extracted from the office image data at a preset frame interval, and image enhancement including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise is applied to the image information to obtain a fatigue detection data frame set. The fatigue detection data frame set is input into the YOLOv5 network model for inference to obtain the contour information of at least one person, a contour identification rectangular frame is marked, and a plurality of human body key points are determined based on the contour information. The association states and the durations of at least three target key points are determined based on the plurality of human body key points, and when the association states of the at least three target key points satisfy a first condition and a second condition for longer than a preset duration, the fatigue detection data frame to be processed is marked as a fatigue working behavior data frame. This strengthens the monitoring and early-warning measures for detecting whether an employee may have suffered sudden death, shortens the time needed to discover an employee suspected of such an event, greatly increases the employee's chance of survival, and reduces losses for both the enterprise and the employee.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, to the extent that such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, it is intended that the present application also encompass such modifications and variations.
The above-mentioned embodiments are only examples of the present invention and do not limit the scope of its claims; equivalent changes made within the scope of the claims of the present invention still fall within the scope of the present invention.

Claims (10)

1. A method for monitoring fatigue of enterprise employees based on machine learning is characterized by comprising the following steps:
acquiring employee office image information through an image acquisition and operation device, extracting frames from the office image data at a preset frame interval, and applying image enhancement to the image information, including at least rotation, stretching, shearing, translation, flipping, and increasing or decreasing brightness, contrast, sharpness, and noise, to obtain a fatigue detection data frame set;
inputting the fatigue detection data frame set into the YOLOv5 network model for reasoning to obtain the outline information of at least one person, marking an outline identification rectangular frame, and determining a plurality of human body key point information based on the outline information;
determining the association states and the duration of at least three target key points based on the plurality of human body key point information;
and when the correlation states of the at least three target key points meet a first condition and a second condition and exceed a preset duration, marking the fatigue detection data frame to be processed as a fatigue working behavior data frame.
2. The method for monitoring fatigue of enterprise employees as claimed in claim 1, wherein said image enhancement processing of at least rotating, stretching, cutting, translating, flipping and increasing/decreasing brightness, contrast, sharpness, noise of said image information comprises:
and newly establishing a frame buffer area to receive the special effect data, and establishing an image sharpener to perform image enhancement processing on the special effect data to obtain special effect image data.
3. The method for monitoring fatigue of employees of an enterprise as claimed in claim 1, wherein said performing frame extraction processing and image enhancement based on said office image data interval preset frame number to obtain a fatigue detection data frame set specifically comprises:
adjusting the image in the outline recognition rectangular frame to the standard size [H_v, W_v], and then carrying out normalized extraction;
determining a candidate based on an embedded sparsity feature selection strategy, and extracting feature parameters corresponding to the image in the outline recognition rectangular frame and the candidate, wherein the width and the height of the candidate are respectively 1/2 of the standard size;
and inputting the adjusted rectangle into a residual error neural network to obtain and mark the contour information of at least one person.
4. The method of monitoring fatigue of employees of an enterprise as recited in claim 3, wherein after said determining candidates based on an embedded sparsity-based feature selection policy and extracting feature parameters corresponding to images in said outline recognition rectangular box and said candidate directions, wherein said candidate width and height are 1/2 of said standard size, respectively, said method further comprises:
selecting several candidates of size [H_p, W_p] = [H_v/2, W_v/2] based on the embedded sparsity feature selection strategy, and extracting the corresponding features v_p from the features uniformly extracted from the employee office image information.
5. The method for monitoring fatigue of employees of an enterprise as recited in claim 1, wherein said determining the association status and the duration of at least three target key points based on said plurality of human key point information specifically comprises:
selecting a plurality of appointed key points, wherein the appointed key points at least comprise shoulder key points, elbow key points and wrist key points, or waist key points, knee joint key points and ankle joint key points, and determining that the state of the moving target in the image is abnormal when the angle between the position of the appointed key points in the image and a preset position or an angle between the appointed key points and an associated key point exceeds a preset confidence level.
6. A method as claimed in claim 5, wherein the first condition for determining whether the angle between the position of the specified key point in the image and the preset position or the associated key point exceeds a preset confidence level is:
if ((y1_location < 100) and (y2_location < 100))    (1)
7. the method for monitoring fatigue of enterprise employees as claimed in claim 6, wherein the second condition for judging whether the angle between the position of the specified key point in the image and the preset position or the associated key point exceeds the preset confidence level is:
l_angle = calculate_angle(l_hip, l_knee, l_heel)    (2)
and if the first condition and the second condition are both met, judging that the employee is in a lying-down state, starting a timer, and calculating the duration for which the employee has been lying down.
8. An image acquisition operation device comprises an image acquisition module, a first processing module, a second processing module, a coding module and a marking module, wherein the image acquisition operation device is connected with a display of an office hall; wherein,
the image acquisition module is used for acquiring an image data stream by a user and respectively outputting the acquired image data stream to the first processing module and the second processing module;
the first processing module is used for performing image enhancement on the received image data stream and transmitting the processed image data stream to the encoding module;
the second processing module is used for processing the received image data stream so as to identify a moving target in the image, determining position information of the moving target, outputting the position information of the moving target to the marking module, and marking the target position and transmitting the marked target position to the display for display.
And the encoding module is used for encoding the image data received from the first processing module and transmitting the image data to a display to present preview data.
9. A machine learning based enterprise employee fatigue monitoring device, comprising:
the training image processing module is used for acquiring staff office image information through an image acquisition and operation device, performing frame extraction processing based on the office image data interval preset frame number, and performing image enhancement processing of at least rotating, stretching, shearing, translating, overturning, increasing and decreasing brightness, contrast, sharpness and noise on the image information to obtain a fatigue detection data frame set;
the contour marking module is used for inputting the fatigue detection data frame set into the YOLOv5 network model for reasoning to obtain contour information of at least one person, marking a contour identification rectangular frame and determining a plurality of human body key point information based on the contour information;
the state monitoring module is used for determining the correlation states and the duration of at least three target key points based on the plurality of human body key point information;
and the fatigue work reminding module is used for marking the fatigue detection data frame to be processed as a data frame with fatigue work behavior when the correlation states of the at least three target key points meet a first condition and a second condition and exceed a preset duration.
10. The device for monitoring fatigue of employees of enterprise as claimed in claim 9, wherein said training image processing module comprises a frame buffer processing module for creating a frame buffer to receive special effect data and creating an image sharpener to perform image enhancement processing on it to obtain special effect image data.
CN202211465943.2A 2022-11-22 2022-11-22 Enterprise employee fatigue monitoring method and device based on machine learning Pending CN115862059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211465943.2A CN115862059A (en) 2022-11-22 2022-11-22 Enterprise employee fatigue monitoring method and device based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211465943.2A CN115862059A (en) 2022-11-22 2022-11-22 Enterprise employee fatigue monitoring method and device based on machine learning

Publications (1)

Publication Number Publication Date
CN115862059A true CN115862059A (en) 2023-03-28

Family

ID=85664837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211465943.2A Pending CN115862059A (en) 2022-11-22 2022-11-22 Enterprise employee fatigue monitoring method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN115862059A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118417764A (en) * 2024-04-18 2024-08-02 苏州诺克智能装备股份有限公司 Welding production line monitoring alarm system

Similar Documents

Publication Publication Date Title
Islam et al. Yoga posture recognition by detecting human joint points in real time using microsoft kinect
CN111753747B (en) Violent motion detection method based on monocular camera and three-dimensional attitude estimation
JP7057589B2 (en) Medical information processing system, gait state quantification method and program
CN113903455A (en) System and method for identifying persons and/or identifying and quantifying pain, fatigue, mood and intent while preserving privacy
Nagalakshmi Vallabhaneni The analysis of the impact of yoga on healthcare and conventional strategies for human pose recognition
CN111345823B (en) Remote exercise rehabilitation method, device and computer readable storage medium
CN113111767A (en) Fall detection method based on deep learning 3D posture assessment
US20230040650A1 (en) Real-time, fine-resolution human intra-gait pattern recognition based on deep learning models
CN112185514A (en) Rehabilitation training effect evaluation system based on action recognition
CN112036267A (en) Target detection method, device, equipment and computer readable storage medium
Zhang et al. Visual surveillance for human fall detection in healthcare IoT
CN111597872A (en) Health supervision law enforcement illegal medical practice face recognition method based on deep learning
CN113974612A (en) Automatic assessment method and system for upper limb movement function of stroke patient
CN115862059A (en) Enterprise employee fatigue monitoring method and device based on machine learning
Kapoor et al. Light-weight seated posture guidance system with machine learning and computer vision
Dai Vision-based 3d human motion analysis for fall detection and bed-exiting
CN108447562A (en) A kind of user movement capability assessment method and system
Hai et al. PCA-SVM algorithm for classification of skeletal data-based eigen postures
CN115268531B (en) Water flow temperature regulation control method, device and equipment for intelligent bathtub and storage medium
CN116831565A (en) Human gait monitoring and evaluating method
CN116543455A (en) Method, equipment and medium for establishing parkinsonism gait damage assessment model and using same
Wang et al. A novel modeling approach to fall detection and experimental validation using motion capture system
CN113271848B (en) Body health state image analysis device, method and system
Huihui Machine Vision Technology Based on Wireless Sensor Network Data Analysis for Monitoring Injury Prevention Data in Yoga Sports
CN107480604A (en) Gait recognition method based on the fusion of more contour features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination