CN112597888A - On-line education scene student attention recognition method aiming at CPU operation optimization - Google Patents
On-line education scene student attention recognition method aiming at CPU operation optimization
- Publication number
- CN112597888A CN112597888A CN202011530619.5A CN202011530619A CN112597888A CN 112597888 A CN112597888 A CN 112597888A CN 202011530619 A CN202011530619 A CN 202011530619A CN 112597888 A CN112597888 A CN 112597888A
- Authority
- CN
- China
- Prior art keywords
- face
- engage
- attention
- image
- student
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 26
- 238000005457 optimization Methods 0.000 title claims abstract description 10
- 238000001514 detection method Methods 0.000 claims abstract description 25
- 238000012549 training Methods 0.000 claims abstract description 13
- 230000009466 transformation Effects 0.000 claims abstract description 8
- 238000013135 deep learning Methods 0.000 claims description 4
- 238000004364 calculation method Methods 0.000 claims description 3
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000013519 translation Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 abstract description 6
- 238000011156 evaluation Methods 0.000 abstract description 2
- 238000013527 convolutional neural network Methods 0.000 description 26
- 238000004422 calculation algorithm Methods 0.000 description 6
- 238000012360 testing method Methods 0.000 description 5
- 238000013459 approach Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000537 electroencephalography Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Tourism & Hospitality (AREA)
- Multimedia (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- General Business, Economics & Management (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Economics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an on-line education scene student attention recognition method aiming at CPU operation optimization. First, an MTCNN face recognition model is used to carry out face detection and face key point detection on each frame of image in a training data set to obtain face images and face key points; face alignment is carried out by affine transformation based on the face key points, and each face is given an attention score. An Engage-CNN model is constructed on the basis of the Engage-Detection network, fully supervised training is carried out on the Engage-CNN model with the aligned face images and attention scores, and the running speed of the Engage-CNN model on an ordinary CPU is optimized to obtain the optimized Engage-CNN model. The optimized Engage-CNN model is then used to evaluate the attention of students' face images during class. The method has high processing speed and high accuracy, and can carry out face detection on low-resolution images.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a student attention identification method.
Background
With the development of the times, society attaches ever greater importance to education; the teaching process is being studied more and more deeply, and the complexity of student behavior in classroom teaching is increasingly recognized. Online education differs from classroom teaching in that the teacher cannot directly see all students and therefore cannot know their in-class engagement states. Obtaining these states by technical means provides powerful help for teachers to improve students' learning efficiency. Student attention recognition is an important research topic in intelligent classrooms. Currently, there are two main technical routes for the attention recognition task: methods based on wearable physical sensors and methods based on camera video information.
Among the methods based on wearable physical sensors is the electroencephalograph-headset method proposed by M. Hassib et al. in "M. Hassib, S. Schneegass, P. Eiglsperger, N. Henze, A. Schmidt, and F. Alt. EngageMeter: A system for implicit audience engagement sensing using electroencephalography. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 5114-5119, 2017", which estimates student attention states by collecting the alpha, beta, and theta waves in students' electroencephalogram signals.
The methods based on camera video information include a model proposed by X. Niu et al. that combines the Gaze-AU-Pose (GAP) feature with a GRU network for multi-frame video attention recognition, in "X. Niu, H. Han, J. Zeng, X. Sun, S. Shan, Y. Huang, S. Yang, X. Chen. Automatic engagement prediction with GAP feature. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 599-603, 2018".
Both approaches have limitations. The method based on wearable physical sensors, while accurate, requires an additional sensor for each student, which is costly. The method based on camera video information must process and recognize continuous video frames, incurring a high performance overhead, and is therefore difficult to apply in practice.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an on-line education scene student attention recognition method aiming at CPU operation optimization. First, an MTCNN face recognition model is used to carry out face detection and face key point detection on each frame of image in a training data set to obtain face images and face key points; face alignment is carried out by affine transformation based on the face key points, and each face is given an attention score. An Engage-CNN model is constructed on the basis of the Engage-Detection network, fully supervised training is carried out on the Engage-CNN model with the aligned face images and attention scores, and the running speed of the Engage-CNN model on an ordinary CPU is optimized to obtain the optimized Engage-CNN model. The optimized Engage-CNN model is then used to evaluate the attention of students' face images during class. The method has high processing speed and high accuracy, and can carry out face detection on low-resolution images.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
step 1: establishing an Engage-CNN model;
step 1-1: carrying out face detection on each image in the face image data set by using an MTCNN face recognition model to obtain a face image I, and carrying out key point detection to obtain face key points L; scoring the attention of the face in the face image I to obtain an attention score S, wherein S ∈ [0, 1.0], 0 represents that attention is completely unfocused, and 1.0 represents that attention is fully focused;
step 1-2: defining an affine transformation matrix M = [[a1, b1, c1], [a2, b2, c2]], wherein a1 and a2 are image rotation parameters, b1 and b2 are image scaling parameters, and c1 and c2 are image translation parameters;
solving for M by the least square method so that M maps the face key points L to the predefined standard face coordinates Q;
using the affine transformation matrix M to align the face image I to obtain the aligned face image I_A;
Step 1-3: replacing the Dropout layer in the Engage-Detection network with L2 regularization and replacing the softmax classifier of the output layer with a sigmoid output, so that the model outputs an attention score, thereby obtaining the Engage-CNN model;
step 1-4: taking the aligned face image I_A as the input of the Engage-CNN model and the attention score S as the label, carrying out fully supervised training on the Engage-CNN model, and obtaining the final Engage-CNN model after training is finished;
step 1-5: optimizing the Engage-CNN model by using a TVM deep learning compiler to obtain a finally optimized Engage-CNN model;
step 2: recognizing attention of students;
step 2-1: acquiring a student computer camera image in the course of class, and performing face detection by using an MTCNN face recognition model to obtain a student face image and student face key points;
step 2-2: obtaining an aligned student face image by using the method in the step 1-2;
step 2-3: and (3) inputting the aligned student face image obtained in the step (2-2) into the finally optimized Engage-CNN model to obtain the attention score corresponding to the student face image.
Preferably, the face key points L include three key points: left eye center, right eye center, and mouth center.
The invention has the following beneficial effects:
1. The invention needs no additional wearable sensing equipment; data are acquired only with the student's computer camera, which effectively reduces cost.
2. The student attention recognition method aiming at CPU operation optimization provided by the invention needs only a single student face image to complete attention recognition, which effectively reduces the performance overhead of the algorithm. The attention recognition computation is completed directly on the student's computer at a low CPU occupancy rate, avoiding both the large network transmission overhead of sending student face images to the teacher's computer and the performance overhead of attention recognition computation there. In actual tests, the single-frame processing time is 0.052 s and the CPU occupancy rate is 5.1%, which guarantees real-time performance, avoids stuttering caused by occupying too many computing resources on students' computers, and effectively improves the attention detection effect.
3. The invention adopts the MTCNN network, which has high processing speed and high accuracy and can detect faces in low-resolution images.
4. The invention models the relationship between the face image and the attention score in an end-to-end manner, which gives the Engage-CNN model more room to adapt itself to the data and increases the overall fit between the Engage-CNN model and the data; the error of the attention state on the test set is 0.118.
Drawings
FIG. 1 is a flowchart of the method of the present invention, wherein the left diagram is a flowchart of the training of the Engage-CNN model, and the right diagram is a flowchart of student attention recognition.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, a method for identifying student attention in an online education scene optimized for CPU operations includes the following steps:
step 1: establishing an Engage-CNN model;
step 1-1: using an MTCNN face recognition model to perform face detection on each image in the face image data set to obtain a face image I, and performing key point detection to obtain face key points L, wherein L comprises three key points: the left eye center, the right eye center, and the mouth center; scoring the attention of the face in the face image I to obtain an attention score S, wherein S ∈ [0, 1.0], 0 represents that attention is completely unfocused, and 1.0 represents that attention is fully focused; the MTCNN face recognition model is a multi-task convolutional neural network that realizes fast face detection and face key point detection;
step 1-2: defining an affine transformation matrix M = [[a1, b1, c1], [a2, b2, c2]], wherein a1 and a2 are image rotation parameters, b1 and b2 are image scaling parameters, and c1 and c2 are image translation parameters;
solving for M by the least square method so that M maps the face key points L to the predefined standard face coordinates Q;
using the affine transformation matrix M to align the face image I to obtain the aligned face image I_A;
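The least-squares fit of the 2x3 affine matrix M described in steps 1-2 can be sketched with numpy as follows. This is a minimal illustration, not the patent's implementation; the landmark coordinates and the function name are hypothetical.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine M mapping detected key points (L) to standard face coords (Q)."""
    src = np.asarray(src_pts, dtype=float)   # (N, 2) detected key points
    dst = np.asarray(dst_pts, dtype=float)   # (N, 2) predefined standard coordinates
    # Design matrix [x, y, 1]; solve A @ P ~= dst for P (3x2), then M = P.T (2x3)
    A = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return P.T

# Toy check: a pure translation by (10, 5) should be recovered exactly
src = [(30.0, 40.0), (70.0, 40.0), (50.0, 80.0)]   # e.g. left eye, right eye, mouth centers
dst = [(40.0, 45.0), (80.0, 45.0), (60.0, 85.0)]
M = estimate_affine(src, dst)
```

With three key points (left eye, right eye, mouth) the system has six equations and six unknowns, so the least-squares solution is exact when the points are not collinear.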
Step 1-3: replacing the Dropout layer in the Engage-Detection network with L2 regularization and replacing the softmax classifier of the output layer with a sigmoid output, so that the model outputs an attention score, thereby obtaining the Engage-CNN model;
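The two modifications of step 1-3 — a sigmoid output in place of the softmax classifier, and an L2 penalty added to the loss in place of Dropout — can be illustrated with a small numpy sketch. The logit value, weight shapes, and penalty coefficient here are hypothetical, not from the patent.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid output head: squashes a single logit into an attention score in [0, 1]
    return 1.0 / (1.0 + np.exp(-z))

def l2_penalty(weights, lam=1e-4):
    # L2 regularization term added to the training loss in place of Dropout
    return lam * sum(float(np.sum(w * w)) for w in weights)

logit = 0.8                                   # hypothetical last fully-connected output
score = sigmoid(logit)                        # attention score S in [0, 1]
penalty = l2_penalty([np.ones((4, 4)), np.ones(4)], lam=0.01)
```

A softmax over two classes and a sigmoid over one logit are mathematically equivalent, so the change mainly simplifies the head to a single regression-style score.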
step 1-4: taking the aligned face image I_A as the input of the Engage-CNN model and the attention score S as the label, carrying out fully supervised training on the Engage-CNN model, and obtaining the final Engage-CNN model after training is finished;
step 1-5: optimizing the final Engage-CNN model of step 1-4 by using the TVM deep learning compiler, applying optimizations such as operator fusion and parallel-branch convolution optimization, and then performing assembly-level optimization for the AVX2 parallel instruction set supported by most current CPUs, so as to obtain the finally optimized Engage-CNN model and the dynamic link library file required for its operation;
step 2: recognizing attention of students;
step 2-1: acquiring a student computer camera image in the course of class, and performing face detection by using an MTCNN face recognition model to obtain a student face image and student face key points;
step 2-2: obtaining an aligned student face image by using the method in the step 1-2;
step 2-3: and (3) inputting the aligned student face image obtained in the step (2-2) into the finally optimized Engage-CNN model to obtain the attention score corresponding to the student face image.
The specific embodiment is as follows:
1. conditions for carrying out
This embodiment is carried out on a machine with an Intel Core i7-6700HQ CPU @ 2.60 GHz, 32 GB of memory, an NVIDIA GeForce GTX 1070 GPU, and the Windows 10 operating system, using the PyTorch deep learning framework and the TVM model inference framework.
The data used in this embodiment were collected from the computer cameras of 112 students in an online education environment and comprise 9068 video clips of 10 seconds each, of which 7255 are training videos and 1813 are test videos. The attention score S ∈ [0, 1.0], where 0 represents that attention is completely unfocused and 1.0 represents that attention is fully focused; the attention scores were labeled by 5 annotators.
2. Content of implementation
First, a deep model is trained using the training set data. Then, inference optimization is performed on the deep model using the TVM inference framework, and the error between the predicted attention score and the ground truth, the model running speed, and the performance overhead are measured on the test set.
To demonstrate the effectiveness of the algorithm, the GAP-GRU model, which uses the Gaze-AU-Pose feature with a GRU network, and the Engage-Detection convolutional neural network model were chosen as comparison algorithms. The GAP-GRU model is described in detail in "X. Niu, H. Han, J. Zeng, X. Sun, S. Shan, Y. Huang, S. Yang, X. Chen. Automatic engagement prediction with GAP feature. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 599-603, 2018"; the Engage-Detection model is proposed in "M. Murshed, M. A. Dewan, F. Lin, D. Wen. Engagement Detection in e-Learning Environments using Convolutional Neural Networks. In 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress, pp. 80-86, 2019". The comparison results are shown in Table 1.
TABLE 1
As can be seen from Table 1, the mean absolute error of the attention score of the invention is 0.033 lower than that of the Engage-Detection model, which likewise uses a single frame image, and only 0.032 higher than that of the GAP-GRU model, which uses multiple video frames. Table 2 shows the running time and performance overhead of each algorithm.
TABLE 2
As can be seen from Table 2, the optimized algorithm of the invention is significantly superior to the other algorithms in running speed and performance overhead. These experiments verify the practicability and effectiveness of the invention.
Claims (2)
1. A student attention recognition method for an online education scene aiming at CPU operation optimization is characterized by comprising the following steps:
step 1: establishing an Engage-CNN model;
step 1-1: carrying out face detection on each image in the face image data set by using an MTCNN face recognition model to obtain a face image I, and carrying out key point detection to obtain face key points L; scoring the attention of the face in the face image I to obtain an attention score S, wherein S ∈ [0, 1.0], 0 represents that attention is completely unfocused, and 1.0 represents that attention is fully focused;
step 1-2: defining an affine transformation matrix M = [[a1, b1, c1], [a2, b2, c2]], wherein a1 and a2 are image rotation parameters, b1 and b2 are image scaling parameters, and c1 and c2 are image translation parameters;
solving for M by the least square method so that M maps the face key points L to the predefined standard face coordinates Q;
using the affine transformation matrix M to align the face image I to obtain the aligned face image I_A;
Step 1-3: replacing the Dropout layer in the Engage-Detection network with L2 regularization and replacing the softmax classifier of the output layer with a sigmoid output, so that the model outputs an attention score, thereby obtaining the Engage-CNN model;
step 1-4: taking the aligned face image I_A as the input of the Engage-CNN model and the attention score S as the label, carrying out fully supervised training on the Engage-CNN model, and obtaining the final Engage-CNN model after training is finished;
step 1-5: optimizing the Engage-CNN model by using a TVM deep learning compiler to obtain a finally optimized Engage-CNN model;
step 2: recognizing attention of students;
step 2-1: acquiring a student computer camera image in the course of class, and performing face detection by using an MTCNN face recognition model to obtain a student face image and student face key points;
step 2-2: obtaining an aligned student face image by using the method in the step 1-2;
step 2-3: and (3) inputting the aligned student face image obtained in the step (2-2) into the finally optimized Engage-CNN model to obtain the attention score corresponding to the student face image.
2. The on-line education scene student attention recognition method aiming at CPU operation optimization according to claim 1, wherein the face key points L comprise three key points: the left eye center, the right eye center, and the mouth center.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011530619.5A CN112597888B (en) | 2020-12-22 | 2020-12-22 | Online education scene student attention recognition method aiming at CPU operation optimization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011530619.5A CN112597888B (en) | 2020-12-22 | 2020-12-22 | Online education scene student attention recognition method aiming at CPU operation optimization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112597888A true CN112597888A (en) | 2021-04-02 |
CN112597888B CN112597888B (en) | 2024-03-08 |
Family
ID=75200091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011530619.5A Active CN112597888B (en) | 2020-12-22 | 2020-12-22 | Online education scene student attention recognition method aiming at CPU operation optimization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112597888B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011054200A (en) * | 2010-11-11 | 2011-03-17 | Fuji Electric Systems Co Ltd | Neural network learning method |
CN106503669A (en) * | 2016-11-02 | 2017-03-15 | 重庆中科云丛科技有限公司 | A kind of based on the training of multitask deep learning network, recognition methods and system |
CN107103281A (en) * | 2017-03-10 | 2017-08-29 | 中山大学 | Face identification method based on aggregation Damage degree metric learning |
CN109543606A (en) * | 2018-11-22 | 2019-03-29 | 中山大学 | A kind of face identification method that attention mechanism is added |
CN110532920A (en) * | 2019-08-21 | 2019-12-03 | 长江大学 | Smallest number data set face identification method based on FaceNet method |
CN110956082A (en) * | 2019-10-17 | 2020-04-03 | 江苏科技大学 | Face key point detection method and detection system based on deep learning |
CN111178242A (en) * | 2019-12-27 | 2020-05-19 | 上海掌学教育科技有限公司 | Student facial expression recognition method and system for online education |
CN111368830A (en) * | 2020-03-03 | 2020-07-03 | 西北工业大学 | License plate detection and identification method based on multi-video frame information and nuclear phase light filtering algorithm |
CN111539370A (en) * | 2020-04-30 | 2020-08-14 | 华中科技大学 | Image pedestrian re-identification method and system based on multi-attention joint learning |
CN111563476A (en) * | 2020-05-18 | 2020-08-21 | 哈尔滨理工大学 | Face recognition method based on deep learning |
CN112101074A (en) * | 2019-06-18 | 2020-12-18 | 深圳市优乐学科技有限公司 | Online education auxiliary scoring method and system |
-
2020
- 2020-12-22 CN CN202011530619.5A patent/CN112597888B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011054200A (en) * | 2010-11-11 | 2011-03-17 | Fuji Electric Systems Co Ltd | Neural network learning method |
CN106503669A (en) * | 2016-11-02 | 2017-03-15 | 重庆中科云丛科技有限公司 | A kind of based on the training of multitask deep learning network, recognition methods and system |
CN107103281A (en) * | 2017-03-10 | 2017-08-29 | 中山大学 | Face identification method based on aggregation Damage degree metric learning |
CN109543606A (en) * | 2018-11-22 | 2019-03-29 | 中山大学 | A kind of face identification method that attention mechanism is added |
CN112101074A (en) * | 2019-06-18 | 2020-12-18 | 深圳市优乐学科技有限公司 | Online education auxiliary scoring method and system |
CN110532920A (en) * | 2019-08-21 | 2019-12-03 | 长江大学 | Smallest number data set face identification method based on FaceNet method |
CN110956082A (en) * | 2019-10-17 | 2020-04-03 | 江苏科技大学 | Face key point detection method and detection system based on deep learning |
CN111178242A (en) * | 2019-12-27 | 2020-05-19 | 上海掌学教育科技有限公司 | Student facial expression recognition method and system for online education |
CN111368830A (en) * | 2020-03-03 | 2020-07-03 | 西北工业大学 | License plate detection and identification method based on multi-video frame information and nuclear phase light filtering algorithm |
CN111539370A (en) * | 2020-04-30 | 2020-08-14 | 华中科技大学 | Image pedestrian re-identification method and system based on multi-attention joint learning |
CN111563476A (en) * | 2020-05-18 | 2020-08-21 | 哈尔滨理工大学 | Face recognition method based on deep learning |
Non-Patent Citations (2)
Title |
---|
Zhang Kun; Zhang Dongping; Yang Li: "Research on face recognition algorithms with sample augmentation", Journal of China University of Metrology, no. 02 *
Fang Shuya; Liu Shouyin: "Perception-free classroom attendance method based on student human-body detection", Journal of Computer Applications, no. 09 *
Also Published As
Publication number | Publication date |
---|---|
CN112597888B (en) | 2024-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11010600B2 (en) | Face emotion recognition method based on dual-stream convolutional neural network | |
Fan et al. | Inferring shared attention in social scene videos | |
CN110889672B (en) | Student card punching and class taking state detection system based on deep learning | |
Li et al. | Sign language recognition based on computer vision | |
Yang et al. | Facs3d-net: 3d convolution based spatiotemporal representation for action unit detection | |
Abdulkader et al. | Optimizing student engagement in edge-based online learning with advanced analytics | |
CN112541529A (en) | Expression and posture fusion bimodal teaching evaluation method, device and storage medium | |
Vasudevan et al. | Introduction and analysis of an event-based sign language dataset | |
CN109241830A (en) | It listens to the teacher method for detecting abnormality in the classroom for generating confrontation network based on illumination | |
CN111666829A (en) | Multi-scene multi-subject identity behavior emotion recognition analysis method and intelligent supervision system | |
Wu et al. | Pose-Guided Inflated 3D ConvNet for action recognition in videos | |
CN112102129A (en) | Intelligent examination cheating identification system based on student terminal data processing | |
Gündüz et al. | Turkish sign language recognition based on multistream data fusion | |
Islam et al. | A deep Spatio-temporal network for vision-based sexual harassment detection | |
Zhao et al. | Human action recognition based on improved fusion attention CNN and RNN | |
CN113723277B (en) | Learning intention monitoring method and system integrated with multi-mode visual information | |
CN111626197B (en) | Recognition method based on human behavior recognition network model | |
Chen et al. | Intelligent Recognition of Physical Education Teachers' Behaviors Using Kinect Sensors and Machine Learning. | |
CN112597888B (en) | Online education scene student attention recognition method aiming at CPU operation optimization | |
Huang et al. | Research on learning state based on students’ attitude and emotion in class learning | |
Ejaz et al. | Real-time analysis of student's behavioural engagement in digital smart classrooms using fog computing and IoT devices | |
Mai et al. | Video-based emotion recognition in the wild for online education systems | |
CN115719497A (en) | Student concentration degree identification method and system | |
CN110427920B (en) | Real-time pedestrian analysis method oriented to monitoring environment | |
Agarwal et al. | Semi-Supervised Learning to Perceive Children's Affective States in a Tablet Tutor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |