CN111353363A - Teaching effect detection method and device and electronic equipment
- Publication number: CN111353363A
- Application number: CN201910765175.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V40/20 — Recognition of biometric, human-related or animal-related patterns in image or video data: movements or behaviour, e.g. gesture recognition
- G06Q50/205 — ICT specially adapted for education: education administration or guidance
- G06V40/161 — Human faces: detection; localisation; normalisation
- G06V40/172 — Human faces: classification, e.g. identification
- G06V40/174 — Human faces: facial expression recognition
Abstract
The invention discloses a teaching effect detection method and device and electronic equipment, wherein the teaching effect detection method comprises the following steps: acquiring image information including at least one detection object; processing the image information to determine the identity information of all detection objects; processing the image information to determine the current behaviors of all detection objects, and determining the behavior scores corresponding to all detection objects according to their current behaviors; processing the image information to determine the current expressions of all detection objects, and determining the expression scores corresponding to all detection objects according to their current expressions; and determining a teaching effect detection result according to the behavior scores and expression scores of all detection objects. The invention can detect and evaluate the classroom teaching effect according to students' classroom reactions, with objective and accurate results.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a teaching effect detection method and device and electronic equipment.
Background
At present, a teacher's classroom teaching effect is generally evaluated comprehensively through students' classroom reactions, students' examination scores, after-school questionnaires, and the like. Examination scores and questionnaire surveys are subjective and cannot truly and objectively reflect the classroom teaching effect in full. Students' classroom reactions are real and objective, but because the number of students is large, a teacher cannot attend to everyone's reactions.
Disclosure of Invention
In view of the above, the present invention provides a teaching effect detection method and apparatus, and an electronic device, which can detect the classroom teaching effect according to the behaviors and expressions of students in a classroom.
Based on the above purpose, the invention provides a teaching effect detection method, which comprises the following steps:
acquiring image information including at least one detection object;
processing the image information to determine the identity information of all detection objects;
processing the image information, determining the current behaviors of all detection objects, and determining the behavior scores corresponding to all detection objects according to the current behaviors of all detection objects;
processing the image information, determining the current expressions of all the detection objects, and determining the expression scores corresponding to all the detection objects according to the current expressions of all the detection objects;
and determining a teaching effect detection result according to the behavior scores and the expression scores of all the detection objects.
Optionally, processing the image information to determine the identity information of all detection objects includes:
generating a grid-shaped seat table divided according to the seat position of each detection object in advance, wherein each grid in the seat table comprises a human face sample and basic information of the seat position;
performing face recognition processing on the image information and determining the positions of all faces in the image information; dividing the image information according to the positions of all faces to generate a grid-shaped position table, wherein each grid in the position table comprises the face information of one detection object; matching each grid in the position table against the corresponding grid in the seat table, and judging the degree of match between the face information in the grid and the face sample; and if the degree of match reaches a preset level, considering the face information consistent with the face sample and determining the identity information of that face according to the basic information corresponding to the face sample.
Optionally, the method further includes:
storing curriculum schedule information, wherein the curriculum schedule information comprises an image acquisition device identifier, a course name, the name of the lecturing teacher, the class start time, and the class end time;
and when the class start time is reached, sending a start instruction to the image acquisition device having the image acquisition device identifier, and when the class end time is reached, sending a stop instruction to the image acquisition device having the image acquisition device identifier.
Optionally, when the detection result indicates that the teaching effect is poor, a prompt message is sent to a touch all-in-one machine in the classroom or to the teacher's mobile terminal.
The embodiment of the present invention further provides a device for detecting teaching effects, including:
an information acquisition module for acquiring image information including at least one detection object;
the identity recognition module is used for processing the image information and determining the identity information of all the detection objects;
the behavior recognition module is used for processing the image information, determining the current behaviors of all the detection objects and determining the behavior scores corresponding to all the detection objects according to the current behaviors of all the detection objects;
the expression recognition module is used for processing the image information, determining the current expressions of all the detection objects, and determining the expression scores corresponding to all the detection objects according to the current expressions of all the detection objects;
and the effect detection module is used for determining a teaching effect detection result according to the behavior scores and the expression scores of all the detection objects.
Optionally, the identity recognition module processing the image information to determine the identity information of all detection objects includes:
generating a grid-shaped seat table divided according to the seat position of each detection object in advance, wherein each grid in the seat table comprises a human face sample and basic information of the seat position;
performing face recognition processing on the image information and determining the positions of all faces in the image information; dividing the image information according to the positions of all faces to generate a grid-shaped position table, wherein each grid in the position table comprises the face information of one detection object; matching each grid in the position table against the corresponding grid in the seat table, and judging the degree of match between the face information in the grid and the face sample; and if the degree of match reaches a preset level, considering the face information consistent with the face sample and determining the identity information of that face according to the basic information corresponding to the face sample.
Optionally, the apparatus further comprises:
the system comprises a storage module, a display module and a display module, wherein the storage module is used for storing preset curriculum schedule information, and the curriculum schedule information comprises an image acquisition equipment identifier, a curriculum name, a name of a teacher giving lessons, the time of giving lessons and the time of leaving lessons;
and a message sending module, used for sending a start instruction to the image acquisition device having the image acquisition device identifier when the class start time is reached, and sending a stop instruction to that device when the class end time is reached.
Optionally, the message sending module is configured to send a prompt message to a touch all-in-one machine in the classroom or to the teacher's mobile terminal when the detection result indicates that the teaching effect is poor.
The embodiment of the invention also provides electronic equipment comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the teaching effect detection method when executing the program.
As can be seen from the above, the teaching effect detection method and device and the electronic equipment provided by the invention acquire image information of all detection objects during a class, recognize and process the image information, determine the identity information of each detection object, obtain each detection object's behavior and expression, determine each detection object's behavior score and expression score, and determine the detection result of the classroom teaching effect according to the total scores of all detection objects. The invention can detect and evaluate the classroom teaching effect according to students' classroom reactions with objective and accurate results, and differentiated teaching plans and targets can be formulated according to the recognition results of students' classroom reactions.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
fig. 2 is a block diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share a name but are not identical. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments, and this is not repeated in the following embodiments.
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention. As shown in the figure, the teaching effect detection method provided by the embodiment of the invention comprises the following steps:
s10: acquiring image information including at least one detection object;
in some embodiments, the server is preconfigured with schedule information for each class, where the schedule information includes an image acquisition device identifier, a course name, the name of the lecturing teacher, the class start time, the class end time, and the like. When the class start time is reached, the server sends a start instruction to the image acquisition device having the image acquisition device identifier; when the class end time is reached, the server sends a stop instruction to that device. The image acquisition device starts acquiring video information on receiving the start instruction and stops on receiving the stop instruction, then transmits the acquired video information to the server. The server extracts video frame images from the video information at a preset interval and uses them as the image information for subsequent recognition processing. For example, one frame of image information is extracted from the video information every 30 seconds.
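As a concrete illustration of this frame-sampling step, the following is a minimal sketch assuming OpenCV (`cv2`) as the video backend; the function name, the 30-second interval, and the file names are illustrative assumptions, not part of the patent.

```python
import cv2

def extract_frames(video_path, interval_seconds=30.0):
    """Yield one frame of image information from the lesson video every interval_seconds."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * interval_seconds))   # frames between two samples
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:                               # end of the recorded video
            break
        if index % step == 0:
            yield frame                          # hand off to the recognition stages
        index += 1
    capture.release()

if __name__ == "__main__":
    # Hypothetical usage: sample a recorded lesson and store the sampled frames.
    for i, frame in enumerate(extract_frames("lesson.mp4")):
        cv2.imwrite(f"frame_{i:04d}.jpg", frame)
```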
In some embodiments, each classroom may be configured with a recording and broadcasting device. The server sends the schedule information of each class to the recording and broadcasting device of the corresponding classroom; when the class start time of a certain class is reached, the recording and broadcasting device sends a start instruction to the image acquisition device of that classroom, and when the class end time is reached it sends a stop instruction to the same device. The image acquisition device transmits the acquired video information to the recording and broadcasting device, which extracts video frame images at a preset interval as the image information for subsequent recognition processing.
Optionally, in a classroom teaching scenario, two pan-tilt cameras may be installed at the front of the classroom and two at the back. By adjusting each camera's shooting angle, focal length, shooting range, and the like during class time, the two front cameras acquire video information of all students seated in the front rows and the two back cameras acquire video information of all students seated in the back rows, so that clear image information covering all students can be acquired.
Optionally, the image acquisition device may directly acquire the image information and send it to the server, with the server performing the subsequent processing.
S11: processing the image information to determine the identity information of all detection objects;
in a school application scenario, because the number of students in each classroom is fixed and each student's seat is fixed, a seat table covering all students can be determined in advance as follows: for a specific class, face samples and basic information of all students are entered, each student's face sample and basic information are associated with that student's seat position, and a grid-shaped seat table divided according to the students' seat positions is generated, wherein each grid in the seat table contains the face sample and basic information (name, gender, student number, class, school, and the like) of the student at that seat.
During class, the image information of all students in the classroom acquired by the image acquisition device is obtained and subjected to face recognition processing to determine the positions of all faces in the image information. The image information is then divided according to the face positions to generate a grid-shaped position table divided by each student's face position, wherein each grid in the position table contains the face information of one student. Each grid in the position table is matched against the corresponding grid in the seat table, and the degree of match between the face information in the grid and the face sample is judged. If the degree of match reaches a preset level, the face information is considered consistent with the face sample, and the identity information of that face is determined from the basic information corresponding to the face sample; that is, the identity information of the detection object in that grid is determined. If the degree of match is not reached, or no face information is detected in a grid, the attendance status of the corresponding student can be further judged. Using this method, the identity information corresponding to all detection objects is determined.
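A minimal sketch of the grid-matching logic just described follows, assuming some face-recognition backend that returns a similarity in [0, 1]; the data classes, the `similarity` callback, and the 0.8 threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class SeatCell:
    face_sample: bytes   # enrolled face sample for this seat
    basic_info: dict     # name, gender, student number, class, school, ...

@dataclass
class PositionCell:
    face_info: Optional[bytes]  # face crop detected in this grid, or None

def match_grids(seat_table: List[SeatCell],
                position_table: List[PositionCell],
                similarity: Callable[[bytes, bytes], float],
                threshold: float = 0.8) -> Tuple[List[dict], List[dict]]:
    """Match each grid of the position table against the same grid of the seat table."""
    identified, to_check = [], []
    for seat, pos in zip(seat_table, position_table):
        if pos.face_info is not None and similarity(seat.face_sample, pos.face_info) >= threshold:
            identified.append(seat.basic_info)  # identity confirmed for this grid
        else:
            to_check.append(seat.basic_info)    # no face, or match below threshold:
                                                # follow up on attendance status
    return identified, to_check
```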
S12: processing the image information, determining the current behaviors of all detection objects, and determining the behavior scores corresponding to all detection objects according to the current behaviors of all detection objects;
in some embodiments, the image information is recognized using a behavior recognition model to determine the current behavior of each detection object, such as raising a hand, standing up, speaking, or lying on the desk.
Based on the position table, the current behavior of the student within each grid is recognized. For the student in each grid, the behavior recognition model identifies the current behavior as follows: key parts in the grid, including the head, hands, and shoulders, are detected; the positions of these key parts are tracked; and the action is determined from their position changes. For example, if a hand moves from another position to the set hand-raising position, a hand-raising behavior is determined; if the positions of the head, hands, and shoulders reach the set standing positions, a standing behavior is determined; if the mouth opens and closes continuously within a certain time, a speaking behavior is determined; and lying on the desk is determined from the relative positions of the head, the hands, and the desk. The behavior recognition model identifies the current behaviors of the students in all grids of the position table.
Corresponding behavior scores are set for different behaviors: behaviors showing positive classroom engagement are assigned a first score, behaviors showing negative classroom engagement are assigned a second score, and the first score is higher than the second score. For example, the behavior score for raising a hand is 30 points, the behavior score for standing up to speak (regarded as speaking in class) is 30 points, and the behavior score for lying on the desk is 0 points. After the current behaviors of the students in all grids of the position table are recognized, the behavior score corresponding to each student is determined according to the set behavior scores.
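A rule-based mapping from key-part positions to a behavior label and then to a score could look like the sketch below. The keypoint format (normalized coordinates, smaller y meaning higher in the frame), the numeric thresholds, and the default score for unlisted behaviors are assumptions; only the example point values (30/30/0) come from the text above.

```python
BEHAVIOR_SCORES = {"raise_hand": 30, "stand_speak": 30, "lie_on_desk": 0, "other": 10}

def classify_behavior(kp: dict, mouth_open_ratio: float) -> str:
    """kp holds normalized y-coordinates of tracked key parts (0 = top of frame)."""
    if kp["hand_y"] < kp["head_y"]:                 # hand has moved above the head
        return "raise_hand"
    standing = kp["shoulder_y"] < kp["standing_shoulder_y"]  # shoulders at set standing height
    speaking = mouth_open_ratio > 0.5               # mouth keeps opening/closing over time
    if standing and speaking:                       # standing up to speak = speaking in class
        return "stand_speak"
    if kp["head_y"] > kp["desk_y"]:                 # head at or below desk level
        return "lie_on_desk"
    return "other"

def behavior_score(kp: dict, mouth_open_ratio: float) -> int:
    """Look up the preset score for the classified behavior."""
    return BEHAVIOR_SCORES[classify_behavior(kp, mouth_open_ratio)]
```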
S13: processing the image information, determining the current expressions of all the detection objects, and determining the expression scores corresponding to all the detection objects according to the current expressions of all the detection objects;
in some embodiments, the image information is recognized using an expression recognition model to determine the current expression of each detection object, such as happiness, disgust, anger, anxiety, surprise, confusion, or no emotion.
Based on the position table, the current expression of the student within each grid is recognized. For the student in each grid, the expression recognition model identifies the current expression as follows: the face region in the grid is identified; key parts in the face region, including the eyes, nose, mouth, and eyebrows, are identified; the positions of these key parts are tracked; and the facial expression is determined from their position changes. For example, mouth corners raised by a certain angle are judged as happiness; tightly pressed lips are judged as anger; eyebrows drawn close together at a certain angle are judged as frowning, which, combined with the mouth action, can be judged as a confused or disgusted expression; flared nostrils with wide-open eyes can be judged as surprise, or, combined with a frown, as an angry expression; and from the positional relation between the hand and the face, a hand propping up the cheek can be judged as confusion, and so on. The expression recognition model identifies the current expressions of the students in all grids of the position table.
Corresponding expression scores are set for different expressions: positive classroom expressions are assigned a third score, negative classroom expressions are assigned a fourth score, and the third score is higher than the fourth score. For example, the expression score for happiness is 35 points, the expression score for confusion is 10 points, and the expression scores for anger and disgust are 0 points. After the current expressions of the students in all grids of the position table are recognized, the expression score corresponding to each student is determined according to the set expression scores.
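Correspondingly, the expression-to-score step reduces to a lookup keyed by the recognized label. The label names and the default value for unlisted expressions below are assumptions; the quoted point values (35/10/0) follow the example above.

```python
EXPRESSION_SCORES = {
    "happy": 35,      # positive classroom expression (third score)
    "confused": 10,
    "angry": 0,       # negative classroom expression (fourth score)
    "disgusted": 0,
}

def expression_score(expression: str, default: int = 5) -> int:
    """Return the preset score for a recognized expression label."""
    return EXPRESSION_SCORES.get(expression, default)
```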
S14: and determining a teaching effect detection result according to the behavior scores and the expression scores of all the detection objects.
Following steps S10-S13, the total score of each student over a certain period (e.g., one class, one week, one month, or one school term) can be obtained through acquisition and recognition processing; the total score is the sum of the behavior score and the expression score. The students' feedback on the teaching effect of a given subject is determined from each student's total score, yielding the teaching effect detection result. Optionally, several score thresholds may be set, an average score calculated from the total scores of all students, and the teaching effect detection result determined from the relation between the average score and each threshold.
For example, video information of all students in a Chinese class is acquired and processed into multiple frames of image information; the image information is recognized to obtain the behaviors and expressions of all students; the behavior scores and expression scores of all students in the Chinese class are determined according to the preset scores; and the total score of each student for the Chinese class is obtained. An average score is calculated from the students' total scores: if the average score is greater than or equal to a first threshold, the teaching effect of the Chinese class is judged to be good; if it is greater than or equal to a second threshold and less than the first threshold, the teaching effect is judged to be average; and if it is greater than or equal to a third threshold and less than the second threshold, the teaching effect is judged to be poor.
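Putting the two scores together, a minimal sketch of the aggregation and thresholding step might look as follows; the threshold values are placeholders, since the text only fixes their ordering (first > second > third).

```python
from statistics import mean

def teaching_effect(per_student_totals, first=50, second=30):
    """Map the class-average total score to a teaching-effect verdict.

    The text defines "poor" for averages between a third threshold and the
    second one; this sketch folds everything below the second threshold into
    "poor" for simplicity.
    """
    avg = mean(per_student_totals)
    if avg >= first:
        return "good"
    if avg >= second:
        return "average"
    return "poor"

# Each total is one student's behavior score plus expression score.
totals = [30 + 35, 0 + 0, 30 + 10, 10 + 5]   # mean = 30.0
print(teaching_effect(totals))               # -> "average"
```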
During the course, if the teaching effect is judged to be poor according to the teaching effect detection result, a prompt is used to remind the teacher to adjust the teaching method. Specifically, during class, when the server judges from the total scores of all students that the classroom teaching effect is poor, the server sends a prompt message to a touch all-in-one machine in the classroom or to a mobile terminal used by the teacher, and the touch all-in-one machine or mobile terminal prompts the teacher, by a prompt box, a sound prompt, or the like, to pay attention to the students' reactions and adjust the teaching method in time. Alternatively, when the recording and broadcasting device in the classroom judges from the total scores of all students that the classroom teaching effect is poor, it sends the prompt message to the touch all-in-one machine in the classroom or to the teacher's mobile terminal, which prompts the teacher in the same way. The touch all-in-one machine may be an interactive display screen, an electronic blackboard, an electronic whiteboard, a smart interactive large screen, or a smart interactive tablet.
Students differ in personality, and subject bias also exists (a student may be stronger in some subjects than in others), so in practical applications each student's total score can be combined with subject examination scores. Further, for a particular subject, a comprehensive score for each student may be determined from that student's total score over a period of time combined with the student's examination score. By using each student's comprehensive score in each subject, the student's learning situation in each subject can be analyzed, the student's weak subjects can be identified, and targeted coaching and differentiated teaching targets can be formulated.
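One plausible way to blend the classroom total with examination results, as the preceding paragraph suggests, is a weighted sum; the weights, normalization, and weak-subject threshold below are illustrative assumptions only.

```python
def comprehensive_score(classroom_total: float, exam_score: float,
                        w_classroom: float = 0.6, w_exam: float = 0.4) -> float:
    """Blend a student's classroom total for a subject with the exam score."""
    return w_classroom * classroom_total + w_exam * exam_score

# Flag hypothetical weak subjects for targeted coaching (threshold assumed).
scores = {"Chinese": comprehensive_score(62, 85), "Math": comprehensive_score(20, 45)}
weak_subjects = [subject for subject, value in scores.items() if value < 60]
print(weak_subjects)  # -> ['Math']
```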
Fig. 2 is a block diagram of an apparatus according to an embodiment of the present invention. As shown in the drawings, the teaching effect detection device provided by the embodiment of the present invention includes:
an information acquisition module for acquiring image information including at least one detection object;
the identity recognition module is used for processing the image information and determining the identity information of all the detection objects;
the behavior recognition module is used for processing the image information, determining the current behaviors of all the detection objects and determining the behavior scores corresponding to all the detection objects according to the current behaviors of all the detection objects;
the expression recognition module is used for processing the image information, determining the current expressions of all the detection objects, and determining the expression scores corresponding to all the detection objects according to the current expressions of all the detection objects;
and the effect detection module is used for determining a teaching effect detection result according to the behavior scores and the expression scores of all the detection objects.
In some embodiments, the teaching effect detection apparatus further includes:
the storage module is used for storing preset curriculum schedule information; the curriculum schedule information comprises an image acquisition device identifier, a course name, the name of the lecturing teacher, the class start time, the class end time, and the like;
and the message sending module is used for sending a start instruction to the image acquisition device when the class start time is reached, and a stop instruction when the class end time is reached.
The server is preconfigured with the schedule information of each class, including an image acquisition device identifier, a course name, the name of the lecturing teacher, the class start time, the class end time, and the like. When the class start time of a certain class is reached, the server sends a start instruction to the image acquisition device having the image acquisition device identifier; when the class end time is reached, the server sends a stop instruction to that device. The image acquisition device starts acquiring video information on receiving the start instruction and stops on receiving the stop instruction, then transmits the acquired video information to the server. The server receives the video information and extracts video frame images at a preset interval as the image information for subsequent recognition processing. For example, one frame of image information is extracted from the video information every 30 seconds.
In some embodiments, each classroom may be configured with a recording and broadcasting device. The server sends the schedule information of each class to the recording and broadcasting device of the corresponding classroom; when the class start time of a certain class is reached, the recording and broadcasting device sends a start instruction to the image acquisition device of that classroom, and when the class end time is reached it sends a stop instruction to the same device. The image acquisition device transmits the acquired video information to the recording and broadcasting device, which receives it and extracts video frame images at a preset interval as the image information for subsequent recognition processing.
Optionally, in a classroom teaching scenario, two pan-tilt cameras may be installed at the front of the classroom and two at the back. By adjusting each camera's shooting angle, focal length, shooting range, and the like during class time, the two front cameras acquire video information of all students seated in the front rows and the two back cameras acquire video information of all students seated in the back rows, so that clear image information covering all students can be acquired.
In a school application scenario, because the number of students in each classroom is fixed and each student's seat is fixed, a seat table covering all students can be determined in advance as follows: for a specific class, face samples and basic information of all students are entered, each student's face sample and basic information are associated with that student's seat position, and a grid-shaped seat table divided according to the students' seat positions is generated, wherein each grid in the seat table contains the face sample and basic information (name, gender, student number, class, school, and the like) of the student at that seat.
During class, the image information of all students in the classroom acquired by the image acquisition device is obtained, and the identity recognition module performs face recognition processing on the image information to determine the positions of all faces. The image information is then divided according to the face positions to generate a grid-shaped position table divided by each student's face position, wherein each grid in the position table contains the face information of one student. Each grid in the position table is matched against the corresponding grid in the seat table, and the degree of match between the face information in the grid and the face sample is judged. If the degree of match reaches a preset level, the face information is considered consistent with the face sample, and the identity information of that face is determined from the basic information corresponding to the face sample; that is, the identity information of the detection object in that grid is determined. If the degree of match is not reached, or no face information is detected in a grid, the attendance status of the corresponding student can be further judged. Using this method, the identity information corresponding to all detection objects is determined.
In some embodiments, the behavior recognition module comprises:
the behavior identification submodule is used for recognizing the image information and determining the current behavior of each detection object; recognizable behaviors include, for example, raising a hand, standing up, speaking, and lying on the desk;
and the behavior assignment submodule is used for determining the behavior score of each detection object according to the current behavior of each detection object and the preset behavior scores.
Based on the position table, the current behavior of the student within each grid is recognized. For the student in each grid, the behavior identification submodule identifies the current behavior as follows: key parts in the grid, including the head, hands, and shoulders, are detected; the positions of these key parts are tracked; and the action is determined from their position changes. For example, if a hand moves from another position to the set hand-raising position, a hand-raising behavior is determined; if the positions of the head, hands, and shoulders reach the set standing positions, a standing behavior is determined; if the mouth opens and closes continuously within a certain time, a speaking behavior is determined; and lying on the desk is determined from the relative positions of the head, the hands, and the desk. The behavior recognition model identifies the current behaviors of the students in all grids of the position table.
The behavior assignment submodule determines the behavior score of each detection object according to each detection object's current behavior and the preset behavior scores. Corresponding behavior scores are preset for different behaviors: behaviors showing positive classroom engagement are assigned a first score, behaviors showing negative classroom engagement are assigned a second score, and the first score is higher than the second score. For example, the behavior score for raising a hand is 30 points, the behavior score for standing up to speak (regarded as speaking in class) is 30 points, and the behavior score for lying on the desk is 0 points. After the current behaviors of the students in all grids of the position table are recognized, the behavior score corresponding to each student is determined according to the set behavior scores.
In some embodiments, the expression recognition module comprises:
the expression recognition submodule is used for recognizing the image information and determining the current expression of each detection object; recognizable expressions include, for example, happiness, disgust, anger, anxiety, surprise, confusion, and no emotion;
and the expression assignment submodule is used for determining the expression scores of all the detection objects according to the current expressions of all the detection objects and the preset expression scores.
Based on the position table, the current expression of the student within each grid is recognized. For the student in each grid, the expression recognition submodule identifies the current expression as follows: the face region in the grid is identified; key parts in the face region, including the eyes, nose, mouth, and eyebrows, are identified; the positions of these key parts are tracked; and the facial expression is determined from their position changes. For example, mouth corners raised by a certain angle are judged as happiness; tightly pressed lips are judged as anger; eyebrows drawn close together at a certain angle are judged as frowning, which, combined with the mouth action, can be judged as a confused or disgusted expression; flared nostrils with wide-open eyes can be judged as surprise, or, combined with a frown, as an angry expression; and from the positional relation between the hand and the face, a hand propping up the cheek can be judged as confusion, and so on. The expression recognition model identifies the current expressions of the students in all grids of the position table.
The expression assignment submodule determines the expression score of each detection object according to each detection object's current expression and the preset expression scores. Corresponding expression scores are preset for different expressions: positive classroom expressions are assigned a third score, negative classroom expressions are assigned a fourth score, and the third score is higher than the fourth score. For example, the expression score for happiness is 35 points, the expression score for confusion is 10 points, and the expression scores for anger and disgust are 0 points. After the current expressions of the students in all grids of the position table are recognized, the expression score corresponding to each student is determined according to the set expression scores.
Through the information acquisition module, the behavior recognition module, and the expression recognition module, the total score of each student over a certain period (e.g., one class, one week, one month, or one school term) can be acquired and recognized; the total score is the sum of the behavior score and the expression score. The effect detection module judges the students' feedback on the teaching effect of a given subject from each student's total score and obtains the teaching effect detection result. Optionally, several score thresholds may be set, an average score calculated from the total scores of all students, and the teaching effect detection result determined from the relation between the average score and each threshold.
For example, the information acquisition module acquires video information of all students in a Chinese class and processes it into multiple frames of image information; the behavior recognition module and the expression recognition module recognize the image information to obtain the behaviors and expressions of all students, and the behavior scores and expression scores of all students in the Chinese class are determined according to the preset scores, yielding each student's total score for the class. The effect detection module calculates an average score from the students' total scores: if the average score is greater than or equal to a first threshold, the teaching effect of the Chinese class is judged to be good; if it is greater than or equal to a second threshold and less than the first threshold, the teaching effect is judged to be average; and if it is greater than or equal to a third threshold and less than the second threshold, the teaching effect is judged to be poor.
In some embodiments, the message sending module is further configured to send a prompt message when the detection result of the effect detection module indicates that the teaching effect is poor.
During the course, if the teaching effect is judged to be poor according to the teaching effect detection result, a prompt is used to remind the teacher to adjust the teaching method. Specifically, during class, when the effect detection module judges from the total scores of all students that the classroom teaching effect is poor, the message sending module sends a prompt message to a touch all-in-one machine in the classroom or to a mobile terminal used by the teacher, and the touch all-in-one machine or mobile terminal prompts the teacher, by a prompt box, a sound prompt, or the like, to pay attention to the students' reactions and adjust the teaching method in time.
Students differ in personality, and subject bias also exists (a student may be stronger in some subjects than in others), so in practical applications each student's total score can be combined with subject examination scores. Further, for a particular subject, a comprehensive score for each student may be determined from that student's total score over a period of time combined with the student's examination score. By using each student's comprehensive score in each subject, the student's learning situation in each subject can be analyzed, the student's weak subjects can be identified, and targeted coaching and differentiated teaching plans and targets can be formulated.
Based on the above purpose, the embodiment of the present invention further provides an embodiment of an apparatus for executing the teaching effect detection method. The device comprises:
one or more processors, and a memory.
The apparatus for performing the teaching effect detection method may further include: an input device and an output device.
The processor, memory, input device, and output device may be connected by a bus or other means.
The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the teaching effect detection method in the embodiment of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory, the processor executes the various functional applications and data processing of the server, thereby implementing the teaching effect detection method of the above method embodiment.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function, and the data storage area may store data created through use of the apparatus performing the teaching effect detection method, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memories may be connected over a network to the apparatus performing the teaching effect detection method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device may receive input numeric or character information and generate key signal inputs related to user settings and function control of the device performing the teaching effect detection method. The output device may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the one or more processors, perform the teaching effect detection method of any of the method embodiments described above. The technical effect of the embodiment of the device for executing the teaching effect detection method is the same as or similar to that of any method embodiment.
The embodiment of the invention also provides a non-transitory computer storage medium storing computer-executable instructions that can execute the teaching effect detection method in any of the method embodiments above. Embodiments of the non-transitory computer storage medium have the same or similar technical effects as any of the method embodiments described above.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes in the methods of the above embodiments may be implemented by a computer program that can be stored in a computer-readable storage medium and that, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. The technical effect of the embodiment of the computer program is the same as or similar to that of any of the method embodiments described above.
Furthermore, the apparatuses, devices, etc. described in the present disclosure may be various electronic terminal devices, such as a mobile phone, a Personal Digital Assistant (PDA), a tablet computer (PAD), a smart television, etc., and may also be large terminal devices, such as a server, etc., and therefore the scope of protection of the present disclosure should not be limited to a specific type of apparatus, device. The client disclosed by the present disclosure may be applied to any one of the above electronic terminal devices in the form of electronic hardware, computer software, or a combination of both.
Furthermore, the method according to the present disclosure may also be implemented as a computer program executed by a CPU, which may be stored in a computer-readable storage medium. The computer program, when executed by the CPU, performs the above-described functions defined in the method of the present disclosure.
Further, the above method steps and system elements may also be implemented using a controller and a computer readable storage medium for storing a computer program for causing the controller to implement the functions of the above steps or elements.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) described herein can be volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
The apparatus of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is exemplary only and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the idea of the invention, features of the above embodiments, or of different embodiments, may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist that are not provided in detail for the sake of brevity.
In addition, well known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures for simplicity of illustration and discussion, and so as not to obscure the invention. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the invention, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the present invention is to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that the invention can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The embodiments of the invention are intended to embrace all such alternatives, modifications, and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (10)
1. A teaching effect detection method is characterized by comprising the following steps:
acquiring image information including at least one detection object;
processing the image information to determine the identity information of all detection objects;
processing the image information, determining the current behaviors of all detection objects, and determining the behavior scores corresponding to all detection objects according to the current behaviors of all detection objects;
processing the image information, determining the current expressions of all the detection objects, and determining the expression scores corresponding to all the detection objects according to the current expressions of all the detection objects;
and determining a teaching effect detection result according to the behavior scores and the expression scores of all the detection objects.
2. The method of claim 1, wherein processing the image information to determine the identity information of all detection objects comprises:
generating a grid-shaped seat table divided according to the seat position of each detection object in advance, wherein each grid in the seat table comprises a human face sample and basic information of the seat position;
performing face recognition processing on the image information and determining the positions of all faces in the image information; dividing the image information according to the positions of all faces to generate a grid-shaped position table, wherein each grid in the position table comprises the face information of one detection object; matching each grid in the position table against the corresponding grid in the seat table, and judging the degree of match between the face information in the grid and the face sample; and if the degree of match reaches a preset level, considering the face information consistent with the face sample and determining the identity information of the face according to the basic information corresponding to the face sample.
3. The method of claim 1, further comprising:
storing curriculum schedule information, wherein the curriculum schedule information comprises an image acquisition device identifier, a course name, the name of the lecturing teacher, the class start time, and the class end time;
and when the class start time is reached, sending a start instruction to the image acquisition device having the image acquisition device identifier, and when the class end time is reached, sending a stop instruction to the image acquisition device having the image acquisition device identifier.
4. The method of claim 1, wherein, when the detection result indicates a poor teaching effect, a prompt message is sent to a touch all-in-one machine in the classroom or to the teacher's mobile terminal.
5. The method of claim 1, wherein the teaching effect detection result is determined according to the behavior score and the expression score of the detection object, in combination with the examination results of the detection object.
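One plausible way to fold examination results into the classroom scores of claim 5 is a weighted average; the disclosure does not fix a formula, so the 0.7/0.3 weights below are illustrative assumptions only.

```python
def combined_effect(behavior_score: float, expression_score: float,
                    exam_score: float,
                    classroom_weight: float = 0.7,
                    exam_weight: float = 0.3) -> float:
    """Blend in-class scores with examination results into one value.
    All weights are assumed, not taken from the disclosure."""
    classroom = (behavior_score + expression_score) / 2
    return classroom_weight * classroom + exam_weight * exam_score
```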
6. A teaching effect detection apparatus, characterized by comprising:
an information acquisition module for acquiring image information including at least one detection object;
an identity recognition module for processing the image information and determining the identity information of all detection objects;
a behavior recognition module for processing the image information, determining the current behaviors of all detection objects, and determining the behavior scores corresponding to all detection objects according to the current behaviors of all detection objects;
an expression recognition module for processing the image information, determining the current expressions of all detection objects, and determining the expression scores corresponding to all detection objects according to the current expressions of all detection objects;
and an effect detection module for determining a teaching effect detection result according to the behavior scores and the expression scores of all detection objects.
7. The apparatus of claim 6, wherein the processing of the image information by the identity recognition module to determine the identity information of all detection objects comprises:
generating, in advance, a grid-shaped seat table divided according to the seat position of each detection object, wherein each cell in the seat table comprises a face sample and the basic information of the seat position;
performing face recognition processing on the image information and determining the positions of all faces in the image information; dividing the image information according to the positions of all faces to generate a grid-shaped position table, wherein each cell in the position table comprises the face information of one detection object; matching the position table with each corresponding cell in the seat table and determining the degree of matching between the face information in the corresponding cell and the face sample; and if the degree of matching reaches a preset matching degree, regarding the face information as consistent with the face sample and determining the identity information for the face information according to the basic information corresponding to the face sample.
8. The apparatus of claim 6, further comprising:
the system comprises a storage module, a display module and a display module, wherein the storage module is used for storing preset curriculum schedule information, and the curriculum schedule information comprises an image acquisition equipment identifier, a curriculum name, a name of a teacher giving lessons, the time of giving lessons and the time of leaving lessons;
and a message sending module for sending a start instruction to the image acquisition equipment having the image acquisition equipment identifier when the class start time arrives, and sending a stop instruction to the image acquisition equipment having the image acquisition equipment identifier when the class end time arrives.
9. The apparatus of claim 6, wherein the message sending module is further configured to send a prompt message to a touch all-in-one machine in the classroom or to the teacher's mobile terminal when the detection result indicates a poor teaching effect.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910765175.4A CN111353363A (en) | 2019-08-19 | 2019-08-19 | Teaching effect detection method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111353363A true CN111353363A (en) | 2020-06-30 |
Family
ID=71193939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910765175.4A Pending CN111353363A (en) | 2019-08-19 | 2019-08-19 | Teaching effect detection method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111353363A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150044657A1 (en) * | 2013-08-07 | 2015-02-12 | Xerox Corporation | Video-based teacher assistance |
KR20160044315A (en) * | 2014-10-15 | 2016-04-25 | 한국과학기술연구원 | Analysis system and method for class attitude |
CN108764149A (en) * | 2018-05-29 | 2018-11-06 | 北京中庆现代技术股份有限公司 | A kind of training method for class student faceform |
CN109461104A (en) * | 2018-10-22 | 2019-03-12 | 杭州闪宝科技有限公司 | Classroom monitoring method, device and electronic equipment |
CN109740498A (en) * | 2018-12-28 | 2019-05-10 | 广东新源信息技术有限公司 | A kind of wisdom classroom based on face recognition technology |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931585A (en) * | 2020-07-14 | 2020-11-13 | 东云睿连(武汉)计算技术有限公司 | Classroom concentration degree detection method and device |
CN112308746A (en) * | 2020-09-28 | 2021-02-02 | 北京邮电大学 | Teaching state evaluation method and device and electronic equipment |
CN112819665A (en) * | 2021-01-29 | 2021-05-18 | 上海商汤科技开发有限公司 | Classroom state evaluation method and related device and equipment |
CN112883867A (en) * | 2021-02-09 | 2021-06-01 | 广州汇才创智科技有限公司 | Student online learning evaluation method and system based on image emotion analysis |
CN112990735A (en) * | 2021-03-30 | 2021-06-18 | 东营职业学院 | Classroom quality feedback evaluation method and device based on mathematical teaching |
CN113283383A (en) * | 2021-06-15 | 2021-08-20 | 北京有竹居网络技术有限公司 | Live broadcast behavior recognition method, device, equipment and readable medium |
CN113723284A (en) * | 2021-08-30 | 2021-11-30 | 未鲲(上海)科技服务有限公司 | Information generation method, terminal device and storage medium |
CN114007105A (en) * | 2021-10-20 | 2022-02-01 | 浙江绿城育华教育科技有限公司 | Online course interaction method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111353363A (en) | Teaching effect detection method and device and electronic equipment | |
CN107292271B (en) | Learning monitoring method and device and electronic equipment | |
CN109522815B (en) | Concentration degree evaluation method and device and electronic equipment | |
US10095850B2 (en) | User identity authentication techniques for on-line content or access | |
WO2021232775A1 (en) | Video processing method and apparatus, and electronic device and storage medium | |
WO2021047185A1 (en) | Monitoring method and apparatus based on facial recognition, and storage medium and computer device | |
CN111353366A (en) | Emotion detection method and device and electronic equipment | |
CN109685007B (en) | Eye habit early warning method, user equipment, storage medium and device | |
KR102593624B1 (en) | Online Test System using face contour recognition AI to prevent the cheating behaviour and method thereof | |
US20150262496A1 (en) | Multimedia educational content delivery with identity authentication and related compensation model | |
JP6859640B2 (en) | Information processing equipment, evaluation systems and programs | |
Alburaiki et al. | Mobile based attendance system: face recognition and location detection using machine learning | |
US20240048842A1 (en) | Assisted image capturing methods and apparatuses for pets | |
KR102711511B1 (en) | Online Test System using face contour recognition AI to prevent the cheating behaviour by using a front camera of examinee terminal and an auxiliary camera and method thereof | |
CN113536893A (en) | Online teaching learning concentration degree identification method, device, system and medium | |
KR102581415B1 (en) | UBT system using face contour recognition AI to prevent the cheating behaviour and method thereof | |
CN112418068B (en) | On-line training effect evaluation method, device and equipment based on emotion recognition | |
KR102615709B1 (en) | Online Test System using face contour recognition AI to prevent the cheating behavior by using a front camera of examinee terminal installed audible video recording program and an auxiliary camera and method thereof | |
CN111325082A (en) | Personnel concentration degree analysis method and device | |
CN111402096A (en) | Online teaching quality management method, system, equipment and medium | |
CN111339809A (en) | Classroom behavior analysis method and device and electronic equipment | |
Yi et al. | Real time learning evaluation based on gaze tracking | |
Satre et al. | Online Exam Proctoring System Based on Artificial Intelligence | |
JP6855737B2 (en) | Information processing equipment, evaluation systems and programs | |
KR20230169880A (en) | Cathexis learning system and method using AI in an untact learning based on Web browser |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||