CN112215056A - Information processing method, device, system and storage medium - Google Patents
Information processing method, device, system and storage medium
- Publication number: CN112215056A
- Application number: CN202010832158.0A
- Authority
- CN
- China
- Prior art keywords
- person
- face
- examined
- video frame
- examination
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V 20/40 — Scenes; scene-specific elements in video content
- G06V 40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06V 40/20 — Movements or behaviour, e.g. gesture recognition
Abstract
The embodiments of the present application provide an information processing method, device, system, and storage medium. In these embodiments, a video stream containing facial images of a person to be assessed can be acquired during an assessment; the pose of the person's face during the assessment is determined from the facial images in the video stream; and the person's test-taking behavior is then determined from that facial pose. This realizes automatic monitoring and identification of the test-taking behavior of the person being assessed and helps reduce invigilation costs.
Description
Technical Field
The present application relates to the field of internet technologies, and in particular, to an information processing method, device, system, and storage medium.
Background
With the development of internet technology, many assessment projects have moved from offline to online. In the existing online assessment mode, the assessment organizer typically designates an examination venue, and the person to be assessed completes the assessment on computer equipment at that venue.
During the assessment, the organizer generally assigns invigilators to supervise it, which consumes a large amount of manpower and material resources.
Disclosure of Invention
Aspects of the present application provide an information processing method, device, system, and storage medium, which automatically detect the test-taking behavior of a person during an online assessment and thereby help reduce invigilation costs.
An embodiment of the present application provides an information processing method, including:
during an assessment, acquiring a video stream containing facial images of a person to be assessed;
determining the pose of the person's face during the assessment from the facial images in the video stream;
and determining the person's test-taking behavior from the pose of the person's face during the assessment.
An embodiment of the present application further provides an information processing system, including a terminal device and a server device;
the terminal device is used for acquiring a video stream containing facial images of a person to be assessed during the assessment, and for providing the video stream to the server device;
the server device is used for determining the pose of the person's face during the assessment from the facial images in the video stream, and for determining the person's test-taking behavior from that pose.
An embodiment of the present application further provides a computer device, including a memory and a processor, wherein the memory is used to store a computer program;
the processor is coupled to the memory and executes the computer program to perform the steps of the above information processing method.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-mentioned information processing method.
In the embodiments of the present application, a video stream containing facial images of a person to be assessed can be acquired during an assessment; the pose of the person's face during the assessment is determined from the facial images in the video stream; and the person's test-taking behavior is then determined from that facial pose. This realizes automatic monitoring and identification of test-taking behavior and helps reduce invigilation costs.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1a is a schematic structural diagram of an information processing system according to an embodiment of the present disclosure;
fig. 1b and fig. 1c are schematic diagrams of a face image provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In existing assessment processes, invigilators are generally assigned by the assessment organizer, which consumes a large amount of manpower and material resources. The embodiments of the present application provide a new solution: a video stream containing facial images of a person to be assessed is acquired during the assessment; the pose of the person's face during the assessment is determined from the facial images in the video stream; and the person's test-taking behavior is then determined from that facial pose. This realizes automatic monitoring and identification of test-taking behavior and helps reduce invigilation costs.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
Fig. 1a is a schematic structural diagram of an information processing system according to an embodiment of the present disclosure. As shown in fig. 1a, the system comprises: a terminal device 11 and a server device 12.
In this embodiment, the terminal device 11 is the terminal device used by the person to be assessed. Assessment application software may be installed on it, either as a stand-alone software product or as a functional module within one. The assessment application software can take different forms depending on the form of the terminal device 11. For example, if the terminal device is a mobile phone or tablet computer, the software may be an APP; if it is a desktop or notebook computer, the software may be a client application. The terminal device 11 further includes a camera capable of capturing facial images of the person to be assessed, and may send the video stream of facial images captured by the camera to the server device 12.
In this embodiment, the server device 12 may be a computer device that provides online-assessment services to users, with the capability to carry and guarantee those services, perform data management, respond to service requests from terminal devices, and so on. The server device 12 may be a single server, a cloud server array, or a Virtual Machine (VM) running in a cloud server array. It may also be another computing device with the corresponding service capability, such as a computer running a service program.
The server device 12 and the terminal device 11 may be connected wirelessly or by wire. Optionally, the server device 12 may be communicatively connected to the terminal device 11 through a mobile network, whose network format may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Alternatively, the server device 12 may be communicatively connected to the terminal device 11 through Bluetooth, WiFi, infrared, or the like.
In this embodiment, when the assessment starts, the person to be assessed can log in to the assessment application software on the terminal device 11 and take the assessment using its online-assessment function module. Accordingly, the terminal device 11 may request assessment information from the server device 12 in response to an assessment-start event. Assessment information mainly refers to assessment questions and the like. Optionally, the assessment application software can provide a start-assessment control, which the person to be assessed triggers to obtain the assessment questions; in that case, the assessment-start event is generated by the trigger operation on the start-assessment control.
Accordingly, the server device 12 may issue the assessment information to the terminal device 11 in response to the request. Alternatively, in some embodiments, the assessment organizer may set an assessment start time, and the assessment information is automatically issued to the terminal device 11 when that time is reached.
The terminal device 11 can receive and display the assessment information sent by the server device 12, and the person to be assessed can then answer the questions online. The terminal device 11 may store the response information in response to the person's answering operations. After finishing, the person can submit the answer sheet. Optionally, the assessment application software may provide a submit-answer-sheet control, which the person clicks to submit the response information. Accordingly, the terminal device 11 provides the response information to the server device 12 in response to the submission operation. The server device 12 receives the response information and calculates the assessment result from it. Optionally, the server device 12 may match the response information against the standard answers and compute the person's score, using the score as the assessment result. Alternatively, the server device 12 may determine an assessment level from the score and use the level as the result, or determine whether the person passes the assessment and use the pass/fail outcome as the result.
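As an illustration only, the scoring step might look like the following sketch. The function and field names (grade_assessment, answer_key, pass_score) and the level cut-offs are assumptions, not values taken from this application.

```python
from typing import Dict

def grade_assessment(responses: Dict[str, str], answer_key: Dict[str, str],
                     pass_score: float = 60.0) -> Dict[str, object]:
    """Score a submitted answer sheet against the standard answers (sketch)."""
    per_question = 100.0 / len(answer_key)          # equal weight per question
    score = sum(per_question for q, correct in answer_key.items()
                if responses.get(q) == correct)
    level = "A" if score >= 85 else ("B" if score >= pass_score else "C")
    # Any one of score, level, or pass/fail may serve as the assessment result.
    return {"score": round(score, 1), "level": level, "passed": score >= pass_score}
```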
However, during the assessment the person to be assessed is unsupervised and may engage in improper test-taking behaviors, such as cheating or having a surrogate take the test, which affect the fairness and validity of the assessment. To improve fairness and the validity of the assessment result, and to deter cheating, in this embodiment the terminal device 11 may be connected to a camera. The camera may be built into the terminal device 11 or external to it: if the terminal device 11 is a mobile phone, notebook, or tablet computer, the camera may be integrated in it; if it is a desktop computer or the like, the camera may be external. In this embodiment, both cases are referred to simply as the camera of the terminal device 11. The terminal device 11 may respond to the assessment-start event by turning on its camera and entering the assessment stage, which mainly refers to the online written-test answering stage.
During the assessment, the terminal device 11 collects a video stream of facial images of the person to be assessed; specifically, the camera of the terminal device 11 captures the stream.
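A minimal sketch of this client-side capture loop is shown below, assuming OpenCV is available; assessment_in_progress() and send_frame_to_server() are hypothetical stand-ins for the application's own session flag and uploader.

```python
import cv2  # assumes OpenCV; the two helpers called below are hypothetical

def capture_assessment_video(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)      # open the terminal device's camera
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    try:
        while assessment_in_progress():       # hypothetical: False once the answer sheet is submitted
            ok, frame = cap.read()            # one BGR video frame
            if ok:
                send_frame_to_server(frame)   # hypothetical uploader to the server device
    finally:
        cap.release()
```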
In practice, while answering online with the terminal device 11, the person to be assessed generally faces its camera, so the facial pose captured by the camera is relatively fixed. If the person cheats using another terminal device, a book, or the like, they will tilt their body or head, changing the facial pose captured by the camera. Based on this, in the present embodiment, the terminal device 11 may provide the captured video stream to the server device 12. The server device 12 receives the video stream and determines the pose of the person's face during the assessment from the facial images in it. Further, the server device 12 may determine the person's test-taking behavior from that pose; the test-taking behavior may include whether the person cheated during the assessment.
The information processing system provided by this embodiment can acquire a video stream containing facial images of a person to be assessed during an assessment, determine the pose of the person's face during the assessment from the facial images in the video stream, and then determine the person's test-taking behavior from that pose, realizing automatic monitoring and identification of test-taking behavior and helping reduce invigilation costs.
The embodiments of the present application do not limit how the server device 12 determines the pose of the face of the person to be assessed during the assessment. In some embodiments, the server device 12 may determine the person's facial contour data from the facial images in the video stream, and determine the facial pose during the assessment from that contour data.
Alternatively, the server device 12 may locate facial feature points in the facial images of the video frames in the video stream. The facial feature points may be the positions of the person's facial features (ears, eyes, nose, mouth, etc.) in the video frame. From the coordinates of the facial feature points in each video frame, the facial contour coordinates of the person in that frame are calculated. For example, for a first video frame in the video stream, the server device may locate the facial feature points in its facial image and compute the person's facial contour coordinates in that frame from them. The first video frame may be any video frame in the video stream.
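As a sketch under stated assumptions: the widely used 68-point landmark scheme places the jaw/face contour at indices 0–16, and detect_landmarks() below is a hypothetical stand-in for any facial landmark detector.

```python
import numpy as np

JAWLINE = slice(0, 17)  # in the common 68-point scheme, points 0-16 trace the face contour

def face_contour_coords(frame: np.ndarray) -> np.ndarray:
    """Return the (x, y) face-contour coordinates of the examinee in one video frame."""
    landmarks = detect_landmarks(frame)  # hypothetical: returns a (68, 2) array, or None if no face
    if landmarks is None:
        raise ValueError("no face found in this video frame")
    return np.asarray(landmarks)[JAWLINE]  # contour coordinates in frame pixel space
```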
Further, the server device 12 may calculate, from the facial contour coordinates of the person in the first video frame, the deflection angle of the person's face during the acquisition of that frame relative to the person's frontal face, and use this deflection angle as the facial pose during the acquisition of the first video frame. The frontal (front-face) image of the person to be assessed may be collected by the terminal device 11 before the assessment starts and provided to the server device 12, which can also use it to verify the person's identity. Alternatively, the server device 12 may obtain a certificate image of the person from their registration or enrollment information and identify the frontal face image from that certificate image.
Alternatively, the server device 12 may determine, from the facial contour coordinates of a currently received video frame captured in real time by the terminal device 11, whether both the left and right facial edges of the person appear in that frame. If not, the facial image contained in the frame is determined not to be a frontal image. If so, the coordinates of the person's chin and of the left and right facial edges in the frame can be obtained from the facial contour coordinates, and the coordinates of the person's facial midline in the frame determined from the chin coordinates. As shown in fig. 1b and 1c, the facial midline in the currently received video frame is the straight line through the person's chin perpendicular to the chin edge, i.e., perpendicular to the horizontal direction of the frame. The server device 12 may then determine whether the facial image in the frame is a frontal image from the coordinates of the left and right facial edges and of the facial midline. Optionally, the server device 12 judges whether the left and right facial edges are symmetric about the facial midline: if they are, the facial image in the frame is a frontal image; if not, it is not.
Alternatively, the server device 12 may calculate, from the coordinates of the left and right facial edges and of the facial midline in the currently received video frame, the angle each facial edge makes with the midline. If the difference between the two angles is less than or equal to a set angle-difference threshold, the facial image in the frame is determined to be a frontal image; if the difference exceeds the threshold, it is not. The angle between a facial edge and the facial midline may be taken as the angle between the midline and the tangent to the facial edge, the tangent point being where the edge intersects the midline. For example, as shown in fig. 1b and 1c, the facial edges make angles θ1 and θ2 with the facial midline.
Further, the server device 12 may obtain the coordinates of the person's chin and facial edges in the first video frame from the facial contour coordinates, determine the coordinates of the facial midline from the chin, and calculate the angles between the facial edges and the midline in the first video frame. The server device 12 may then calculate, from the edge-midline angles in the frontal face image and the edge-midline angles in the first video frame, the deflection angle of the person's face during the acquisition of the first video frame relative to their frontal face. For example, as shown in figs. 1b and 1c, the deflection angle is |θ1 − θ2|. In this embodiment, this deflection angle serves as the pose of the person's face during the acquisition of the first video frame.
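Reading the figures this way, a per-frame computation of θ1, θ2 and the deflection |θ1 − θ2| might look like the following sketch; the least-squares tangent approximation and all names are assumptions.

```python
import numpy as np

def edge_midline_angle(edge_points: np.ndarray) -> float:
    """Angle in degrees between one facial edge and the vertical facial midline.

    The edge's tangent is approximated by a least-squares line through its
    contour points; since the midline is vertical in the frame, the angle is
    measured against the image's y-axis.
    """
    x, y = edge_points[:, 0], edge_points[:, 1]
    dx_dy = np.polyfit(y, x, 1)[0]               # slope of x as a function of y
    return float(np.degrees(np.arctan(abs(dx_dy))))

def deflection_angle(left_edge: np.ndarray, right_edge: np.ndarray) -> float:
    """|theta1 - theta2| as in figs. 1b/1c: zero for a symmetric (frontal) pose."""
    theta1 = edge_midline_angle(left_edge)
    theta2 = edge_midline_angle(right_edge)
    return abs(theta1 - theta2)
```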
Further, in a real scene the person to be assessed may make normal, small body movements during the assessment, such as briefly shaking, shifting, or turning the head, so that the facial image in a captured video frame is not a frontal image. Directly judging such deflection as cheating would cause misjudgments of test-taking behavior. Therefore, in this embodiment, when determining the person's test-taking behavior, the server device 12 may obtain the facial deflection angles over M consecutive video frames, where M ≥ 2 and M is an integer. The specific value of M can be determined from the time a person normally takes to move the neck and the capture frame rate of the terminal device 11; preferably, M ≥ 3. The deflection angle for each frame is the deflection of the person's face, relative to their frontal face, during the acquisition of that frame.
Further, the server device 12 may judge whether there are N target video frames among the M video frames, a target video frame being one in which the person's facial deflection angle during acquisition is greater than or equal to a set angle threshold, with M ≥ N ≥ 2 and N an integer. The specific value of N may be determined from the time a person normally takes to move the neck and the capture frame rate of the terminal device 11; preferably, the capture time spanned by N frames is longer than the normal neck-movement time. If the judgment is yes, the person is determined to be cheating; if no, it cannot be determined from the deflection angles over the M consecutive frames whether the person is cheating.
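A sliding-window sketch of this N-of-M rule follows; the window size, target count, and angle threshold are illustrative assumptions.

```python
from collections import deque

ANGLE_THRESHOLD = 15.0   # assumed: degrees of deflection that mark a "target" frame
M, N = 30, 20            # assumed: judge over 30 consecutive frames, 20 must be targets

window = deque(maxlen=M)  # sliding window of the most recent per-frame deflection angles

def update_and_judge(deflection: float) -> bool:
    """Feed one frame's deflection angle; returns True when cheating is flagged.

    Flagging requires at least N target frames among the last M, so brief,
    normal head or body movements do not trigger a misjudgment.
    """
    window.append(deflection)
    if len(window) < M:
        return False                              # not enough frames to decide yet
    return sum(a >= ANGLE_THRESHOLD for a in window) >= N
```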
Considering practical use, when taking the assessment on the terminal device 11 the person must answer on the assessment page, and opening another application to look up material causes the terminal device 11 to leave the assessment page. On this basis, to further judge whether the person is cheating, the terminal device 11 can also monitor in real time whether the person leaves the assessment page during the assessment. If leaving is detected, the time away from the page is timed; if the time away is greater than or equal to a set duration, the person is determined to be cheating. The set duration can be chosen from how long it takes a user who left the page by misoperation to return to it.
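On the terminal side, this timer could be wired to the application's focus events as in the sketch below; the five-second limit and the event names are assumptions.

```python
import time
from typing import Optional

LEAVE_LIMIT_S = 5.0  # assumed set duration; tune to how long a misoperation takes to undo

class PageLeaveMonitor:
    """Times absences from the assessment page; event hooks are assumptions."""

    def __init__(self) -> None:
        self._left_at: Optional[float] = None
        self.cheating_flagged = False

    def on_leave_page(self) -> None:          # wire to the app's focus-lost event
        self._left_at = time.monotonic()

    def on_return_page(self) -> None:         # wire to the app's focus-gained event
        if self._left_at is not None:
            if time.monotonic() - self._left_at >= LEAVE_LIMIT_S:
                self.cheating_flagged = True
            self._left_at = None
```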
The above embodiment describes how to automatically identify the test-taking behavior of a person taking an online written test. In some assessments, the person must also perform practical business operations in addition to answering online. For example, a certification assessment for maternity matrons (yuesao) may examine not only theoretical knowledge online but also practical operations such as actual childcare and nursing, assessed online. For this, in the embodiments of the present application, the person being assessed may also hold a video call with an assessor (also called an examiner), who evaluates the person's business operations through the real-time video. The terminal device 11 can respond to the assessment-start event by invoking its camera and using it for video communication with the assessor, so that the person performs the business operations during the video call and the assessor evaluates them during the call.
Furthermore, the assessor can grade the person by combining the result of the online written test with the evaluation of the person's business operations.
Of course, the monitoring of the person's test-taking behavior may also be performed by the terminal device 11; for the specific implementation, refer to the description above of the server device 12 monitoring test-taking behavior, which is not repeated here.
In addition to the system embodiment, an embodiment of the present application further provides an information processing method, applicable to either the terminal device or the server device. The method is described below by way of example.
Fig. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
201. During the assessment, acquire a video stream containing facial images of the person to be assessed.
202. Determine the pose of the person's face during the assessment from the facial images in the video stream.
203. Determine the person's test-taking behavior from the pose of the person's face during the assessment.
The information processing method of this embodiment is applicable to the terminal device of the person to be assessed or to the server device providing the assessment service. For descriptions of the terminal device and server device, refer to the system embodiment above, which is not repeated here.
In this embodiment, when the assessment starts, the person to be assessed can log in to the assessment application software on the terminal device and take the assessment using its online-assessment function module. Accordingly, the terminal device may request assessment information, mainly the assessment questions, from the server device in response to an assessment-start event. Optionally, the software provides a start-assessment control, which the person triggers to obtain the questions; the assessment-start event is then generated by that trigger operation.
Correspondingly, the server device can respond to the terminal device's request by sending the assessment information. Or, in some embodiments, the assessment organizer sets an assessment start time and the information is issued automatically when that time is reached.
The terminal device can receive and display the assessment information, and the person answers online. The terminal device stores the response information in response to the answering operations and, when the person submits (for example by clicking a submit-answer-sheet control), provides the response information to the server device. The server device receives the response information and calculates the assessment result: it may match the responses against the standard answers and use the score as the result, derive an assessment level from the score, or determine pass/fail from the score and use that as the result.
However, during the assessment the person is unsupervised and may engage in improper test-taking behaviors such as cheating or surrogate test-taking, which affect the fairness and validity of the assessment. To improve fairness and validity and to deter cheating, in this embodiment the terminal device can be connected to a camera; for the forms of the terminal device and camera, refer to the system embodiment above. The terminal device can respond to the assessment-start event by turning on its camera and entering the assessment stage, which mainly refers to the online written-test answering stage.
During the assessment, the terminal device can collect a video stream of facial images of the person to be assessed via its camera. Accordingly, if the information processing method of fig. 2 is executed by the terminal device, an optional implementation of step 201 is: during the assessment, collect the video stream of the person's facial images with the camera of the terminal device. If the method is executed by the server device, another optional implementation of step 201 is: receive, from the terminal device operated by the person, the video stream containing the person's facial images, the stream having been collected by the terminal device's camera during the assessment.
In practice, while answering online the person generally faces the camera of the terminal device, so the captured facial pose is relatively fixed. Based on this, in step 202 the pose of the person's face during the assessment can be determined from the facial images in the video stream, and in step 203 the person's test-taking behavior, which may include whether the person cheated during the assessment, can be determined from that pose.
In this embodiment, a video stream containing facial images of the person to be assessed is acquired during the assessment; the pose of the person's face during the assessment is determined from the facial images in the video stream; and the person's test-taking behavior is then determined from that pose, realizing automatic monitoring and identification of test-taking behavior and helping reduce invigilation costs.
The embodiments of the present application do not limit how the pose of the face of the person to be assessed during the assessment is determined. In some embodiments, the person's facial contour data may be determined from the facial images in the video stream, and the facial pose during the assessment determined from that contour data.
Alternatively, facial feature points, i.e., the positions of the person's facial features (ears, eyes, nose, mouth, etc.), may be located in the facial images of the video frames, and the person's facial contour coordinates in each frame calculated from the feature-point coordinates in that frame. For example, for a first video frame (any frame in the stream), the facial feature points in its facial image are located and the facial contour coordinates computed from them.
Further, the deflection angle of the person's face during the acquisition of the first video frame, relative to the person's frontal face, can be calculated from the facial contour coordinates in that frame and used as the facial pose during that frame's acquisition. For the source of the person's frontal face image, refer to the system embodiment above.
Further, the coordinates of the person's chin and facial edges in the first video frame can be obtained from the facial contour coordinates, the facial midline determined from the chin, and the angles between the facial edges and the midline in the frame calculated. The deflection angle relative to the frontal face is then calculated from the edge-midline angles in the frontal face image and those in the first video frame, and serves as the facial pose during the acquisition of the first video frame.
Further, since the person may briefly shake, shift, or turn the head during the assessment, a video frame whose facial image is not frontal does not by itself indicate cheating; judging it so directly would cause misjudgments of test-taking behavior. Therefore, when determining the person's test-taking behavior, the facial deflection angles over M consecutive video frames can be obtained, where M ≥ 2 (preferably M ≥ 3) and M is an integer, its value determined from the normal neck-movement time and the capture frame rate of the terminal device. The deflection angle for each frame is the deflection of the person's face, relative to their frontal face, during that frame's acquisition.
It can then be judged whether there are N target video frames among the M frames, a target frame being one whose deflection angle during acquisition is greater than or equal to a set angle threshold, with M ≥ N ≥ 2 and N an integer; preferably the capture time spanned by N frames is longer than the normal neck-movement time. If yes, the person is determined to be cheating; if no, cheating cannot be determined from the deflection angles over these M frames.
Considering practical use, the person must answer on the assessment page, and opening another application to look up material causes the terminal device to leave the assessment page. To further judge cheating, the terminal device can monitor in real time whether the page is left during the assessment, time the absence, and determine that the person is cheating if the time away meets or exceeds a set duration, chosen from how long a misoperation-induced departure normally takes to undo.
The above describes automatic identification of test-taking behavior for an online written test. In some assessments the person must also perform practical business operations; for example, a certification assessment for maternity matrons (yuesao) may examine practical operations such as childcare and nursing online in addition to theoretical knowledge. For this, the person may hold a video call with an assessor (also called an examiner), who evaluates the business operations through the real-time video. The terminal device can respond to the assessment-start event by invoking its camera and using it for video communication with the assessor, so that the person performs the operations during the call and the assessor evaluates them.
Furthermore, the assessor can grade the person by combining the result of the online written test with the evaluation of the person's business operations.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 201 and 202 may be device a; for another example, the execution subject of step 201 may be device a, and the execution subject of step 202 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings include operations in a specific order, but these operations may be executed out of the order presented or in parallel. Sequence numbers such as 201 and 202 merely distinguish operations and do not by themselves imply any execution order; the flows may also include more or fewer operations, executed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps of the information processing method.
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 3, the computer device includes a memory 30a and a processor 30b. The memory 30a is used for storing a computer program.
The processor 30b is coupled to the memory 30a and executes the computer program to: acquire, during an assessment, a video stream containing facial images of a person to be assessed; determine the pose of the person's face during the assessment from the facial images in the video stream; and determine the person's test-taking behavior from that pose.
In some embodiments, when determining the pose of the face of the person to be assessed during the assessment, the processor 30b is configured to: determine the person's facial contour data from the facial images in the video stream; and determine the facial pose during the assessment from that contour data.
Further, when determining the facial contour data, the processor 30b is specifically configured to: for a first video frame in the video stream, locate the facial feature points in its facial image; and calculate the person's facial contour coordinates in the first video frame from the coordinates of those feature points.
Optionally, when determining the facial pose during the assessment, the processor 30b is specifically configured to: calculate, from the facial contour coordinates in the first video frame, the deflection angle of the person's face during that frame's acquisition relative to the person's frontal face, and use it as the facial pose during the acquisition of the first video frame.
Further, when calculating that deflection angle, the processor 30b is specifically configured to: obtain the coordinates of the person's chin and facial edges in the first video frame from the facial contour coordinates; determine the coordinates of the facial midline in the frame from the chin coordinates; calculate the angles between the facial edges and the midline in the frame from their coordinates; and calculate the deflection angle relative to the frontal face from the edge-midline angles in the first video frame and those in the person's frontal face image.
Accordingly, when determining the person's test-taking behavior, the processor 30b is specifically configured to: obtain the facial deflection angles over M consecutive video frames; judge whether there are N target video frames among them, a target frame being one whose deflection angle during acquisition is greater than or equal to a set angle threshold; and, if so, determine that the person is cheating; where M ≥ N ≥ 2 and M, N are integers.
In some embodiments, the computer device is the terminal device of the person to be examined, and when acquiring the video stream containing the facial image of the person to be examined, the processor 30b is specifically configured to: collect, during the examination, the video stream containing the facial image of the person to be examined with a camera (not shown in fig. 3) on the terminal device operated by the person to be examined.
In other embodiments, the computer device is a server device, and when acquiring the video stream containing the facial image of the person to be examined, the processor 30b is specifically configured to: receive, through the communication component 30c, the video stream containing the facial image of the person to be examined provided by the terminal device operated by the person to be examined; the video stream is collected by a camera of the terminal device during the examination.
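On the terminal-device side, the capture loop could look like the OpenCV sketch below; the camera index and the per-frame callback (local pose estimation or upload to the server device) are assumptions.

```python
import cv2

def capture_frames(on_frame):
    """Pull frames from the default camera for the duration of the
    examination and hand each one to a caller-supplied callback."""
    cap = cv2.VideoCapture(0)        # default camera on the examinee's device
    try:
        while True:
            ok, frame = cap.read()
            if not ok:               # camera closed or read error
                break
            on_frame(frame)          # e.g. pose estimation or streaming upload
    finally:
        cap.release()
```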
Where the computer device is the terminal device of the person to be examined, the processor 30b is further configured to: monitor, during the examination, whether the person to be examined leaves the examination page; if it is detected during the examination that the person to be examined has left the examination page, time how long the person stays away from it; and, if the time away from the examination page is greater than or equal to a set duration, determine that the person to be examined is cheating.
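One possible shape for the page-leave timer is sketched below; how "leaving the examination page" is detected (window focus loss, tab switch, app backgrounding) is platform-specific and left to the caller, and the time limit is an illustrative value.

```python
import time

LEAVE_LIMIT_SECONDS = 15.0   # the "set duration"; value is illustrative

class LeaveMonitor:
    """Times a single absence from the examination page and flags
    cheating when it lasts at least the configured limit."""

    def __init__(self):
        self.left_at = None

    def on_leave(self):
        """Call when the person to be examined leaves the examination page."""
        self.left_at = time.monotonic()

    def on_return(self):
        """Call on return; True means the absence counts as cheating."""
        cheated = (self.left_at is not None and
                   time.monotonic() - self.left_at >= LEAVE_LIMIT_SECONDS)
        self.left_at = None
        return cheated
```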
Optionally, the processor 30b is further configured to: receive, through the communication component 30c, the assessment information sent by the server device, and display it on the display screen 30d; store answer information in response to an answering operation on the assessment information; and, in response to a submission operation on the assessment information, provide the answer information to the server device so that the server device can calculate the assessment result of the person to be examined according to the answer information.
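The receive/answer/submit exchange might be sketched as below. The endpoints, payload shapes, and the `requests` client are hypothetical; the patent defines the flow but not a protocol.

```python
import requests

SERVER = "https://example.com/exam"   # placeholder base URL, not from the patent

def run_assessment(session_id):
    """Terminal-side sketch: fetch assessment information, collect answers,
    then submit them so the server device can compute the result."""
    questions = requests.get(f"{SERVER}/{session_id}/questions").json()
    answers = {}                                  # stored answer information
    for q in questions:
        answers[q["id"]] = input(q["text"] + " > ")
    # Submission operation: provide the answer information to the server.
    result = requests.post(f"{SERVER}/{session_id}/submit", json=answers)
    return result.json()                          # assessment result
```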
Optionally, the processor 30b is further configured to: in response to an examination start event, invoke the camera of the terminal device corresponding to the person to be examined; and use the camera for video communication with the examiner, so that the examiner can assess the business operations of the person to be examined during the video communication.
In some optional embodiments, as shown in fig. 3, the computer device may further include: a power supply component 30e, an audio component 30f, and the like. Fig. 3 only schematically depicts some components; this does not mean that the computer device must include all of the components shown in fig. 3, nor that it can include only those components.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store various other data to support operations on the device on which it is located. The processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above-described method logic. Optionally, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Microcontroller Unit (MCU); programmable devices such as Field-Programmable Gate Arrays (FPGAs), Programmable Array Logic (PAL) devices, Generic Array Logic (GAL) devices, or Complex Programmable Logic Devices (CPLDs) may also be used; so may Advanced RISC Machines (ARM) processors, Systems on Chip (SoC), and the like, but the processor is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In the embodiment of the present application, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals. For example, for devices with voice-interaction functionality, voice interaction with a user can be implemented through the audio component.
The computer device provided by this embodiment can acquire, during the examination, a video stream containing a facial image of the person to be examined; determine, according to the facial image in the video stream, the posture of the face of the person to be examined during the examination; and then determine the test-taking behavior of the person to be examined according to that posture. This enables automatic monitoring and identification of test-taking behavior and helps reduce invigilation costs.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (14)
1. An information processing method characterized by comprising:
during the examination, acquiring a video stream containing a facial image of a person to be examined;
determining the posture of the face of the person to be examined during the examination according to the facial image in the video stream;
and determining the test-taking behavior of the person to be examined according to the posture of the face of the person to be examined during the examination.
2. The method of claim 1, wherein determining the posture of the face of the person to be examined during the examination from the facial image in the video stream comprises:
determining the face contour data of the person to be examined according to the facial image of the video stream;
and determining the posture of the face of the person to be examined during the examination according to the face contour data of the person to be examined.
3. The method according to claim 2, wherein determining the face contour data of the person to be examined from the facial image of the video stream comprises:
for a first video frame in the video stream, locating facial feature points in the facial image in the first video frame;
and calculating the face contour coordinates of the person to be examined in the first video frame according to the coordinates of the facial feature points in the first video frame.
4. The method according to claim 3, wherein determining the posture of the face of the person to be examined during the examination according to the face contour data of the person to be examined comprises:
calculating, according to the face contour coordinates of the person to be examined in the first video frame, the deflection angle of the face of the person to be examined during the acquisition of the first video frame compared with the front face of the person to be examined, and taking the deflection angle as the posture of the face of the person to be examined during the acquisition of the first video frame.
5. The method according to claim 4, wherein calculating, according to the face contour coordinates of the person to be examined in the first video frame, the deflection angle of the face of the person to be examined during the acquisition of the first video frame compared with the front face of the person to be examined comprises:
acquiring the coordinates of the chin and the face edge of the person to be examined in the first video frame from the face contour coordinates of the person to be examined in the first video frame;
determining the coordinates of the face center line of the person to be examined in the first video frame according to the coordinates of the chin of the person to be examined in the first video frame;
calculating the included angle between the face edge and the face center line in the first video frame according to the coordinates of the face edge and the face center line in the first video frame;
and calculating, according to the included angle between the face edge and the face center line in the first video frame and the included angle between the face edge and the face center line in the front-face image of the person to be examined, the deflection angle of the face of the person to be examined during the acquisition of the first video frame compared with the front face of the person to be examined.
6. The method according to claim 4, wherein determining the test-taking behavior of the person to be examined according to the posture of the face of the person to be examined during the examination comprises:
acquiring the deflection angles of the face of the person to be examined during the acquisition of M consecutive video frames;
judging whether N target video frames exist among the M video frames, a target video frame being a video frame during whose acquisition the deflection angle of the face of the person to be examined is greater than or equal to a set angle threshold;
and if so, determining that the person to be examined is cheating; wherein M ≥ N ≥ 2, and M and N are integers.
7. The method according to any one of claims 1 to 6, wherein acquiring, during the examination, the video stream containing the facial image of the person to be examined comprises:
collecting, during the examination, the video stream containing the facial image of the person to be examined with a camera on the terminal device operated by the person to be examined;
or,
receiving the video stream containing the facial image of the person to be examined from the terminal device operated by the person to be examined, the video stream being collected by a camera of the terminal device during the examination.
8. The method of any one of claims 1-6, further comprising:
monitoring, during the examination, whether the person to be examined leaves the examination page;
if it is detected during the examination that the person to be examined has left the examination page, timing how long the person to be examined stays away from the examination page;
and if the time away from the examination page is greater than or equal to a set duration, determining that the person to be examined is cheating.
9. The method of any one of claims 1-6, further comprising:
receiving assessment information sent by a server device, and displaying the assessment information;
storing answer information in response to an answering operation on the assessment information;
and providing the answer information to the server device in response to a submission operation on the assessment information, so that the server device can calculate the assessment result of the person to be examined according to the answer information.
10. The method of claim 9, further comprising:
in response to an examination start event, invoking the camera of the terminal device corresponding to the person to be examined;
and carrying out video communication with the examiner by using the camera, so that the examiner can assess the business operations of the person to be examined during the video communication.
11. An information processing system, comprising: a terminal device and a server device;
the terminal device is configured to acquire, during the examination, a video stream containing a facial image of a person to be examined, and to provide the video stream to the server device;
the server device is configured to determine, according to the facial image in the video stream, the posture of the face of the person to be examined during the examination, and to determine the test-taking behavior of the person to be examined according to the posture of the face of the person to be examined during the examination.
12. The system of claim 11, wherein the server device is further configured to send assessment information to the terminal device;
the terminal device is configured to display the assessment information, store answer information in response to an answering operation on the assessment information, and provide the answer information to the server device in response to a submission operation on the assessment information;
and the server device is configured to calculate the assessment result of the person to be examined according to the answer information.
13. A computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the method of any of claims 1-10.
14. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-10.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010832158.0A | 2020-08-18 | 2020-08-18 | Information processing method, device, system and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN112215056A | 2021-01-12 |

Family ID: 74058854
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010832158.0A (status: Pending) | Information processing method, device, system and storage medium | 2020-08-18 | 2020-08-18 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN112215056A (en) |
Citations (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20090003535A * | 2007-06-15 | 2009-01-12 | 에스케이 텔레콤주식회사 | Method for preventing cheating act by detecting eye line angle of examinee, system, server and computer-readable recording medium with program therefor |
| JP2015159405A * | 2014-02-24 | 2015-09-03 | キヤノン株式会社 | Image processing apparatus, imaging device, control method, program, and storage medium |
| CN105791299A * | 2016-03-11 | 2016-07-20 | 南通职业大学 | Unattended monitoring type intelligent on-line examination system |
| US20170039869A1 * | 2015-08-07 | 2017-02-09 | Gleim Conferencing, LLC | System and method for validating honest test taking |
| CN106713856A * | 2016-12-15 | 2017-05-24 | 重庆凯泽科技股份有限公司 | Intelligent examination monitoring system and method |
| WO2017152425A1 * | 2016-03-11 | 2017-09-14 | 深圳市大疆创新科技有限公司 | Method, system and device for preventing cheating in network exam, and storage medium |
| CN207166656U * | 2017-07-28 | 2018-03-30 | 湖南强视信息科技有限公司 | Monitoring device for unmanned invigilation |
| KR20180050968A * | 2016-11-07 | 2018-05-16 | 주식회사 조인트리 | On-line test management method |
| KR20200049262A * | 2018-10-31 | 2020-05-08 | (주)포세듀 | System for providing online blinded employment examination and a method thereof |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |