CN108876677B - Teaching effect evaluation method based on big data and artificial intelligence and robot system - Google Patents
- Publication number
- CN108876677B CN108876677B CN201810632878.5A CN201810632878A CN108876677B CN 108876677 B CN108876677 B CN 108876677B CN 201810632878 A CN201810632878 A CN 201810632878A CN 108876677 B CN108876677 B CN 108876677B
- Authority
- CN
- China
- Prior art keywords
- evaluation
- teaching
- teaching effect
- teacher
- actions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
Abstract
A teaching effect evaluation method and a robot system based on big data and artificial intelligence comprise the following steps: retrieving the teaching effect portrait of the teacher to be queried from a teaching effect portrait knowledge base, and obtaining from that portrait the values of all evaluation unit labels belonging to the evaluation unit to be queried. Because the method and system evaluate a teacher's teaching effect from a portrait built on big data and artificial intelligence, the evaluation reflects the teaching effect more truly and objectively, and the objectivity and accuracy of both the teaching portrait and the teaching evaluation can be greatly improved.
Description
Technical Field
The invention relates to the technical field of information, in particular to a teaching effect evaluation method and a robot system based on big data and artificial intelligence.
Background
Existing teaching effect evaluation consists of students scoring their teachers at the end of a term.
In the process of implementing the present invention, the inventor found at least the following problems in the prior art: a student's evaluation of a teacher depends not only on how the teacher teaches but also on the student's preferences, and students give higher scores to teachers they favor. For example, some students prefer strict teachers while others prefer lenient ones, yet such preferences have no direct relation to the teaching effect. Moreover, students with poor grades, or students who have been criticized by a teacher, sometimes retaliate by deliberately giving that teacher a bad evaluation. The existing teaching effect evaluation therefore cannot evaluate the teaching effect objectively; it is swayed by students' subjectivity, so its accuracy is low.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
Based on the above, it is necessary to provide a teaching effect evaluation method and a robot system based on big data and artificial intelligence, to overcome the strong subjectivity and low accuracy of teaching effect evaluation in the prior art.
In a first aspect, a teaching effect evaluation method is provided, and the method includes:
a portrait obtaining step of retrieving the teaching effect portrait of the teacher to be queried from a teaching effect portrait knowledge base;
an evaluation obtaining step of obtaining, from the teaching effect portrait of the teacher to be queried, the values of all evaluation unit labels belonging to the evaluation unit to be queried.
Preferably, before the portrait obtaining step, the method further comprises:
a query receiving step of obtaining the teacher to be queried and the evaluation unit to be queried.
Preferably, after the evaluation obtaining step, the method further comprises:
an effect calculation step of obtaining the weights of all evaluation units belonging to the evaluation unit to be queried, and taking the weighted average of the values of those units' labels, computed with those weights, as the teaching effect of the evaluation unit of the teacher to be queried.
Preferably, before the portrait obtaining step, the method further comprises:
a data acquisition step of acquiring teaching process big data, wherein the teaching process big data comprise a teaching video corresponding to each evaluation unit of each teacher; preferably, the video carries time and time period information;
a preset action step of acquiring preset actions of listening attentively in class as first preset actions;
an effect portrait step of taking each evaluation unit of each teacher as one evaluation unit label of that teacher's teaching effect portrait, taking the proportion of the total duration of the first preset actions of all students recognized in the teaching video corresponding to that evaluation unit to the total duration of that evaluation unit as the value of that label, and storing the value in a teaching effect portrait knowledge base.
Preferably, the evaluation unit comprises a course over a preset time period, and the first preset action comprises a student looking straight ahead with head raised, or/and taking notes by hand.
In a second aspect, a teaching effect evaluation system is provided, the system including:
a portrait acquisition module, used for retrieving the teaching effect portrait of the teacher to be queried from a teaching effect portrait knowledge base;
an evaluation acquisition module, used for obtaining, from the teaching effect portrait of the teacher to be queried, the values of all evaluation unit labels belonging to the evaluation unit to be queried.
Preferably, the system further comprises:
a query receiving module, used for acquiring the teacher to be queried and the evaluation unit to be queried.
Preferably, the system further comprises:
an effect calculation module, used for acquiring the weights of all evaluation units belonging to the evaluation unit to be queried, and taking the weighted average of the values of those units' labels, computed with those weights, as the teaching effect of the evaluation unit of the teacher to be queried.
Preferably, the system further comprises:
a data acquisition module, used for acquiring teaching process big data, wherein the teaching process big data comprise a teaching video corresponding to each evaluation unit of each teacher;
a preset action module, used for acquiring preset actions of listening attentively in class as first preset actions;
an effect portrait module, used for taking each evaluation unit of each teacher as one evaluation unit label of that teacher's teaching effect portrait, taking the proportion of the total duration of the first preset actions of all students recognized in the teaching video corresponding to that evaluation unit to the total duration of that evaluation unit as the value of that label, and storing the value in a teaching effect portrait knowledge base.
Preferably, the evaluation unit comprises a course over a preset time period, and the first preset action comprises a student looking straight ahead with head raised, or/and taking notes by hand.
In a third aspect, a teaching effect evaluation robot system is provided, in which the teaching effect evaluation system according to the second aspect is arranged.
The embodiment of the invention has the following advantages and beneficial effects:
According to the teaching effect evaluation method and robot system based on big data and artificial intelligence, each evaluation unit of each teacher is taken as one evaluation unit label of that teacher's teaching effect portrait, and the proportion of the total duration of the first preset actions of all students recognized in the teaching video corresponding to that evaluation unit to the total duration of that evaluation unit is taken as the value of that label. The teacher's teaching effect is thus evaluated more truly and objectively, and the objectivity and accuracy of the teaching portrait and the teaching evaluation can be greatly improved.
Drawings
FIG. 1 is a flow chart of a teaching effect evaluation method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a teaching effect evaluation method provided by a preferred embodiment of the present invention;
FIG. 3 is a schematic block diagram of a teaching effect evaluation system provided by an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a teaching effect evaluation system according to a preferred embodiment of the present invention.
Detailed Description
The following describes in detail the technical solutions in the embodiments of the present invention in connection with the implementation modes of the invention.
The embodiment of the invention provides a teaching effect evaluation method and a robot system based on big data and artificial intelligence, wherein the big data technology comprises a technology for acquiring and processing big data in a teaching process, and the artificial intelligence technology comprises an identification technology and a teaching effect portrait technology.
(I) Teaching effect evaluation method based on big data and artificial intelligence
As shown in fig. 1, an embodiment provides a teaching effect evaluation method, which includes:
A portrait obtaining step S500: the teaching effect portrait of the teacher to be queried is retrieved from the teaching effect portrait knowledge base. Preferably, the teaching effect portrait is a user portrait; user portraiture is a core technology of artificial intelligence.
An evaluation obtaining step S600: the values of all evaluation unit labels belonging to the evaluation unit to be queried are obtained from the teaching effect portrait of the teacher to be queried.
In the teaching effect evaluation method above, the label values of the evaluation units of the teacher to be queried are retrieved from the teaching effect portrait to obtain the teaching effect of those evaluation units. The teaching evaluation is thus based on the teaching effect portrait, and the portrait itself is built from teaching process big data, so an evaluation based on this embodiment can objectively reflect the teaching effect of the teaching process. Traditional teaching evaluation, by contrast, is only scored by students at the end of the term, so it is too subjective on the one hand and ignores the teaching process on the other.
1. Image acquisition step
In a preferred embodiment, the step S500 of acquiring an image includes:
S501, the teaching effect portrait of the teacher to be queried, the teacher being identified by name and number (for example, Zhang San, 2018002), is retrieved from the teaching effect portrait knowledge base (for example, Zhang San's teaching effect portrait).
The portrait obtaining step S500 obtains the portrait of the teacher to be queried from the teaching effect portrait knowledge base, so that the teaching effect can be evaluated on the basis of an objective portrait.
2. Acquisition and evaluation step
In a preferred embodiment, the step S600 of obtaining the evaluation includes:
S601, each evaluation unit label is obtained from the teaching effect portrait of the teacher to be queried (for example, from Zhang San's teaching effect portrait: "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12"; "Zhang San, 2018002, English, 2018 school year"; etc.), and then all the evaluation unit labels belonging to the evaluation unit to be queried are selected from them (in example 1, the evaluation unit to be queried is "Higher Mathematics, 2018-5-23 to 2018-8-12", and the matching label is "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12"; in example 2, the evaluation unit to be queried is "all courses, 2018", and the matching labels are "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" and "Zhang San, 2018002, English, 2018 school year").
S602, the values of all evaluation unit labels belonging to the evaluation unit to be queried in the teaching effect portrait of the teacher to be queried are retrieved from the teaching effect portrait knowledge base (in example 1, the value of the label "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" is 40%; in example 2, the value of that same label is 40% and the value of the label "Zhang San, 2018002, English, 2018 school year" is 80%).
The evaluation obtaining step S600 obtains the label values of the evaluation units of the teacher to be queried from the teaching effect portrait, so that the teaching effect can be evaluated objectively on the basis of a portrait built from big data and artificial intelligence.
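The label selection and value lookup of S601 and S602 can be sketched with a simple dictionary-backed knowledge base; the data layout, key format, and function name here are illustrative assumptions, not the patent's actual storage scheme:

```python
# Illustrative stand-in for the teaching effect portrait knowledge base:
# teacher identity -> {(course, period) label: label value}
portrait_kb = {
    ("Zhang San", "2018002"): {
        ("Higher Mathematics", "2018-5-23 to 2018-8-12"): 0.40,
        ("English", "2018 school year"): 0.80,
    },
}

def get_label_values(teacher, course=None, period=None):
    """S601-S602: select all evaluation unit labels of the queried
    teacher that belong to the queried evaluation unit, and return
    their values. None means 'all courses' / 'all periods'."""
    portrait = portrait_kb[teacher]
    return {
        label: value
        for label, value in portrait.items()
        if (course is None or label[0] == course)
        and (period is None or period in label[1])
    }

# Example 1: a single course and time span -> one label, value 40%
example1 = get_label_values(("Zhang San", "2018002"),
                            course="Higher Mathematics",
                            period="2018-5-23 to 2018-8-12")
# Example 2: all courses in 2018 -> both labels, values 40% and 80%
example2 = get_label_values(("Zhang San", "2018002"), period="2018")
```

The substring test on the period is only a toy matching rule; a real implementation would need proper date-range containment.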
3. After the acquisition and evaluation step
In a preferred embodiment, the step S600 of obtaining the evaluation further includes:
An effect calculation step S700: the weights of all evaluation units belonging to the evaluation unit to be queried are obtained, and the weighted average of the values of those units' labels, computed with those weights, is taken as the teaching effect of the evaluation unit of the teacher to be queried. The teaching effect of the evaluation unit of the teacher to be queried is then output to the user.
In the effect calculation step S700, the higher the weighted average, the better the teaching effect of the queried evaluation unit; the lower the weighted average, the poorer it is. By comparing the weighted averages, the relative merits of the evaluation units of different teachers to be queried can be judged. For example, if the weighted average of the first teacher's evaluation unit A is 70%, that of the first teacher's evaluation unit B is 30%, that of the second teacher's evaluation unit B is 50%, and that of the second teacher's evaluation unit C is 10%, then the teaching effects rank, from best to worst: the first teacher's unit A > the second teacher's unit B > the first teacher's unit B > the second teacher's unit C.
The step after the evaluation obtaining step S600 computes a weighted average over the label values of all evaluation units belonging to the evaluation unit to be queried, so that not only the teaching effect of an evaluation unit already present in the portrait, but also the teaching effect of an evaluation unit formed by combining several units in the portrait can be evaluated, which widens the range of application of the teaching effect evaluation.
(1) In a further preferred embodiment, the effect calculation step S700 includes:
S701, the credit scores of the courses of all evaluation units belonging to the evaluation unit to be queried are obtained as weights (in example 1, the course "Higher Mathematics, 2018-5-23 to 2018-8-12" is worth 1 credit, so the weight of the corresponding evaluation unit "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" is set to 1; in example 2, that same unit again has weight 1, while the course "English, 2018 school year" is worth 3 credits, so the weight of the corresponding evaluation unit "Zhang San, 2018002, English, 2018 school year" is set to 3).
S702, the values of the labels of all those evaluation units are weighted-averaged with those weights (in example 1, the label value is 40% with weight 1, so the weighted average is 40% × 1 / 1 = 40%; in example 2, the label values are 40% and 80% with weights 1 and 3, so the weighted average is (40% × 1 + 80% × 3) / 4 = 70%).
S703, the weighted average (40% in example 1; 70% in example 2) is taken as the teaching effect of the evaluation unit of the teacher to be queried.
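The effect calculation of S701 to S703 can be sketched as a small weighted-average function; treating course credits as the weights follows the examples above, and the function name is an illustrative assumption:

```python
def teaching_effect(label_values, weights):
    """S702-S703: weighted average of evaluation unit label values,
    weighted by the course credits obtained in S701. The two lists
    are parallel over the matched evaluation units."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(label_values, weights)) / total_weight

# Example 1: one unit, label value 40%, credit weight 1 -> 40%
effect1 = teaching_effect([0.40], [1])
# Example 2: label values 40% and 80%, credit weights 1 and 3
#            -> (40% * 1 + 80% * 3) / 4 = 70%
effect2 = teaching_effect([0.40, 0.80], [1, 3])
```

The resulting effects can then be compared across teachers as in the ranking example, e.g. `sorted(effects.items(), key=lambda kv: kv[1], reverse=True)`.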
4. Before the step of obtaining the image
As shown in fig. 2, in a preferred embodiment, the step S500 of acquiring an image further includes:
A data acquisition step S100: teaching process big data are acquired, comprising a teaching video corresponding to each evaluation unit of each teacher. Preferably, the teaching video records the course of classroom teaching, such as students listening, doing experiments, practicing, taking notes, answering questions, reading aloud, and so on.
A preset action step S200: preset actions of listening attentively in class are acquired as first preset actions.
An effect portrait step S300: each evaluation unit of each teacher is taken as one evaluation unit label of that teacher's teaching effect portrait; the proportion of the total duration of the first preset actions of all students recognized in the teaching video corresponding to that evaluation unit to the total duration of that evaluation unit is taken as the value of that label; and the value is stored in the teaching effect portrait knowledge base.
A query receiving step S400: the teacher to be queried and the evaluation unit to be queried are acquired.
The steps before the portrait obtaining step S500 build the teaching effect portrait by recognition from video recorded during the teaching process, rather than from students' subjective scoring, comments, or examination results, so the teaching effect portrait can objectively reflect the actual effect of the teaching process.
(1) In a further preferred embodiment, the step S100 of acquiring data comprises:
S101, each teacher's name and number (for example, Zhang San, 2018002; Li Si, 2018003; Wang Wu, 2018005; etc.) are acquired and stored in a big data store (for example, HBase).
S102, the name and the start and stop time of each evaluation unit (for example, Higher Mathematics, 2018-5-23 to 2018-8-12; English, 2018 school year; Chemistry, first term of 2017; Chemistry, second term of 2017; Art, first three weeks of the first term of 2016; etc.) are acquired and stored in the big data store.
S103, each evaluation unit of each teacher (for example, Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12; Zhang San, 2018002, English, 2018 school year; Li Si, 2018003, Chemistry, first term of 2017; etc.) is acquired and stored in the big data store.
S104, the teaching videos of each evaluation unit of each teacher (for example, all of Zhang San's teaching videos of Higher Mathematics from 2018-5-23 to 2018-8-12; all of Zhang San's teaching videos of English in the 2018 school year; all of Li Si's teaching videos of Chemistry in the first term of 2017; etc.) are acquired and stored in a big data store (for example, HDFS).
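The records gathered in S101 to S104 can be sketched as one structure per evaluation unit; the field names and the HDFS path below are illustrative assumptions, since the patent only requires the teacher's identity, the unit's course name and start/stop time, and the matching teaching videos:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationUnit:
    teacher_name: str   # e.g. "Zhang San"
    teacher_no: str     # e.g. "2018002"
    course: str         # e.g. "Higher Mathematics"
    period: str         # start/stop time, school year, or term
    videos: list = field(default_factory=list)  # paths in the big data store

# One record of the kind S103/S104 would persist (path is hypothetical)
unit = EvaluationUnit("Zhang San", "2018002", "Higher Mathematics",
                      "2018-5-23 to 2018-8-12",
                      ["hdfs://teaching/zhang_san_math_2018_001.mp4"])
```

In a real deployment each such record would be a row keyed by teacher and unit in HBase, with the video files themselves on HDFS, as the store examples in S101 and S104 suggest.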
(2) In a further preferred embodiment, the preset action step S200 comprises:
S201, the user is prompted to input preset actions of listening attentively in class, including the name of each action and its features (for example, speaking: head moved forward and mouth moving; taking notes: head down and holding a pen; etc.).
S202, the user is prompted to input preset actions of not listening attentively in class, including the name of each action and its features (for example, sleeping: eyes closed for more than 1 minute; playing with a mobile phone: looking down at the phone for more than 1 minute; etc.).
S203, the user's input is accepted, the set of preset attentive actions and the complement of the set of preset inattentive actions are added to the first preset action set, and the first preset action set is stored in the teaching effect recognition knowledge base.
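The two user-entered sets and the complement rule of S203 can be sketched as follows; the action names, feature strings, and function names are assumptions for illustration:

```python
# Preset attentive actions (S201): name -> features
attentive_actions = {
    "speak": "head moved forward, mouth moving",
    "take notes": "head down, holding a pen",
}
# Preset inattentive actions (S202): name -> features
inattentive_actions = {
    "sleep": "eyes closed for more than 1 minute",
    "play with phone": "looking down at the phone for more than 1 minute",
}

def in_first_preset_set(action_name):
    """S203: the first preset action set is the attentive set plus,
    by complement, every action outside the inattentive set."""
    return (action_name in attentive_actions
            or action_name not in inattentive_actions)
```

The complement rule means an unlisted action such as "read aloud" still counts as attentive, which is the method of elimination described later in the embodiments.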
(3) In a further preferred embodiment, the effect portrait step S300 includes:
S301, each evaluation unit of each teacher is read from the big data store (for example, Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12; Zhang San, 2018002, English, 2018 school year; Li Si, 2018003, Chemistry, first term of 2017; etc.).
S302, a teaching effect portrait is created for each teacher (for example, Zhang San's teaching effect portrait; Li Si's teaching effect portrait; etc.).
S303, each evaluation unit of each teacher is taken as one evaluation unit label of that teacher's teaching effect portrait (for example, "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" as one evaluation unit label of Zhang San's teaching effect portrait; "Zhang San, 2018002, English, 2018 school year" as another label of Zhang San's teaching effect portrait; "Li Si, 2018003, Chemistry, first term of 2017" as one evaluation unit label of Li Si's teaching effect portrait; etc.).
S304, each student is identified from the teaching video corresponding to each evaluation unit of each teacher through face recognition, and each student is assigned a code.
S305, the first preset action set is acquired from the teaching effect recognition knowledge base, and the set of preset attentive actions and the set of preset inattentive actions are obtained from it.
S306, the actions of each student in the teaching video corresponding to each evaluation unit of each teacher are recognized, and each recognized action is matched against every action in the set of preset attentive actions (if the features of a preset attentive action include a time period, the matching must also take into account the corresponding actions in the video frames or photos adjacent to the recognized action), yielding at least one first matching degree (for example, with 2 actions in the attentive set, 2 first matching degrees are obtained). If some first matching degree is greater than or equal to a first preset matching degree, the recognized action is a first preset action. If every first matching degree is less than the first preset matching degree, the recognized action is further matched against every action in the set of preset inattentive actions (again combining adjacent video frames or photos when the features include a time period), yielding at least one second matching degree; if every second matching degree is less than a second preset matching degree, the recognized action is still a first preset action, and otherwise it is not. For example, in the video frames or snapshot set of the teaching videos of "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12", each student is identified from left to right and top to bottom, and the actions of each student in each frame or photo are matched against the preset attentive actions such as speaking and taking notes; if one matching degree, say the matching degree with speaking, is 0.7, which is greater than the first preset matching degree of, say, 0.6, the recognized action is judged to be an attentive action.
As another example, in the video frames or snapshots of the teaching videos of "Zhang San, 2018002, English, 2018 school year", each student is identified from left to right and top to bottom, and each student's actions are matched against the preset attentive actions such as speaking and taking notes. If all those matching degrees are below the first preset matching degree of, say, 0.6, the recognized action is then matched against the preset inattentive actions such as sleeping and playing with a mobile phone; if all those matching degrees are also below the second preset matching degree of, say, 0.8, the recognized action is a first preset action. As yet another example, in the video frames or snapshots of the teaching videos of "Li Si, 2018003, Chemistry, first term of 2017", the matching degrees with the attentive actions are all below 0.6, but one matching degree with an inattentive action, say the matching degree with playing with a mobile phone, is 0.82, above the second preset matching degree of 0.8, so the recognized action is not a first preset action.
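The two-stage matching in S306 can be sketched as a small decision function; the thresholds 0.6 and 0.8 are the example values above, while the matching degrees themselves are assumed to come from an underlying action recognition model not shown here:

```python
FIRST_THRESHOLD = 0.6   # against the preset attentive actions
SECOND_THRESHOLD = 0.8  # against the preset inattentive actions

def is_first_preset_action(attentive_degrees, inattentive_degrees):
    """attentive_degrees / inattentive_degrees: matching degrees of one
    recognized action against every action in each preset set."""
    if max(attentive_degrees) >= FIRST_THRESHOLD:
        return True        # directly matched an attentive action
    if max(inattentive_degrees) >= SECOND_THRESHOLD:
        return False       # matched an inattentive action
    return True            # matched neither set: attentive by elimination

# Higher Mathematics example: degree 0.7 with "speak" -> attentive
# English example: all degrees below both thresholds -> attentive
# Chemistry example: degree 0.82 with "play with phone" -> not attentive
```

Note the asymmetry of the two thresholds: an action is only discarded when it clearly matches an inattentive action, which mirrors the elimination rule of the preset action step.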
S307, for each student, the total duration (or the number of video frames or photos) of his or her first preset actions in the teaching video corresponding to each evaluation unit of each teacher is counted, and its proportion of the total duration (or total number of frames or photos) of that evaluation unit is computed. For example, in a 2000-minute teaching video, student 001 takes notes for 150 minutes, speaks for 50 minutes, and plays with a mobile phone for 1000 minutes; the remaining 800 minutes match neither preset set and count as first preset actions by elimination, so student 001's first preset actions total 1000 minutes, a proportion of 50%.
S308, the proportions of all students in the teaching video corresponding to each evaluation unit of each teacher are added and averaged (for example, the teaching videos of "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" show 5 students with proportions 50%, 20%, 30%, 60%, and 40%, whose average is (50% + 20% + 30% + 60% + 40%) / 5 = 40%), and the average is taken as the value of the corresponding evaluation unit label of that teacher's teaching effect portrait (for example, the value of the label "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" of Zhang San's teaching effect portrait is 40%).
S309, the value of that evaluation unit label of each teacher's teaching effect portrait is stored in the teaching effect portrait knowledge base (for example, the value of "Zhang San, 2018002, Higher Mathematics, 2018-5-23 to 2018-8-12" is 40%; the value of "Zhang San, 2018002, English, 2018 school year" is 80%; the value of "Li Si, 2018003, Chemistry, first term of 2017" is 30%; etc.).
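S307 to S309 reduce to a per-student ratio followed by an average over students; a sketch with the 5-student example above, where the dictionary knowledge base is a stand-in for the real store:

```python
def unit_label_value(per_student_attentive_minutes, unit_total_minutes):
    """S307: each student's ratio of first-preset-action time to the
    unit's total duration; S308: the average of those ratios over
    all students, used as the evaluation unit label value."""
    ratios = [m / unit_total_minutes for m in per_student_attentive_minutes]
    return sum(ratios) / len(ratios)

# Five students with ratios 50%, 20%, 30%, 60%, 40% of a 2000-minute unit
value = unit_label_value([1000, 400, 600, 1200, 800], 2000)

# S309: store the value under the evaluation unit label (toy store)
portrait_kb = {}
portrait_kb[("Zhang San", "2018002",
             "Higher Mathematics", "2018-5-23 to 2018-8-12")] = value
```

The same counting works per frame or per photo instead of per minute, as the step allows; only the unit of the numerator and denominator changes.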
(4) In a further preferred embodiment, the step of accepting a query S400 comprises:
S401, the teacher to be queried is acquired, including name and number (for example, Zhang San, 2018002).
S402, the evaluation unit to be queried is acquired, including the course name and start and stop time (example 1: Higher Mathematics, 2018-5-23 to 2018-8-12; example 2: all courses, 2018).
5. Evaluation unit and preset action
In a preferred embodiment, the evaluation unit comprises a course for a preset period of time; the first preset action comprises the student looking forward with head-up eyes or/and taking notes by hands.
(1) In a further preferred embodiment, the lesson for the predetermined period of time comprises: course name, start time and end time, or course name, subject school year, or course name, subject school period.
(2) In a further preferred embodiment, the courses for the predetermined period of time also include informal courses, such as lectures, salons, experiments, and the like.
(3) In a further preferred embodiment, the preset actions of carefully listening to class comprise all actions other than the preset actions of not carefully listening to class. Recognition uses an elimination method: if an identified action is not one of the preset actions of not carefully listening to class, it is judged to be a preset action of carefully listening to class.
(4) In a further preferred embodiment, the preset actions of carefully listening to class further comprise changes in expression, voice, mouth shape, pupil, and the like.
Because the evaluation unit covers a course and its time period, it can be configured flexibly as needed, can be used to evaluate various types of formal and informal courses, and can be extended to course-like occasions. The preset actions are set by the user and can be updated at any time, so the embodiment can adopt whatever actions best indicate teaching effect; moreover, combining the actions of carefully listening to class with the actions of not carefully listening to class improves the accuracy and precision of judging teaching effect from listening behavior.
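The elimination method described above can be sketched as follows. The contents of the "not carefully listening" set are illustrative assumptions; the patent leaves the concrete actions to the user's configuration:

```python
# Preset set of 'not carefully listening' actions (illustrative values only;
# in the embodiment these are entered by the user and can be updated at any time).
NOT_LISTENING_ACTIONS = {"sleeping", "looking away", "using phone"}

def is_carefully_listening(action: str) -> bool:
    """Elimination method: any identified action that is not in the preset
    'not carefully listening' set is judged to be carefully listening."""
    return action not in NOT_LISTENING_ACTIONS

print(is_carefully_listening("taking notes"))  # True
print(is_carefully_listening("sleeping"))      # False
```

The advantage of the complement-set formulation is that actions the recognizer has never seen before default to "carefully listening" rather than requiring an exhaustive positive list.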
(II) teaching effect image system based on big data and artificial intelligence
As shown in fig. 3, an embodiment provides a teaching effect evaluation system, which includes:
the image acquisition module 500 is used for searching and acquiring the teaching effect image of the teacher to be queried from the teaching effect image knowledge base.
And the acquisition evaluation module 600 is used for acquiring the values of all the evaluation unit labels belonging to the evaluation unit to be queried from the teaching effect portrait of the teacher to be queried.
The teaching effect evaluation system has the same beneficial effects as the teaching effect evaluation method described above, and will not be described in detail here.
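The query flow through the two modules (500: fetch the teacher's portrait; 600: collect the matching label values) can be sketched as follows. The dictionary layout and the filter callback are assumptions made for illustration; the example label values (40%, 80%) follow the S309 example above:

```python
# Illustrative in-memory knowledge base:
# (teacher name, number) -> {(course, period) label -> value}
knowledge_base = {
    ("Zhang San", "2018002"): {
        ("higher mathematics", "2018-5-23 to 2018-8-12"): 0.40,
        ("English", "2018 school year"): 0.80,
    },
}

def query_label_values(teacher, unit_filter):
    """Module 500: search the knowledge base for the teacher's teaching
    effect portrait; module 600: keep the values of the labels belonging
    to the queried evaluation unit."""
    portrait = knowledge_base[teacher]
    return {label: v for label, v in portrait.items() if unit_filter(label)}

# Query example 1 from S402: a single course with explicit start/stop times.
result = query_label_values(
    ("Zhang San", "2018002"),
    lambda label: label[0] == "higher mathematics")
print(result)
```

A query like "all courses, year 2018" (example 2 in S402) would simply use a filter matching every label in the period, returning several values for the effect calculation module to combine.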
1. Image acquisition module
In a preferred embodiment, the image acquisition module 500 includes a unit 501. The unit 501 corresponds to the step S501 in the foregoing preferred embodiment and is configured to perform that step; a detailed description is not repeated here.
The image acquisition module 500 has the same advantages as those of the image acquisition step S500, and will not be described herein.
2. Acquisition evaluation module
In a preferred embodiment, the acquisition and evaluation module 600 includes units 601, 602. The units 601 and 602 correspond to the steps S601 and S602 in the foregoing preferred embodiment, respectively, and the detailed description is omitted herein. The units 601, 602 are for executing said S601, S602, respectively.
The acquiring and evaluating module 600 has the same advantages as those of the acquiring and evaluating step S600, and will not be described herein.
3. Modules following the acquisition evaluation module
In a preferred embodiment, the system further comprises, after the acquisition evaluation module 600:
The effect calculation module 700 is configured to obtain weights of all evaluation units belonging to the evaluation units to be queried, and take a value obtained by weighted average of the values of the labels of all the evaluation units according to the weights of all the evaluation units as a teaching effect of the evaluation units of the teacher to be queried.
The effect calculation module 700 in turn comprises units 701, 702, 703. The units 701, 702, 703 correspond one by one to the steps S701, S702, S703 in the foregoing preferred embodiment and are used to perform those steps, respectively; a detailed description is omitted here.
The modules following the acquisition evaluation module 600 have the same advantages as the corresponding steps following the acquisition evaluation step S600, and are not described here.
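The weighted average performed by the effect calculation module 700 (steps S701-S703, with course academic scores as weights per the claims) can be sketched as follows. The concrete label values and credit figures are illustrative assumptions:

```python
def weighted_teaching_effect(label_values, weights):
    """Weighted average of evaluation-unit label values, each course's
    academic score serving as its weight (steps S701-S703)."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(label_values, weights)) / total_weight

# E.g. querying 'all courses, year 2018' combines two evaluation units:
values = [0.40, 0.80]   # label values for higher mathematics and English
credits = [4.0, 2.0]    # assumed academic scores (credits) of the two courses
print(round(weighted_teaching_effect(values, credits), 4))  # 0.5333
```

A higher result indicates a better teaching effect, so the results of different teachers' evaluation units can be compared or ranked directly.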
4. Modules preceding the image acquisition module
As shown in FIG. 4, in a preferred embodiment, the system further comprises, before the image acquisition module 500:
the data acquisition module 100 is configured to acquire big data of a teaching process, where the big data of the teaching process includes a teaching video corresponding to each evaluation unit of each teacher;
The preset action module 200 is configured to obtain a preset action of carefully listening to a class as a first preset action;
The effect portrait module 300 is configured to take each evaluation unit of each teacher as one evaluation unit label of the teaching effect portrait of each teacher, store the ratio of the total duration of the first preset actions of all students identified in the teaching video corresponding to each evaluation unit of each teacher to the total duration of each evaluation unit as the value of the one evaluation unit label of the teaching effect portrait of each teacher in the teaching effect portrait knowledge base.
The query receiving module 400 is configured to obtain a teacher to be queried and an evaluation unit to be queried.
The modules preceding the image acquisition module 500 have the same advantages as the corresponding steps preceding the image acquisition step S500, and will not be described in detail here.
(1) In a further preferred embodiment, the acquisition data module 100 comprises units 101, 102, 103, 104. The units 101, 102, 103, 104 correspond to the steps S101, S102, S103, S104 in the foregoing preferred embodiment one by one, and the detailed description is omitted here. Units 101, 102, 103, 104 are for executing said S101, S102, S103, S104, respectively.
(2) In a further preferred embodiment, the preset action module 200 comprises units 201, 202, 203. The units 201, 202, 203 correspond to the steps S201, S202, S203 in the foregoing preferred embodiment, respectively, and are not repeated here. The units 201, 202, 203 are for executing said S201, S202, S203, respectively.
(3) In a further preferred embodiment, the effect portrait module 300 further includes units 301, 302, 303, 304, 305, 306, 307, 308, 309. The units 301, 302, 303, 304, 305, 306, 307, 308, 309 correspond to the steps S301, S302, S303, S304, S305, S306, S307, S308, S309 in the foregoing preferred embodiments, respectively, and the detailed description is omitted herein. Units 301, 302, 303, 304, 305, 306, 307, 308, 309 are used to perform said S301, S302, S303, S304, S305, S306, S307, S308, S309, respectively.
(4) In a further preferred embodiment, the accept query module 400 comprises units 401, 402. The units 401 and 402 correspond to the steps S401 and S402 in the foregoing preferred embodiment one by one, and the detailed description is not repeated here. The units 401, 402 are for executing said S401, S402, respectively.
6. Evaluation unit and preset action
In a preferred embodiment, the evaluation unit comprises a course for a preset period of time; the first preset action comprises the student raising the head and looking forward, or/and taking notes by hand.
The advantageous effects of the evaluation unit and the preset actions are as described before.
(III) teaching effect evaluation robot system based on big data and artificial intelligence
An embodiment provides a teaching effect evaluation robot system, configured with the teaching effect evaluation system.
The teaching effect evaluation robot system has the same beneficial effects as the teaching effect evaluation system described above, and will not be described here again.
The teaching effect portrait method and robot system provided by the embodiments take the teaching effect portrait, built from process big data, as the standard for teaching effect evaluation, thereby reducing or eliminating the subjectivity of evaluation by human evaluators. On the one hand, the method can be used for fully automatic teaching evaluation; on the other hand, it can assist a human evaluator, for example by providing the teaching effect portrait or the teaching evaluation result of the embodiments for the evaluator's reference.
According to the teaching effect evaluation method and robot system based on big data and artificial intelligence, each evaluation unit of each teacher serves as one evaluation unit label of that teacher's teaching effect image, and the ratio of the total duration of the first preset actions of all students identified in the teaching video corresponding to each evaluation unit to the total duration of that evaluation unit serves as the value of that label. The teaching effect of a teacher is thus evaluated more truly and objectively, which can greatly improve the objectivity and accuracy of the teaching image and the teaching evaluation.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail, but they are not to be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the invention. Accordingly, the scope of protection of the invention is to be determined by the appended claims.
Claims (9)
1. A teaching effect evaluation method is characterized in that a teaching effect portrait is a user portrait; the method comprises the steps of searching a label value of an evaluation unit of a teacher to be inquired from a teaching effect portrait to obtain the teaching effect of the evaluation unit of the teacher to be inquired; the method comprises the following steps:
a preset action step of acquiring a preset action of carefully listening to a class as a first preset action;
the preset actions of carefully listening to class comprise all actions other than the preset actions of not carefully listening to class; recognition uses an elimination method: if an identified action is not one of the preset actions of not carefully listening to class, it is judged to be a preset action of carefully listening to class; if a preset feature of an action of carefully listening to class contains a duration, matching needs to combine the corresponding actions in the video frames or photos adjacent to the identified action;
The preset action step further comprises: prompting the user to input the preset actions of carefully listening to class, each including the name of the action and the features of the action; prompting the user to input the preset actions of not carefully listening to class, each including the name of the action and the features of the action; receiving the user's input, adding the preset set of actions of carefully listening to class and the complement of the preset set of actions of not carefully listening to class into the first preset action set, and storing the first preset action set into a teaching effect recognition knowledge base;
A teaching effect image step of taking each evaluation unit of each teacher as one evaluation unit label of the teaching effect image of each teacher, and storing in a teaching effect image knowledge base, as the value of the one evaluation unit label of the teaching effect image of each teacher, the ratio of the total duration of the first preset actions of all students identified in the teaching video corresponding to each evaluation unit of each teacher to the total duration of each evaluation unit; the teaching video comprises video of the in-class teaching process, including students attending class, doing experiments, practicing, taking notes, answering questions and reading;
a step of obtaining a teaching effect portrait, in which the teaching effect portrait of the teacher to be inquired is searched and obtained from a teaching effect portrait knowledge base;
a teaching evaluation step of acquiring values of all evaluation unit labels belonging to the evaluation unit to be queried from the teaching effect portrait of the teacher to be queried;
An effect calculation step of obtaining weights of all evaluation units belonging to the evaluation units to be queried, and taking a value obtained by carrying out weighted average on the values of the labels of all the evaluation units according to the weights of all the evaluation units as a teaching effect of the evaluation units of the teacher to be queried; specifically, the corresponding academic scores of courses of all the evaluation units belonging to the evaluation unit to be queried are obtained as weights; the values of all the evaluation unit labels are weighted and averaged according to the weights of all the evaluation units; taking the value obtained after weighted averaging as the teaching effect of the evaluation unit of the teacher to be inquired;
The higher the value obtained after the weighted average is, the better the teaching effect of the evaluation unit of the teacher to be inquired is judged; the lower the value obtained after the weighted average is, the worse the teaching effect of the evaluation unit of the teacher to be inquired is judged;
the relative merits of the teaching effects of the evaluation units of the teachers to be queried can be judged by comparing the values obtained after the respective weighted averages; the evaluation units of a plurality of different teachers are ranked from good to bad according to teaching effect;
and a weighted average value is calculated by integrating the label values of all the evaluation units belonging to the evaluation unit to be queried, so that not only the teaching effect corresponding to an existing evaluation unit in the image, but also the teaching effect corresponding to an evaluation unit formed by combining a plurality of evaluation units in the image, can be evaluated.
2. The teaching effect evaluation method according to claim 1, wherein the step of obtaining a teaching effect image further comprises:
and receiving a query step, and obtaining a teacher to be queried and an evaluation unit to be queried.
3. The teaching effect evaluation method according to claim 1 or 2, characterized in that before the step of obtaining a teaching effect image, the method further comprises:
And a data acquisition step, namely acquiring teaching process big data, wherein the teaching process big data comprise teaching videos corresponding to each evaluation unit of each teacher.
4. The teaching effect evaluation method according to claim 3, wherein the evaluation unit includes courses for a preset period; the first preset action comprises the student raising the head and looking forward, or/and taking notes by hand.
5. A teaching effect evaluation system is characterized in that a teaching effect portrait is a user portrait; the system searches the label value of the evaluation unit of the teacher to be inquired from the teaching effect portrait to obtain the teaching effect of the evaluation unit of the teacher to be inquired; the weighted average value is calculated by integrating the label values of all the evaluation units belonging to the evaluation units to be queried, so that not only the teaching effect corresponding to the existing evaluation units in the teaching effect image can be evaluated, but also the teaching effect corresponding to the evaluation units formed by combining a plurality of evaluation units in the teaching effect image can be evaluated;
The system comprises:
the preset action module is used for acquiring preset action of carefully listening to the lesson and taking the action as a first preset action;
the preset actions of carefully listening to class comprise all actions other than the preset actions of not carefully listening to class; recognition uses an elimination method: if an identified action is not one of the preset actions of not carefully listening to class, it is judged to be a preset action of carefully listening to class; if a preset feature of an action of carefully listening to class contains a duration, matching needs to combine the corresponding actions in the video frames or photos adjacent to the identified action;
The preset action module is further used for: prompting the user to input the preset actions of carefully listening to class, each including the name of the action and the features of the action; prompting the user to input the preset actions of not carefully listening to class, each including the name of the action and the features of the action; receiving the user's input, adding the preset set of actions of carefully listening to class and the complement of the preset set of actions of not carefully listening to class into the first preset action set, and storing the first preset action set into a teaching effect recognition knowledge base;
The teaching effect image module is used for taking each evaluation unit of each teacher as one evaluation unit label of the teaching effect image of each teacher, taking the ratio of the total duration of the first preset actions of all students identified in the teaching video corresponding to each evaluation unit of each teacher to the total duration of each evaluation unit as the value of the one evaluation unit label of the teaching effect image of each teacher, and storing the value into a teaching effect image knowledge base; the teaching video comprises video of the in-class teaching process, including students attending class, doing experiments, practicing, taking notes, answering questions and reading;
the teaching effect portrait acquisition module is used for searching and acquiring the teaching effect portrait of the teacher to be inquired from a teaching effect portrait knowledge base;
the teaching evaluation module is used for acquiring the values of all the evaluation unit labels belonging to the evaluation unit to be queried from the teaching effect portrait of the teacher to be queried;
The effect calculation module is used for obtaining the weights of all the evaluation units belonging to the evaluation unit to be queried, and taking a value obtained by weighted averaging of the values of all the evaluation unit labels according to the weights of all the evaluation units as the teaching effect of the evaluation unit of the teacher to be queried; specifically, the corresponding academic scores of the courses of all the evaluation units belonging to the evaluation unit to be queried are obtained as weights; the values of all the evaluation unit labels are weighted and averaged according to the weights of all the evaluation units; and the value obtained after weighted averaging is taken as the teaching effect of the evaluation unit of the teacher to be queried;
The higher the value obtained after the weighted average is, the better the teaching effect of the evaluation unit of the teacher to be inquired is judged; the lower the value obtained after the weighted average is, the worse the teaching effect of the evaluation unit of the teacher to be inquired is judged;
The relative advantages and disadvantages of the teaching effects of the evaluation units of the teachers to be queried can be judged by comparing the values obtained after different weighted averages;
the weighted average value is calculated by integrating the label values of all the evaluation units belonging to the evaluation units to be queried, so that not only the teaching effect corresponding to the existing evaluation units in the image, but also the teaching effect corresponding to the evaluation units formed by combining a plurality of evaluation units in the image can be evaluated.
6. The teaching effect evaluation system according to claim 5, characterized in that the system further comprises:
and the query receiving module is used for acquiring a teacher to be queried and an evaluation unit to be queried.
7. The teaching effect evaluation system according to claim 5, characterized in that the system further comprises:
The data acquisition module is used for acquiring large teaching process data, wherein the large teaching process data comprise teaching videos corresponding to each evaluation unit of each teacher.
8. The teaching effect evaluation system according to claim 7, wherein the evaluation unit includes courses for a preset period of time; the first preset action comprises the student raising the head and looking forward, or/and taking notes by hand.
9. A teaching effect evaluation robot system, characterized in that the robot system is configured with the teaching effect evaluation system according to any one of claims 5 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810632878.5A CN108876677B (en) | 2018-06-20 | 2018-06-20 | Teaching effect evaluation method based on big data and artificial intelligence and robot system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108876677A CN108876677A (en) | 2018-11-23 |
CN108876677B true CN108876677B (en) | 2024-09-13 |
Family
ID=64339984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810632878.5A Active CN108876677B (en) | 2018-06-20 | 2018-06-20 | Teaching effect evaluation method based on big data and artificial intelligence and robot system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108876677B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765229B (en) * | 2018-06-20 | 2023-11-24 | 大国创新智能科技(东莞)有限公司 | Learning performance evaluation method based on big data and artificial intelligence and robot system |
CN109711263B (en) * | 2018-11-29 | 2021-06-04 | 国政通科技有限公司 | Examination system and processing method thereof |
CN116757524B (en) * | 2023-05-08 | 2024-02-06 | 广东保伦电子股份有限公司 | Teacher teaching quality evaluation method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106485964A (en) * | 2016-10-19 | 2017-03-08 | 深圳市鹰硕技术有限公司 | A kind of recording of classroom instruction and the method and system of program request |
CN107085721A (en) * | 2017-06-26 | 2017-08-22 | 厦门劢联科技有限公司 | A kind of intelligence based on Identification of Images patrols class management system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100833097B1 (en) * | 2007-03-27 | 2008-06-10 | 에이치에스베어링 주식회사 | New generation educator rating method and system thereof |
TWI614716B (en) * | 2015-03-19 | 2018-02-11 | 宏鼎資訊股份有限公司 | Interactive teacher and student service platform |
CN106203811A (en) * | 2016-07-05 | 2016-12-07 | 上海电力学院 | A kind of Evaluation Method of Teaching Quality |
CN107124653B (en) * | 2017-05-16 | 2020-09-29 | 四川长虹电器股份有限公司 | Method for constructing television user portrait |
CN107895244A (en) * | 2017-12-26 | 2018-04-10 | 重庆大争科技有限公司 | Classroom teaching quality assessment method |
- 2018-06-20: CN CN201810632878.5A patent CN108876677B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN108876677A (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109359215B (en) | Video intelligent pushing method and system | |
CN108765229B (en) | Learning performance evaluation method based on big data and artificial intelligence and robot system | |
CN108829842B (en) | Learning expression image method and robot system based on big data and artificial intelligence | |
CN109215632A (en) | A kind of speech evaluating method, device, equipment and readable storage medium storing program for executing | |
CN112395403B (en) | Knowledge graph-based question and answer method, system, electronic equipment and medium | |
CN108876677B (en) | Teaching effect evaluation method based on big data and artificial intelligence and robot system | |
Papadopoulos et al. | The dimensionality of phonological abilities in Greek | |
CN111027865A (en) | Classroom teaching analysis and quality assessment system and method based on intelligent behavior and expression recognition | |
CN104063443A (en) | Method and device for providing search result | |
CN110753256A (en) | Video playback method and device, storage medium and computer equipment | |
CN114021962A (en) | Teaching evaluation method, evaluation device and related equipment and storage medium | |
CN108629715A (en) | Accurate teaching method and robot system based on big data and artificial intelligence | |
CN108804705B (en) | Review recommendation method based on big data and artificial intelligence and education robot system | |
Jong et al. | Dynamic grouping strategies based on a conceptual graph for cooperative learning | |
CN114493944A (en) | Method, device and equipment for determining learning path and storage medium | |
JP2015219247A (en) | Nursing learning system, nursing learning server, and program | |
CN108921405A (en) | Accurate learning evaluation method and robot system based on big data and artificial intelligence | |
CN108805770A (en) | Content of courses portrait method based on big data and artificial intelligence and robot system | |
CN110826796A (en) | Score prediction method | |
Moseley et al. | Exploring Mental Models of Science Teachers Using Digital Storytelling. | |
CN108764757A (en) | Accurate Method of Teaching Appraisal and robot system based on big data and artificial intelligence | |
CN108776794B (en) | Teaching effect image drawing method based on big data and artificial intelligence and robot system | |
CN111667128A (en) | Teaching quality assessment method, device and system | |
CN113254752B (en) | Lesson preparation method and device based on big data and storage medium | |
KR101023901B1 (en) | System and method for learning management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||