
WO2020209171A1 - Information processing device, information processing system, information processing method, and information processing program - Google Patents

Information processing device, information processing system, information processing method, and information processing program

Info

Publication number
WO2020209171A1
WO2020209171A1 (PCT/JP2020/015187)
Authority
WO
WIPO (PCT)
Prior art keywords
information
unit
importance
discrimination
series
Prior art date
Application number
PCT/JP2020/015187
Other languages
French (fr)
Japanese (ja)
Inventor
Shin Egami
Ryo Yonetani
Original Assignee
OMRON Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2020032229A external-priority patent/JP2020173787A/en
Application filed by OMRON Corporation
Publication of WO2020209171A1 publication Critical patent/WO2020209171A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Definitions

  • the present invention relates to an information processing device, an information processing system, an information processing method, and an information processing program.
  • Patent Document 1 describes a technique of comparing a result of monitoring a learner's reading frequency and gaze retention with another learner and presenting the comparison result to a learner and an instructor.
  • the information processing apparatus includes a series information acquisition unit that acquires annotated series information in which an annotation is added to each element included in the series information related to the target person.
  • an importance determination unit that determines the importance of each element included in the annotated series information of a target person by referring to that annotated series information;
  • and a discrimination integration unit that generates discrimination integration information by referring to the determination results of the importance determination unit for a plurality of target persons, and a presentation information generation unit that generates presentation information by referring to at least one of the determination result of the importance determination unit and the discrimination integration information.
  • the information processing system includes a series information acquisition unit that acquires series information, an annotation addition unit that generates annotated series information by adding an annotation to each element included in the acquired series information, a presentation information acquisition unit that acquires presentation information generated by referring to the annotated series information, and a presentation unit that presents the presentation information.
  • the information processing device includes an acquisition unit that acquires annotated series information in which an annotation is added to each element included in acquired series information,
  • an importance determination unit that determines the importance of each element included in the annotated series information of a certain target person by referring to that annotated series information,
  • and a discrimination integration unit that generates discrimination integration information by referring to the determination results of the importance determination unit for a plurality of target persons, and a presentation information generation unit that generates presentation information by referring to at least one of the determination result of the importance determination unit and the discrimination integration information.
  • the information processing method includes an annotation adding step of generating annotated series information by adding an annotation to each element included in series information about a target person, and an importance determination step of determining the importance of each element included in the annotated series information of the target person by referring to that information.
  • the method further includes a discrimination integration step of generating discrimination integration information by referring to the determination results of the importance determination step for a plurality of target persons, and a presentation information generation step of generating presentation information by referring to at least one of the determination result of the importance determination step and the discrimination integration information.
  • according to the present invention, important information can be suitably identified and presented from among information having a sequence.
  • FIG. 1 is a block diagram schematically illustrating a configuration example of the information processing system 1 according to the embodiment of the present invention.
  • the information processing system 1 is, for example, an information processing system used in a cram school or the like.
  • the information processing system 1 includes a server 100 which is an information processing device, learner terminal devices 200A, 200B, 200C, and a lecturer terminal device 1000.
  • the server 100, the learner terminal devices 200A, 200B, 200C, and the instructor terminal device 1000 are connected via a network and can communicate with each other.
  • the type of network may be any, such as the Internet, a telephone network, and a dedicated network.
  • the learner terminal devices 200A, 200B, and 200C may be collectively referred to as the learner terminal device 200.
  • the learner terminal devices 200A, 200B, and 200C are, for example, computers assigned to each learner (hereinafter, also referred to as a target person) A, B, C in a cram school.
  • each of the learner terminal devices 200A, 200B, and 200C is equipped with a camera 241, a microphone 242, and the like, and can record, as audio and video, the state and utterances of each target person during a lecture or a mock test.
  • the recording and the recorded moving image are examples of the series information in the present embodiment.
  • the series information means general information in which the order of each element included in the information is meaningful.
  • the series information may include moving image data, audio data, text data, data indicating changes in numerical values with time, and the like.
  • each terminal device 200 can generate annotated sequence information by adding annotations to the sequence information and record it in each storage unit 230.
  • the annotation is meta information given to each element included in the series information.
  • the annotation includes meta information regarding the state of the subject at each time point or each period during the recording of the series information.
  • the type of annotation is not limited to this embodiment, but as an example, an index indicating the degree of concentration of the target person at that time can be used as an annotation.
  • each learner terminal device 200A, 200B, and 200C transmits the series information about each target person with this annotation to the server 100.
  • the server 100 determines the importance of each element included in the series information based on the series information with annotations transmitted from each terminal device 200. Then, the determination result of the importance determined based on the annotated series information transmitted from each terminal device 200 is integrated to generate the presentation information.
  • the server 100 stores the generated presentation information in the storage unit 130 and transmits it to each terminal device 200 or 1000.
  • Each terminal device 200 or 1000 presents the presentation information transmitted from the server 100 and transmits the feedback information to the server 100.
  • the target person may confirm the presentation information together with the instructor, and may transmit to the server 100, via each terminal device 200 or 1000, feedback information regarding updates to the importance determination logic or to the discrimination logic used in the discrimination integration process described later. That is, feedback information resulting from correcting and confirming the presentation information on each terminal device 200 or 1000 may be transmitted to the server 100.
  • the server 100 updates the importance determination logic based on the feedback information transmitted from each terminal device 200 or 1000.
  • as feedback accumulates, the information that can be used for determining the importance increases, so that the importance determination logic and the discrimination logic used in the discrimination integration process are improved.
  • the importance of the series information can be determined more appropriately.
  • the instructor can look back on the video while focusing on the information that is likely to lead to improvement of the instruction content, and can thus use time efficiently to improve the instruction.
  • as an example, the learner terminal devices 200A, 200B, and 200C according to the embodiment have the same configuration.
  • the learner terminal device 200 includes a control unit 210, a communication unit 220, a storage unit 230, a camera 241 and a microphone 242, an operation reception unit 243, a display unit (presentation unit) 244, and a speaker 245.
  • the communication unit 220 performs communication processing with an external device such as the server 100.
  • the storage unit 230 is a storage device that stores various types of data.
  • the operation reception unit 243 is an interface for receiving input operations from the target person or others, such as a keyboard or buttons.
  • the display unit 244 is a display panel for displaying a moving image.
  • the operation reception unit 243 and the display unit 244 may be configured to be realized as a touch panel that accepts input operations of the target person or the like and displays a moving image.
  • the control unit 210 is a control device that controls the entire learner terminal device 200, and includes a series information acquisition unit 212 that acquires series information via the camera 241 and the like,
  • an annotation addition unit 214 that generates annotated series information by adding an annotation to each element included in the acquired series information,
  • and a presentation information acquisition unit 216 that acquires, via the communication unit 220, presentation information generated by referring to the annotated series information. The control unit 210 further includes a discrimination integrated information acquisition unit (not shown) that acquires discrimination integrated information generated by referring to the determination results regarding the annotated series information of one or more target persons.
  • the series information acquisition unit 212 acquires series information about the target person.
  • examples of the series information acquired by the series information acquisition unit 212 are listed below. As an example, these data are acquired during a mock test or a lecture; however, the acquisition situation is not limited to this embodiment, and the data may be acquired in other situations.
  • - video of the target person captured by the camera 241 (this may include the target person's face)
  • - voice of the target person collected by the microphone 242
  • - operation input by the target person received by the operation reception unit 243 (this may include text data)
  • the series information may include information regarding the line of sight of the target person.
  • the series information acquisition unit 212 acquires the face information of the target person from the image acquired from the camera 241.
  • the face information includes position information indicating the position of each part of the target person's face (for example, eyes, nose, mouth, and eyebrows), shape information indicating their shape, size information indicating their size, and the like.
  • in addition, the line of sight of the target person is detected as the target person's state.
  • the line of sight is particularly important as an index showing the degree of concentration of the subject on the task. The detection of the line of sight of the subject will be described later.
  • the annotation unit 214 generates the annotated series information by automatically adding an annotation to each element included in the acquired series information or by adding an annotation based on a user's instruction.
  • Annotations are attached to the elements included in the series information.
  • annotation A is attached to the section from 00:01 to 00:02 of the moving image data
  • annotation B is attached to the section from 00:05 to 00:06.
  • annotation X is attached to the clause AA in the sentence A included in the text data
  • the annotation Y is attached to the clause BB.
  • the annotation unit 214 may add information indicating whether or not the target person has visually recognized a specific area to the series information as annotations.
  • the operation reception unit 243 accepts operations for adding annotations to the series information and operations for inputting the importance, described later, for each part of the series information. For example, when the user instructs, via the operation reception unit 243, that an annotation be added to a certain part of the series information, the annotation unit 214 annotates the instructed part.
  • as an example, the annotation unit 214 evaluates the target person's degree of concentration at each time point of the recorded video on a five-level scale, and attaches the degree of concentration as an annotation to each time point of the video, which is the series information.
  • by identifying, from the video of the target person during the mock test, the question sentence or question that the target person is visually recognizing at each time point, information indicating which question sentence or question is being viewed can be added as an annotation to each time point of the video.
  • adding such information as an annotation is possible, for example, by matching the coordinates indicated by the tip of the target person's line of sight against the coordinates on the question sheet of the mock test.
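The coordinate matching described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation; the region coordinates and question IDs are hypothetical.

```python
# Hypothetical layout: each question occupies an axis-aligned rectangle on the
# mock-test sheet, given as (question_id, x_min, y_min, x_max, y_max) in sheet
# coordinates. Real systems would calibrate gaze to sheet coordinates first.
QUESTION_REGIONS = [
    ("Q1", 0, 0, 100, 50),
    ("Q2", 0, 50, 100, 100),
    ("Q3", 0, 100, 100, 150),
]

def question_at(gaze_x, gaze_y):
    """Return the ID of the question the gaze point falls on, or None."""
    for qid, x0, y0, x1, y1 in QUESTION_REGIONS:
        if x0 <= gaze_x < x1 and y0 <= gaze_y < y1:
            return qid
    return None

def annotate_gaze_track(track):
    """Attach a question-ID annotation to each (time, x, y) gaze sample."""
    return [(t, question_at(x, y)) for t, x, y in track]
```

For example, `annotate_gaze_track([(0, 10, 20), (1, 10, 70), (2, 200, 20)])` yields `Q1`, `Q2`, and `None` (gaze off the sheet) for the three time points.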
  • annotation processing by the annotation unit 214 is not limited to the above example.
  • another image analysis algorithm or voice analysis algorithm may be used to determine the points to be annotated, and the annotation unit may be configured to annotate the determined points.
  • the annotation may be defined by a start time, an end time, a tag, and a reliability.
  • the tag is information indicating the type of annotation, for example, information indicating the degree of concentration and the degree of understanding of the target person.
  • the reliability is information indicating the certainty of the target annotation.
  • the annotation unit 214 can calculate the reliability of each annotation added by the annotation processing, which indicates how reliable the annotation is. Further, in addition to the annotation, the annotation unit 214 may add information indicating the reliability of the annotation to each element in the series information.
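An annotation defined by a start time, an end time, a tag, and a reliability, as described above, can be sketched as a simple record. The field names and value ranges are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float        # start time in seconds from the beginning of the series
    end: float          # end time in seconds
    tag: str            # type of annotation, e.g. "concentration" or "understanding"
    reliability: float  # certainty of the annotation, assumed here to lie in [0.0, 1.0]

    def covers(self, t: float) -> bool:
        """True if time t falls inside this annotation's interval."""
        return self.start <= t < self.end

# Example: a concentration annotation on the 00:01-00:02 section of a video.
a = Annotation(start=1.0, end=2.0, tag="concentration", reliability=0.8)
```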
  • the degree of concentration may be set to a relatively high value when the variation of the target person's visual target is small (when the variation of the line of sight is small), and to a relatively low value when the variation of the visual target is large (when the variation of the line of sight is large).
  • the annotation unit 214 can specify the degree of concentration of the target person by acquiring and referring to the line-of-sight information of the target person.
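One way to turn line-of-sight variation into the five-level concentration score mentioned earlier is to measure the variance of gaze positions over a time window, as in the hedged sketch below. The variance thresholds are illustrative assumptions; the patent does not specify them.

```python
from statistics import pvariance

def concentration_level(gaze_xs, gaze_ys, thresholds=(1.0, 4.0, 9.0, 16.0)):
    """Map gaze-position variance in a time window to a 1..5 concentration level.

    Small variance (a steady gaze) yields level 5; each exceeded threshold
    lowers the level by one, down to 1 for a widely wandering gaze.
    """
    spread = pvariance(gaze_xs) + pvariance(gaze_ys)  # total positional variance
    level = 5
    for t in thresholds:
        if spread > t:
            level -= 1
    return level
```

A perfectly steady gaze (`concentration_level([0, 0, 0, 0], [0, 0, 0, 0])`) scores 5, while a gaze jumping between distant points scores 1.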
  • the comprehension level may be set to a high value when the time for the subject to visually recognize the object to be understood is relatively long (or short).
  • the annotation unit 214 receives the score of the mock test input by the target person after the mock test via the operation reception unit 243.
  • the comprehension level may be set by referring to the information indicating which question was answered correctly.
  • the annotation unit 214 may set a high degree of understanding for the target portion in the series information when the target person satisfies a predetermined condition, such as performing a specific operation at a specific timing.
  • the information indicating the above-mentioned predetermined conditions may be stored in advance in the storage unit 230 or the like.
  • annotation unit 214 may update the discrimination logic (annotation logic) used when annotating the series information by referring to the information included in the presentation information, for example.
  • the annotation addition logic may be updated by using the annotations attached by the user as feedback information. Further, the annotation unit 214 may update the annotation logic with reference to the discrimination integration information. That is, the annotation addition unit 214 may update the annotation addition logic by referring to at least one of the feedback information from the user and the discrimination integration information.
  • the presentation information acquisition unit 216 acquires the presentation information generated by referring to the annotated series information in the presentation information generation unit 118 of the server 100.
  • the display unit 244 displays the presentation information acquired by the presentation information acquisition unit 216.
  • the presentation information is not limited to the moving image presented via the display unit 244, and may be the sound presented via the speaker 245.
  • the learner terminal device 200 may be configured to receive feedback information from the user via the operation reception unit 243, similarly to the instructor terminal device 1000 described later.
  • the target person and the instructor confirm the presentation information displayed by the display unit 244, and review the importance determination results included in the presentation information while discussing them. If anything regarding the importance needs to be corrected, they enter, into the operation reception unit 243, information indicating which part should be corrected and how to correct it.
  • a point to be corrected regarding the importance is, for example, a determination result for which the target person judges the corresponding presentation information not to be important even though the importance determination result indicates a relatively high importance.
  • the operation reception unit 243 may be configured to generate feedback information including the above-mentioned input information and transmit it to the server 100 via the communication unit 220.
  • the server 100 of the information processing system 1 includes a control unit 110, a communication unit 120, and a storage unit 130.
  • the communication unit 120 can communicate with other information processing devices (learner terminal device 200, instructor terminal device 1000) included in the information processing system 1.
  • the storage unit 130 can store information transmitted from other information processing devices, information integrated by the server, and the like.
  • the control unit 110 is a control device that controls the entire server 100, and includes a series information acquisition unit 112, an importance determination unit 114, a discrimination integration unit 116, and a presentation information generation unit 118.
  • the communication unit 120 performs communication processing with an external device such as the learner terminal device 200.
  • the storage unit 130 is a storage device that stores various types of data.
  • the series information acquisition unit 112 acquires annotated series information in which annotations are added to each element included in the series information regarding the target person.
  • the series information acquisition unit 112 acquires information in which annotations are added to the series information about each target person transmitted from each learner terminal device 200 via the communication unit 120.
  • the importance determination unit 114 refers to the annotated series information about the target person, and determines the importance of each element included in the annotated series information of the target person.
  • the importance determination unit 114 may determine the importance based on the degree of concentration of the target person. As an example, when the concentration level remains at or above a predetermined value, or at or below a predetermined value, for a predetermined period or longer, the importance determination unit 114 determines that the section has a non-zero importance.
  • the server 100 acquires the annotated series information as shown in the table below from each terminal device 200.
  • Concentration level by time:

        Time             00:01  00:02  00:03  00:04  00:05
        Target person A    1      5      5      3      1
        Target person B    1      5      5      5      1
        Target person C    2      5      5      5      5
  • the importance determination unit 114 determines a section in which concentration level 5 continues for 2 seconds or more as a section having a non-zero importance. For example, since the target person A maintains concentration level 5 for 2 seconds from 00:02 to 00:03, the importance determination unit 114 determines that the importance of that section is 1.
  • since the target person B maintains concentration level 5 for 3 seconds from 00:02 to 00:04, the importance determination unit 114 determines that the importance of that section is 2. Further, since the target person C maintains concentration level 5 for 4 seconds from 00:02 to 00:05, the importance determination unit 114 determines that the importance of that section is 3.
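The worked example above can be sketched as a run-length scan over per-second concentration levels. Following the pattern in the example (2 s → 1, 3 s → 2, 4 s → 3), the sketch assigns importance = run length − 1; this formula is inferred from the example and is an assumption, not a rule stated in the text.

```python
def important_sections(levels, target=5, min_len=2):
    """Return (start_index, end_index, importance) for each run of `target`
    that lasts at least `min_len` samples (one sample per second)."""
    sections, start = [], None
    for i, lv in enumerate(levels + [None]):   # sentinel flushes the last run
        if lv == target and start is None:
            start = i
        elif lv != target and start is not None:
            length = i - start
            if length >= min_len:
                sections.append((start, i - 1, length - 1))
            start = None
    return sections

# Target persons A, B, C from the table (index 0 = 00:01):
print(important_sections([1, 5, 5, 3, 1]))  # A -> [(1, 2, 1)]
print(important_sections([1, 5, 5, 5, 1]))  # B -> [(1, 3, 2)]
print(important_sections([2, 5, 5, 5, 5]))  # C -> [(1, 4, 3)]
```

Each result reproduces the importance values 1, 2, and 3 derived in the text for the 00:02-onward sections.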
  • the importance determination unit 114 can be configured to determine a section in which the concentration levels 1 and 2 continue for 2 seconds or more as a section having a non-zero importance.
  • the importance determination unit 114 may determine that the lower (or higher) the degree of understanding of the target person is, the higher the importance of the target part is. Alternatively, the importance determination unit 114 may determine that the location where the subject's line of sight begins to disperse (or the location where the line of sight begins to concentrate) is of high importance in a lecture or the like.
  • the importance determination unit 114 may be configured to perform regression analysis on the annotated series information and to determine the importance by referring to the various parameters obtained from the regression analysis. For example, regression analysis is applied to the annotated series information of a certain target person, and a parameter describing a regression curve for that information is derived. The value of the parameter may then be compared with a predetermined parameter value, and the importance may be determined according to the comparison result.
  • the importance determination unit 114 updates the determination logic used for the importance determination process with reference to the feedback information acquired via the communication unit 120. As an example, when the importance determination unit 114 obtains feedback information indicating that the importance should be lowered for an element with importance 5, it updates the determination logic so that a lower importance is assigned to such an element.
  • the importance determination unit 114 may determine that a certain period is more important the more the target person's line of sight or movement during that period deviates from that of other target persons. Further, the importance determination unit 114 may determine that a period is more important the more the target person's line of sight or movement during that period deviates from a line of sight or movement set as a norm.
  • the above-mentioned information indicating the line of sight or movement that serves as a norm may be stored in advance in the storage unit 130 or the like.
  • the importance determination unit 114 may perform the importance determination process on the assumption that a portion that contributes highly to improving the discrimination logic used when the annotation unit 214 annotates the series information has a high importance.
  • the discrimination integration unit 116 generates discrimination integration information by referring to the determination results of the importance determination unit 114 for the plurality of target persons A, B, C, and so on. For example, by collecting and integrating the series information acquired from each terminal device 200, the discrimination integration unit 116 can determine the importance of the series information more appropriately.
  • the discrimination integration unit 116 may extract common information from the discrimination results by the importance discrimination unit 114 for each of the plurality of subjects A, B, and C and include it in the discrimination integration information.
  • as an example, suppose the importance determination unit 114 determines the importance for the target persons A, B, and C as follows.

        Time             00:01  00:02  00:03  00:04  00:05
        Target person A    0      1      1      0      0
        Target person B    0      2      2      2      0
        Target person C    0      3      3      3      3
  • in this case, the discrimination integration unit 116 determines that a time such as 00:02, at which all of the target persons A, B, and C have a non-zero importance, is important, and includes that determination result in the discrimination integration information.
  • the discrimination integration unit 116 may include, in the discrimination integration information, information indicating that the target person's line of sight or movement in a certain period differs from that of other target persons, and information indicating the degree of deviation.
  • the discrimination integration unit 116 may also include, in the discrimination integration information, information indicating that the target person's line of sight or movement in a certain period deviates from a line of sight or movement set as a norm, and information indicating the degree of deviation.
  • the discrimination integration unit 116 may include, in the discrimination integration information, meta information indicating matters related to the target person, such as the target person's past performance records, or meta information indicating matters related to the target person's environment, such as the mock test the target person is taking. Each of these pieces of meta information may also be included in the determination result by the importance determination unit 114.
  • the discrimination integration unit 116 may extract, from among the parameters obtained by the regression analysis on the target persons A, B, and C, parameters having common properties, and include those parameters in the discrimination integration information. In addition, the discrimination integration unit 116 may generate a commonly usable regression model by referring to the regression models corresponding to a plurality of target persons. In other words, the discrimination integration unit 116 may take a plurality of discrimination algorithms, or the parameters they refer to, as input data, and output an integrated discrimination algorithm or the parameters it refers to.
  • the discrimination integration unit 116 can thereby generate discrimination integration information that reflects, for example, portions considered to be important by a plurality of lecturers.
  • the discrimination integration unit 116 updates the discrimination logic used in the discrimination integration process that generates the discrimination integration information, with reference to the feedback information acquired via the communication unit 120. As an example, when the discrimination integration unit 116 obtains feedback information indicating that the importance should be lowered for an element with importance 5, it updates the discrimination logic so that a lower importance is assigned to such an element.
  • the discrimination integration unit 116 generates information for updating the importance discrimination logic or the updated discrimination logic, and supplies the information to the importance discrimination unit 114.
  • the importance determination unit 114 updates the importance determination logic with reference to the acquired information. Thereby, for example, the result of the importance determination process can be updated to a more suitable one by the series information corresponding to a plurality of target persons.
  • the discrimination integration unit 116 generates information for updating the discrimination logic used for adding annotations or the updated discrimination logic, and transmits the updated discrimination logic to the learner terminal device 200 via the communication unit 120.
  • the annotation unit 214 updates the above-mentioned discrimination logic with reference to the acquired information. Thereby, for example, the process of annotating can be updated to a more suitable one by the series information corresponding to a plurality of target persons.
  • the discriminant integration unit 116 may refer to the discriminant integration information generated by itself when updating each of the above-mentioned discrimination logics.
  • the annotation logic used by the annotation unit 214, the importance determination logic used by the importance determination unit 114, and the integration logic used by the discrimination integration unit 116 are not limited to the above examples. Further, the processing by the annotation unit 214, the importance determination unit 114, and the discrimination integration unit 116 may use rule-based logic, machine learning such as a neural network, or other methods.
  • any of the following machine learning methods or a combination thereof can be used.
  • the input data may be processed in advance for input to the neural network.
  • SVM: Support Vector Machine
  • ILP: Inductive Logic Programming
  • GP: Genetic Programming
  • BN: Bayesian Network
  • NN: Neural Network
  • a convolutional neural network (CNN: Convolutional Neural Network) including convolution processing may be used. More specifically, the neural network may include, as one or more of its layers, a convolution layer that performs a convolution operation, in which a filter operation (product-sum operation) is applied to the input data supplied to that layer. When performing the filter operation, processing such as padding may be used together, and an appropriately set stride width may be adopted.
  • a multi-layered or super-multilayered neural network having tens to thousands of layers may be used.
  • the machine learning described above may be supervised learning or unsupervised learning.
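  • the filter operation (product-sum operation) with padding and stride described above can be illustrated with a short sketch. The following minimal NumPy implementation is for illustration only; an actual embodiment would typically rely on a deep-learning framework, and the toy input sizes are assumptions.

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Filter (product-sum) operation of a single convolution layer."""
    if padding > 0:
        # pad the input with zeros before the filter operation
        image = np.pad(image, padding, mode="constant")
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # product-sum operation
    return out

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 input
y = conv2d(x, np.ones((2, 2)), stride=2)      # 2x2 filter, stride width 2
```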
  • the presentation information generation unit 118 generates presentation information by referring to at least one of the discrimination result by the importance discrimination unit 114 and the discrimination integrated information. For example, from the discrimination integrated information generated by the discrimination integration unit 116, information processed for each target person is transmitted to the corresponding terminal device 200. The generated integrated information and presentation information are stored in the storage unit 130.
  • the presentation information generation unit 118 may preferentially include the corresponding information in the presentation information as the importance indicated by the determination result by the importance determination unit 114 becomes higher.
  • the presentation information generation unit 118 may preferentially include the corresponding information in the presentation information the more the information included in the discrimination integrated information indicates a high degree of dissociation between the line of sight and movement of the target person during a certain period and the line of sight and movement of another target person.
  • similarly, the more the information included in the discrimination integrated information indicates a high degree of dissociation between the line of sight and movement of the target person in a certain period and the line of sight and movement set as a norm, the more preferentially the presentation information generation unit 118 may include the corresponding information in the presentation information.
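  • the prioritization described above can be sketched as a simple sort. The element format (label, importance, dissociation) and the cutoff of three items are assumptions made only for this illustration.

```python
def build_presentation_info(elements, max_items=3):
    """Order elements so that higher importance comes first, breaking
    ties by the degree of dissociation from the norm or other users."""
    ranked = sorted(elements, key=lambda e: (e[1], e[2]), reverse=True)
    return [label for label, _, _ in ranked[:max_items]]

elements = [
    ("t=00:05", 2, 0.1),
    ("t=00:12", 5, 0.8),  # high importance, large dissociation
    ("t=00:20", 5, 0.3),
    ("t=00:31", 1, 0.0),
]
presentation = build_presentation_info(elements)
```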
  • the instructor terminal device 1000 includes a control unit 1010, an operation reception unit 1043, a display unit 1044, a speaker 1045, a communication unit 1020, and a storage unit 1030.
  • the control unit 1010 is a control device that controls the entire instructor terminal device 1000, and includes a presentation information acquisition unit 1014 and a feedback information acquisition unit 1012.
  • the communication unit 1020 performs communication processing with an external device such as the server 100.
  • the storage unit 1030 is a storage device that stores various data.
  • the operation reception unit 1043 is an interface for receiving input operations of a lecturer or the like, and is, for example, a button such as a keyboard.
  • the display unit 1044 is a display panel for displaying a moving image.
  • the operation reception unit 1043 and the display unit 1044 may be realized as a touch panel that accepts input operations of a lecturer or the like and displays a moving image.
  • the presentation information acquisition unit 1014 acquires the presentation information generated by the presentation information generation unit 118 of the server 100 with reference to the annotated sequence information.
  • the display unit 1044 displays the presentation information acquired by the presentation information acquisition unit 1014.
  • the feedback information acquisition unit 1012 acquires feedback information from the user.
  • the instructor confirms the presentation information displayed by the display unit 1044, including the importance determination results included in the presentation information. If there is anything about the importance that needs to be corrected, the instructor enters, into the operation reception unit 1043, information including which part should be corrected and how it should be corrected.
  • a part to be corrected with respect to the importance is, for example, a part for which the importance determination result shows a relatively high importance even though the instructor judges that the corresponding presentation information is not important.
  • the operation reception unit 1043 generates feedback information including the input information, and transmits the feedback information to the server 100 via the communication unit 1020.
  • the importance determination unit 114 of the server 100 updates the determination logic with reference to the feedback information from the user.
  • the feedback information acquisition unit 1012 may be provided in the instructor terminal device 1000 or in the learner terminal device 200.
  • the series information acquisition unit 212 acquires the face information of the target person from the moving image taken by the camera 241.
  • the face information of the subject includes, for example, position information indicating the position of each part of the face (for example, eyes, nose, mouth, eyebrows, etc.), shape information indicating the shape of each part, and size information indicating the size of each part.
  • the sequence information acquisition unit 212 may appropriately perform correction processing such as noise reduction and edge enhancement on the moving image acquired from the camera 241.
  • the sequence information acquisition unit 212 transmits the extracted face information to the annotation unit 214.
  • the annotation unit 214 detects the state of the target person based on the face information extracted by the series information acquisition unit 212. For example, it detects the state of at least one part of the subject's face from among the subject's line of sight, pupil state, number of blinks, eyebrow movement, cheek movement, eyelid movement, lip movement, and jaw movement.
  • the method of detecting the line of sight of the subject is not particularly limited; for example, a point light source (not shown) may be provided in the terminal device 200, and the corneal reflection image of the light from the point light source may be photographed by the camera 241 for a predetermined time to detect how the line of sight of the subject moves.
  • the type of the point light source is not particularly limited, and examples thereof include visible light and infrared light. For example, by using an infrared LED, the line of sight can be detected without causing discomfort to the subject. In detecting the line of sight, if the line of sight does not move for a predetermined time or longer, it can be judged that the subject is gazing at the same place.
  • the method of detecting the state of the pupil is not particularly limited, and examples thereof include a method of detecting a circular pupil from the image of the eye by using the Hough transform.
  • humans tend to open their pupils when they are concentrating, so the degree of concentration of a subject can be evaluated by detecting the size of the pupil. For example, if the pupil size is detected for a predetermined time and the pupil is enlarged within that time, it can be said that the subject is highly likely to be gazing at a certain object during that time.
  • a threshold value may be set for the pupil size and evaluated as "open" when the pupil size is equal to or larger than the threshold value and as "closed" when the pupil size is less than the threshold value.
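  • the threshold-based evaluation of pupil size described above can be sketched as follows. The 5.0 mm threshold is an assumed value for illustration; the text does not specify one.

```python
def pupil_state(diameter_mm, threshold_mm=5.0):
    """Evaluate the pupil as "open" when its size is at or above the
    threshold and "closed" when it is below the threshold."""
    return "open" if diameter_mm >= threshold_mm else "closed"

states = [pupil_state(d) for d in (6.2, 4.1, 5.0)]
```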
  • the method for detecting the number of blinks is not particularly limited; examples thereof include a method of irradiating the eye with infrared light and detecting the difference in the amount of reflected infrared light between when the eyes are open and when they are closed.
  • the degree of concentration of a subject can be evaluated by detecting the number of blinks. For example, if the number of blinks is detected for a predetermined time and the blinks are performed at stable intervals within a predetermined time, it can be said that there is a high possibility that the subject is gazing at a certain subject.
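  • the "stable intervals" criterion for blinks can be sketched as a check that every blink interval stays close to the mean interval. The 20% tolerance is an assumption for illustration, not a value from the text.

```python
def blinks_are_stable(blink_times, tolerance=0.2):
    """Return True when consecutive blink intervals all stay within
    `tolerance` (as a ratio) of the mean interval."""
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if not intervals:
        return True  # fewer than two blinks: nothing to compare
    mean = sum(intervals) / len(intervals)
    return all(abs(iv - mean) <= tolerance * mean for iv in intervals)

stable = blinks_are_stable([0.0, 3.0, 6.1, 9.0])    # nearly regular blinks
unstable = blinks_are_stable([0.0, 1.0, 6.0, 6.5])  # irregular blinks
```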
  • the annotation unit 214 detects at least one of the subject's line of sight, pupil state, number of blinks, eyebrow movement, eyelid movement, cheek movement, nose movement, lip movement, and jaw movement, but it is preferable to combine several of these. By combining the detection methods in this way, the annotation unit 214 can suitably evaluate the degree of concentration of the target person when visually recognizing a certain object.
  • for example, the annotation unit 214 may detect the state of each part of the face, such as eyebrow movements (e.g., lifting the inside or outside of the eyebrows), eyelid movements (e.g., raising the upper eyelids or tensing the eyelids), nose movements (e.g., wrinkling the nose), lip movements (e.g., lifting the upper lip or pursing the lips), cheek movements (e.g., lifting the cheeks), and jaw movements (e.g., lowering the jaw). The states of a plurality of parts of the face may be combined; in that case, the degree of concentration of the subject can be determined more preferably.
  • the pulse data of the subject may be further referred to to determine the degree of concentration of the subject.
  • the annotation unit 214 evaluates the determined degree of concentration of the target person in five stages, for example 1 to 5, and adds it to the series information of the target person as an annotation.
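  • the five-stage evaluation can be sketched as follows. The raw score range [0, 1] and the equal-width bins are assumptions; the text only states that the degree of concentration is evaluated in five stages such as 1 to 5 and attached as an annotation.

```python
def concentration_grade(score):
    """Map a raw concentration score in [0, 1] to the grades 1-5."""
    score = min(max(score, 0.0), 1.0)
    return min(int(score * 5) + 1, 5)

def annotate(series, scores):
    """Attach the grade to each element of the series information."""
    return [dict(element, concentration=concentration_grade(s))
            for element, s in zip(series, scores)]

series = [{"time": "00:05", "gaze": "R1"}, {"time": "00:12", "gaze": "R2"}]
annotated = annotate(series, [0.95, 0.30])
```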
  • FIG. 2 is a sequence diagram illustrating an operation example of the information processing system 1 according to the present embodiment. The process of generating presentation information in the information processing system 1 of the present embodiment will be described with reference to FIG.
  • In step S102, the annotation unit 214 generates the annotated series information by adding an annotation to each element included in the series information about the target person acquired by the series information acquisition unit 212.
  • the communication unit 220 transmits the generated annotated sequence information to the server 100.
  • In step S104, the series information acquisition unit 112 of the server 100 acquires the annotated series information.
  • Step S106 the importance determination unit 114 refers to the annotated series information about the target person and determines the importance of each element included in the annotated series information of the target person.
  • step S108 the discrimination integration unit 116 refers to the discrimination results of the importance of the plurality of target persons A, B, C ..., and generates the discrimination integration information.
  • step S110 the presentation information generation unit 118 generates the presentation information by referring to at least one of the discrimination result by the importance determination unit 114 and the discrimination integrated information.
  • the process of step S108 does not necessarily have to be executed; the process of step S110 may be executed directly after the process of step S106.
  • the communication unit 120 transmits the generated presentation information to each learner terminal device 200.
  • Step S112 the presentation information acquisition unit 216 of each learner terminal device 200 acquires the generated presentation information.
  • step S114 the presentation unit (display unit) 244 presents the acquired presentation information.
  • the annotation unit 214 may update the discrimination logic used when annotating the series information by referring to the information included in the presentation information.
  • Step S116 the terminal device 200 (or the instructor terminal device 1000) acquires feedback information.
  • the feedback information is generated as a result of the subject confirming the presentation information together with the instructor and confirming / correcting the determined importance.
  • the communication unit 220 of the terminal device 200 transmits feedback information to the server 100.
  • Step S118 the importance determination unit 114 or the determination integration unit 116 of the server 100 updates each determination logic with reference to the feedback information from the learner terminal device 200. For example, the importance determination unit 114 updates the importance determination logic based on the feedback information including the confirmed / corrected importance from the terminal device 200.
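  • one minimal way to realize the step-S118 update is to treat the importance determination logic as a score threshold and nudge it using the confirmed/corrected labels in the feedback information. The rule below is an assumed simplification for illustration, not the method of the embodiment.

```python
def update_threshold(threshold, feedback, step=0.05):
    """Adjust an importance threshold from feedback pairs of
    (score produced by the logic, importance confirmed by the user)."""
    for score, confirmed_important in feedback:
        predicted_important = score >= threshold
        if predicted_important and not confirmed_important:
            threshold += step  # flagged as important, but corrected down
        elif not predicted_important and confirmed_important:
            threshold -= step  # missed, but confirmed as important
    return threshold

new_threshold = update_threshold(0.5, [(0.6, False), (0.4, True), (0.7, True)])
```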
  • Step S120 When the series information acquisition unit 212 of the terminal device 200 acquires the series information again, the annotation unit 214 adds annotations to the series information to generate the annotated series information.
  • the processes of steps S120 to S136 are the same as those of steps S102 to S118. In this way, the processes of steps S102 to S118 are repeated.
  • the server 100 updates each discrimination logic every time the series information is acquired from the target person. As the series information from the target person increases, each discrimination logic is improved, and each discrimination can be performed more preferably.
  • the annotation processing, importance assignment processing, presentation information generation processing, and the like by the information processing system 1 according to the present embodiment are not limited to the above examples. Hereinafter, specific examples of the annotation processing, importance assignment processing, and presentation information generation processing by the information processing system 1 according to the present embodiment will be described.
  • the series information in this example is information that records the visual target of the subject during the mock test.
  • FIG. 3 is a diagram showing an example of a test question to be recorded.
  • regions R1 to R3 are pre-defined regions for distinguishing each range.
  • the marker 301 is a marker for identifying the coordinates indicating the tip of the line of sight of the subject as the coordinates on the question sheet of the mock test.
  • the annotation unit 214 gives information to the series information indicating which part of the test question the subject was visually recognizing during the recording of the series information. For example, the annotation unit 214 adds information or the like indicating that the target person has visually recognized the area R1 to the series information at time 00:05.
  • FIG. 4 is a diagram showing an example of presentation information. FIG. 4(A) is a time-series diagram showing the movement of the line of sight that serves as a norm when solving the target mock test. FIG. 4(B) is a time-series diagram showing how the line of sight of the subject who solved the mock test moves. FIG. 4(C) is a diagram showing the importance at each time point given to the series information by the importance determination unit 114. In each figure of FIG. 4, the horizontal axis represents time. The vertical axis of FIG. 4(C) represents the degree of importance.
  • the importance determination unit 114 may set a high importance at a place where the difference between the information shown in FIG. 4(A) and the information shown in FIG. 4(B) is large or small, or may set a high importance for the corresponding part of the series information when the ratio of time the target person spent visually recognizing the area that is the key to the answer is high.
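  • the deviation between FIG. 4(A) and FIG. 4(B) can be sketched as a per-time-step comparison of gaze regions. The 1/0 scoring and the region labels are assumed simplifications of the graded importance shown in FIG. 4(C).

```python
def importance_from_deviation(norm_gaze, subject_gaze):
    """Mark time steps where the subject's gaze region differs from
    the normative gaze region with a high importance."""
    return [1 if n != s else 0 for n, s in zip(norm_gaze, subject_gaze)]

norm    = ["R1", "R1", "R2", "R3", "R3"]  # normative gaze per time step
subject = ["R1", "R2", "R2", "R2", "R3"]  # subject's gaze per time step
importance = importance_from_deviation(norm, subject)
```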
  • FIG. 4(C) may include, as part of the discrimination integrated information generated by the discrimination integration unit 116, information indicating the importance set in the series information corresponding to other users, so that users can compare.
  • FIG. 5 is a diagram showing an example of presentation information displayed on the display unit 244 of the learner terminal device 200 (or the display unit 1044 of the instructor terminal device 1000).
  • the objects 311, 312 and 313 correspond to FIGS. 4 (A), 4 (B) and 4 (C), respectively.
  • the screen 315 is a playback screen of a video recording the visual target of the target person. Further, the marker 301 may not be displayed on the screen.
  • the object 317 and the object 314 in the seek bar 316 indicate which part of the video recording the visual target of the target person is being reproduced on the screen 315. Further, the position from which the video is reproduced may be changed by sliding the object 314 or the object 317 left and right.
  • the object 318 is an operation panel for controlling the reproduction of the video.
  • the part of the video that is reproduced may be changed by selecting a part of the text on the screen 315. That is, when a part of the sentence is selected, the reproduction of the video may start from the point where the target person visually recognized that part of the sentence.
  • the display unit 244 may display the screen illustrated in FIG. 5 including a pie chart or the like showing the ratio of which part of the examination question the subject was looking at. The same applies to a pie chart or the like for the above-mentioned normative line of sight.
  • the operation reception unit 243 accepts input of comments and the like corresponding to each part of the video.
  • the input comment or the like is transmitted to the server 100 as feedback information via the communication unit 220.
  • the series information in this example is information that records the visual target of the target person who learns the customer service work in a restaurant or the like.
  • FIG. 6 is a diagram showing an example of a visual target of the target person during learning of customer service work.
  • the image shown in FIG. 6 shows the inside of the restaurant as seen by a learner who is a clerk, and the image shows a customer, one's own hand, a clerk other than oneself, a table, and the like.
  • FIG. 7 is a table showing an example of data on the learner's visual object, utterance (utterance content and voice tone), movement, and concentration (blinking, pupil state, line-of-sight dwell time, and facial expression), and shows an example of the annotations that the annotation unit 214 according to this example has added to the series information at each time point.
  • the annotation unit 214 assigns a visual object, an utterance, an action, and a degree of concentration as annotations at each time point.
  • An event is a set of decisions (utterances, actions, etc.) made within a predetermined time from the timing when a learner or a veteran customer service person visually recognizes an object.
  • an event includes timing, visual object, utterance, action, and concentration, and each event can be identified by an ID.
  • in determining the degree of concentration of the learner from facial expressions, the annotation unit 214 may use the state of each part of the face, such as eyebrow movements (e.g., lifting the inside or outside of the eyebrows), eyelid movements (e.g., raising the upper eyelids or tensing the eyelids), nose movements (e.g., wrinkling the nose), lip movements (e.g., lifting the upper lip or pursing the lips), cheek movements (e.g., lifting the cheeks), and jaw movements (e.g., lowering the jaw). The states of a plurality of parts of the face may be combined. For example, when the eyebrows move upward, it can be judged that the learner is concentrating because he or she is paying more attention to the visual object. Further, for example, when the cheek moves upward while the learner is visually recognizing a person's face, it can be judged that the learner is concentrating on the visual target while making a facial expression toward the other party.
  • FIG. 8 is a table showing an example of data of veteran customer servicers stored in advance in the storage unit 130.
  • FIG. 8 shows, as an example of the data of a veteran customer service person, the timing at which an object is visually recognized, the visual object, the utterance content, the voice tone, the movement, and the degree of concentration.
  • FIG. 9 shows information on the "deviation" between the learner data shown in FIG. 7 and the veteran customer service data shown in FIG. 8 (quantitative information indicated in the table by "+/-" values of learner minus veteran), or information on whether or not the learner data matches the data of the veteran customer service person (qualitative information indicated by "◯/×" in the table). As shown in FIG. 9, when the learner's data and the veteran customer service data do not match, the veteran customer service data ("correct answer" in the table) may be included.
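  • the FIG. 9-style comparison can be sketched as follows: numeric fields yield the quantitative "+/-" deviation (learner minus veteran), while other fields yield a qualitative match mark and, on a mismatch, the veteran's value as the "correct answer". The field names are hypothetical.

```python
def compare_events(learner, veteran):
    """Build a FIG. 9-style deviation row from one learner event and
    the corresponding veteran event."""
    row = {}
    for key in learner:
        lv, vv = learner[key], veteran[key]
        if isinstance(lv, (int, float)) and isinstance(vv, (int, float)):
            row[key] = lv - vv                   # quantitative "+/-" value
        else:
            row[key] = "o" if lv == vv else "x"  # qualitative match mark
            if lv != vv:
                row[key + "_correct"] = vv       # veteran's "correct answer"
    return row

learner = {"timing_s": 12, "utterance": ""}
veteran = {"timing_s": 5, "utterance": "I will confirm your order"}
deviation = compare_events(learner, veteran)
```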
  • the importance determination unit 114 determines the importance of each element included in the annotated series information of the target person with reference to the information shown in FIGS. 7 to 9. For example, the information corresponding to the event ID "4" in FIG. 9 shows a place where the veteran customer service person said "I will confirm your order" but the target person did not make such an utterance. The importance determination unit 114 may set a higher importance than for other locations at locations in the series information where the difference in operation between the veteran customer service person and the target person is large.
  • for example, the information corresponding to the event ID "3" indicates that there is a difference of as much as 7 seconds between the veteran customer service person and the target person in the timing of performing the corresponding operation.
  • the importance determination unit 114 may set a higher importance than for other locations at locations in the series information where the difference in the timing of the operation between the veteran customer service person and the target person is large. Further, for example, the importance determination unit 114 may set a higher importance than for other locations at locations where the concentration of the target person is lower than a predetermined threshold value, even if the difference from the concentration of the veteran customer service person is relatively small.
  • the presentation information generation unit 118 may generate presentation information that includes, in the screen 315 of FIG. 5, text indicating that there is a difference in operation between the veteran customer service person and the target person at the relevant timing, together with the content of the difference.
  • the series information in this example is video data that captures the movement of the assembler in FA (factory automation), image data taken by a wearable camera attached to the assembler, assembly sound, and information that records a visual object.
  • the assembler may be a plurality of assemblers who assemble the same product, or may be an assembler who assembles the same product on different lines.
  • FIG. 10 is a table showing an example of data on the target person's visual object, movement, concentration (blinking, line-of-sight dwell time, and pupil state), and assembly sound, and shows an example of the annotations that the annotation unit 214 according to this example has added to the series information at each time point.
  • the annotation unit 214 assigns a visual object, an utterance, an action, a degree of concentration, and an assembly sound as annotations at each time point.
  • FIG. 11 is a table showing an example of data of a veteran assembler stored in advance in the storage unit 130.
  • FIG. 11 shows, as an example of data of a veteran assembler, the timing of visually recognizing an object, the object to be visually recognized, the movement, the degree of concentration, and the sound of assembly.
  • the data of the veteran assembler (“correct answer” in the table) may be included.
  • the importance determination unit 114 determines the importance of each element included in the annotated series information of the target person with reference to the information shown in FIGS. 10 to 12. For example, the information corresponding to the event ID "3" in FIG. 10 shows that the veteran assembler picks up the end of part B with his left hand, while the subject picks up the center of part B with his right hand.
  • the importance determination unit 114 may set a higher importance than the other locations in the series information in which the difference in operation between the veteran assembler and the target person is large.
  • further, the information corresponding to the event ID "7" indicates that there is a difference of 18 seconds between the veteran assembler and the target person in the process of assembling part B to part A. That is, the time required for the same assembly operation differs by as much as 18 seconds between the veteran assembler and the subject.
  • the importance determination unit 114 may set a higher importance than for other locations at locations in the series information where the difference in the time required for the same operation is large. Further, for example, the importance determination unit 114 may set a higher importance than for other locations at locations where the concentration of the subject is lower than a predetermined threshold value, even if the difference from the concentration of the veteran assembler is relatively small. Alternatively, the assembler may specify the importance of each event.
  • the presentation information generation unit may integrate the above importance determination results to generate, for example, the following presentation information.
  • presentation information that presents points to be improved and points to be noted in each process may be generated for the target person. Based on such presentation information, a video work manual for new assemblers may be created in which highly important processes, such as difficult processes and processes in which mistakes are easily made, are reproduced in close-up or in slow motion.
  • the presentation information generation unit 118 may generate presentation information that includes, in the screen 315 of FIG. 5, text indicating that there is a difference in operation between the veteran assembler and the target person at the relevant timing, together with the content of the difference.
  • the series information in this example is, for example, video data that captures the movement of the cook in the cooking scene, data captured by the wearable camera attached to the cook, cooking sound, and information that records the visual object.
  • the cook may be one of a plurality of cooks who cook the same menu, and may be a person who wants to learn cooking personally at home, or an employee who cooks in a kitchen in a restaurant or supermarket.
  • FIG. 13 is a table showing an example of data on the subject's visual object, movement, concentration (blinking, line-of-sight dwell time, and pupil state), and cooking sound, and shows an example of the annotations that the annotation unit 214 according to this example has added to the series information at each time point.
  • the annotation unit 214 assigns a visual object, an utterance, an action, and a degree of concentration as annotations at each time point as an example.
  • FIG. 14 is a table showing an example of data of a veteran cook stored in advance in the storage unit 130.
  • FIG. 14 shows, as an example of data of a veteran cook, the timing of visually recognizing an object, the object to be visually recognized, the movement, the degree of concentration, and the sound of cooking.
  • the data of the veteran cook (“correct answer” in the table) may be included.
  • the importance determination unit 114 determines the importance of each element included in the annotated series information of the target person with reference to the information shown in FIGS. 13 to 15. For example, the information corresponding to the event ID "3" in FIG. 13 shows that, when holding a kitchen knife, the veteran cook places his index finger on the spine of the knife and grips it with his middle finger and thumb, while the subject holds the knife with his thumb and index finger.
  • the importance determination unit 114 may set a higher importance than the other locations in the series information in which the difference in operation between the veteran cook and the target person is large.
  • further, the information corresponding to the event ID "5" indicates that there is a difference of 8 seconds between the veteran cook and the target person in the scene of cutting the ingredients. That is, the time required for the same cooking operation differs by as much as 8 seconds between the veteran cook and the subject.
  • the importance determination unit 114 may set a higher importance than for other locations at locations in the series information where the difference in the time required for the same operation is large. Further, for example, the importance determination unit 114 may set a higher importance than for other locations at locations where the concentration of the subject is lower than a predetermined threshold value, even if the difference from the concentration of the veteran cook is relatively small. Alternatively, the cook may specify a high importance for difficult processes or processes that affect taste and quality.
  • the presentation information generation unit may integrate the above-mentioned importance determination results and present the following presentation information, for example.
  • presentation information that presents points to be improved and points to be noted in each cooking scene may be generated.
  • a video cooking manual may be created in which highly important processes, such as difficult processes and processes in which mistakes are easily made, are reproduced in close-up or in slow motion.
  • when the target person is an employee who cooks in a kitchen in a restaurant or supermarket, presentation information may be generated that presents the time-series changes in the performance of each cook and the places where multiple cooks make mistakes, so that a manager can manage the cooks.
  • the presentation information generation unit 118 may generate presentation information that includes, in the screen 315 of FIG. 5, text indicating that there is a difference in operation between the veteran cook and the target person at the relevant timing, together with the content of the difference.
  • the target person in this example includes a player in sports.
  • the annotation may be added by the coach, may be added by the annotation unit 214, or may be a combination thereof.
  • one coach may annotate a plurality of players, or a plurality of coaches may annotate one player. Further, a plurality of coaches may annotate a plurality of players.
  • the configuration may include a plurality of separately learned annotation units 214, and a plurality of annotation units may add annotations to one player. Further, a plurality of annotation units may add annotations to the plurality of players.
  • the series information in this example is, for example, player image, voice, and body sensing data.
  • the player may be a player of any sport, but here, for example, a baseball player (batter) will be described as an example.
  • the player may be a professional athlete, a student or a member of society.
  • FIG. 16 is a table showing, as an example of the data of player A (a player with a low batting average), the visual object, movement, and degree of concentration (blinking, line-of-sight retention time, and pupil state), together with an example of the annotations that the annotation unit 214 according to this example gives to the series information at each time point.
  • the annotation unit 214 assigns a visual object, an utterance, an action, and a degree of concentration as annotations at each time point.
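As a minimal sketch of how an element of the annotated series information could be represented (the class and field names are illustrative assumptions, not taken from the patent), each time point carries the annotations the annotation unit 214 assigns:

```python
# Hypothetical data structure for one element of the annotated series
# information: a time point plus the annotations assigned to it
# (visual object, utterance, action, degree of concentration).

from dataclasses import dataclass

@dataclass
class AnnotatedElement:
    time_point: float        # seconds from the start of the series
    visual_object: str = ""  # what the player is looking at
    utterance: str = ""      # what the player says
    action: str = ""         # observed movement
    concentration: float = 0.0  # 0.0 (low) to 1.0 (high)

series = [
    AnnotatedElement(0.0, visual_object="pitcher", action="hold bat short",
                     concentration=0.9),
    AnnotatedElement(1.2, visual_object="ball", action="swing",
                     concentration=0.7),
]
print(len(series), series[0].visual_object)
```

A list of such elements is what the importance determination unit would iterate over when scoring each element.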
  • FIG. 17 shows, as an example of the data of player B (a player with a high batting average), the timing at which each object is visually recognized, the visual object, the movement, the degree of concentration, and the sound of play.
  • FIG. 18 shows information on the "deviation" between the data of player A shown in FIG. 16 and the data of player B shown in FIG. 17 (quantitative information indicated by the value of player A minus player B, "+/-" in the table), or information on whether or not the data of player A matches the data of player B (qualitative information indicated by "○/×" in the table). As shown in FIG. 18, when the data of player A and the data of player B do not match, the data of player B ("correct answer" in the table) may be included.
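A FIG. 18-style "deviation" table can be sketched as follows: for numeric data it records the quantitative difference (player A minus player B), and for categorical data it records whether A matches B, including B's value as the "correct answer" on a mismatch. The field names and the "o"/"x" markers stand in for the table symbols and are illustrative assumptions.

```python
# Hypothetical construction of a deviation table between two players' data.

def build_deviation_table(player_a, player_b):
    table = {}
    for key, a_val in player_a.items():
        b_val = player_b.get(key)
        if isinstance(a_val, (int, float)) and isinstance(b_val, (int, float)):
            table[key] = {"deviation": a_val - b_val}   # quantitative "+/-"
        else:
            match = a_val == b_val
            entry = {"match": "o" if match else "x"}    # qualitative match flag
            if not match:
                entry["correct_answer"] = b_val         # player B's data
            table[key] = entry
    return table

player_a = {"swing_speed": 135, "grip": "long"}
player_b = {"swing_speed": 148, "grip": "short"}
print(build_deviation_table(player_a, player_b))
```

The importance determination unit could then scan this table for large deviations or mismatches when scoring elements.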
  • the importance determination unit 114 determines the importance of each element included in the annotated series information of each player with reference to the information shown in FIGS. 16 to 18. For example, the information corresponding to the event ID “1” in FIG. 16 indicates that the player B holds the bat short, while the player A holds the bat long.
  • the importance determination unit 114 may set a higher importance than the other locations for the locations in the series information where the difference in operation between the players is large.
  • the information corresponding to the event ID "3" indicates that the swing speed of player A is 135 km/h, while the swing speed of player B is 148 km/h, a difference of 13 km/h.
  • the importance determination unit 114 may set a higher importance than the other locations for the locations in the series information in which the difference in speed at which the same operation is performed is large. Further, for example, the importance determination unit 114 may set a higher importance than other locations at a location where the player's concentration is lower than a predetermined threshold value. Alternatively, a high degree of importance may be specified for an operation or the like that makes a large difference depending on the player.
  • the presentation information generation unit may integrate the above-mentioned importance determination results, for example by integrating the information annotated by a plurality of coaches for each player, and generate presentation information that presents, for each player, points to improve and the parts of play (video scenes) that should be improved.
  • the information annotated by one or more coaches for a plurality of players may be integrated to generate presentation information that presents, to those coaches and other coaches, points for improving their instruction. Further, based on the above-mentioned presentation information, a teaching manual may be created in which scenes important for successful batting are reproduced in close-up or slow motion.
  • the series information may be, for example, plays, operations, and the like by a plurality of players, such as a formation in soccer.
  • the input data may include information for the spectators of the match.
  • for example, moving image data capturing the state of the spectators, line-of-sight information of the spectators, and voices emitted by the spectators may be used as input data.
  • the degree of attention and excitement of the spectators may be added as annotations for each scene during the match.
  • the presentation information generation unit may generate presentation information for players and coaches that presents scenes that attracted a lot of attention from spectators.
  • an example is an electronic sport (e-sports) in which a character operated by a player existing in the real world competes in a virtual space.
  • the above-mentioned player may be a plurality of players who play the same e-sports.
  • the series information in this example includes, as an example, an input from a player to a personal computer, a game operation unit, or the like. Further, the series information in this example may include information regarding the movement of the character in the virtual space generated based on these inputs. In addition, the series information may include player image, voice, and body sensing data. Further, the series information in this example may include information for spectators of e-sports as in the case of the above-mentioned sports. Information on the degree of attention and excitement of the spectator may be included.
  • the annotation unit 214 refers to the series information and, as an example, assigns the movement of the character, the degree of attention of the spectator, and the degree of excitement as annotations at each time point.
  • the importance determination unit 114 refers to the annotated series information of a plurality of players, and determines the importance of each element included in the annotated series information. For example, the importance determination unit 114 may set a higher importance than other points for a place in the series information in which the difference in operation between characters is large.
  • the importance determination unit 114 may set a higher importance than other locations at a location where the player's concentration is lower than a predetermined threshold value.
  • a high degree of importance may be specified for an operation or the like that makes a large difference depending on the player.
  • the presentation information generation unit may generate presentation information for the player to present a scene that attracted a lot of attention from the e-sports spectator.
  • the presentation information generation unit integrates the above-mentioned importance determination results to generate presentation information for each player, for example, presenting points to be improved and parts of play to be improved for each player. You may.
  • the series information in this example is video data capturing the operations of an office worker during office work, data captured by a wearable camera attached to the office worker, work sounds, and information recording visual objects. Further, the series information may include the operation history of a terminal device such as a keyboard or mouse, the usage history of applications, and the like.
  • the annotation unit 214 adds information on a visual object, an utterance, an action, a degree of concentration, a work sound, an operation history of a terminal device, an application usage history, and the like as annotations at each time point.
  • the importance determination unit 114 refers, as an example, to the annotated series information of the target office worker (target person) and to reference annotated series information, and determines the importance of each element included in the annotated series information of the target person.
  • the reference annotated series information may include annotated series information of another office worker such as a veteran.
  • the importance determination unit 114 may set a higher importance when the time the target person requires for text input is longer than that of other office workers. Further, the importance determination unit 114 may set a higher importance when the time the target person requires for spreadsheet work is longer than that of other office workers.
  • the presentation information generation unit may integrate the above-mentioned importance determination results and present the following presentation information, for example.
  • presentation information that presents, to the target person, points to be improved and points to be noted in each work situation may be generated. Based on the above-mentioned presentation information, a work manual may be created using close-up or slow-motion video of highly important work, such as difficult work or work in which mistakes are easily made.
  • for work in which many office workers make mistakes, presentation information may be generated that presents points for improving the work itself, such as changing the work order, separating the work into multiple tasks, or integrating multiple tasks.
  • the series information in this example is information that records video data capturing the movements of a telephone operator (target person), image data captured by a wearable camera attached to the telephone operator, voice data of communication with the telephone partner, work sounds, and visual objects. Further, the series information may include the operation history of a terminal device such as a keyboard or mouse.
  • the annotation unit 214 adds information on the target person's visual target, utterance, movement, concentration, work sound, terminal device operation history, application usage history, etc. as annotations at each time point. Further, the annotation unit 214 may estimate the tension level of the target person at each time point and add an index indicating the tension level as an annotation.
  • a known algorithm can be used as the algorithm for estimating the degree of tension.
  • the importance determination unit 114 determines the importance of each element included in the annotated series information of the target person by referring to the annotated series information of the target person and to reference annotated series information.
  • the reference annotated series information may include annotated series information of another telephone operator such as a veteran.
  • the importance determination unit 114 may set a higher importance when the tension of the subject is higher than that of other telephone operators. Further, the importance determination unit 114 may set a higher importance when the tension with respect to a specific telephone partner is higher than the tension with respect to another telephone partner.
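The tension-based rule above can be sketched as follows: an element receives a higher importance when the operator's tension for a call exceeds the average tension of reference (veteran) operators, or when tension toward a specific partner exceeds the operator's own average across partners. The function name, margin, and tension scale are assumptions for illustration.

```python
# Illustrative tension-based importance scoring for telephone responses.

def tension_importance(calls, reference_mean, margin=0.1):
    """calls: list of (partner, tension) pairs, tension in 0.0-1.0."""
    own_mean = sum(t for _, t in calls) / len(calls)
    result = {}
    for partner, tension in calls:
        score = 1.0
        if tension > reference_mean + margin:   # higher than other operators
            score += 1.0
        if tension > own_mean + margin:         # higher than toward other partners
            score += 1.0
        result[partner] = score
    return result

calls = [("partner_x", 0.9), ("partner_y", 0.4), ("partner_z", 0.5)]
print(tension_importance(calls, reference_mean=0.5))
```

Here "partner_x" would be flagged on both criteria, marking those calls as candidates for the work manual described below.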
  • the presentation information generation unit may integrate the above-mentioned importance determination results and present the following presentation information, for example.
  • presentation information that presents points to be improved and points to be noted in each telephone response situation may be generated.
  • a work manual may be created using close-up or slow-motion video of highly important work, such as difficult telephone responses and telephone responses in which mistakes are easily made.
  • the series information in this example is information that records video data capturing the movements of a student (target person) at a driving school, image data from a wearable camera attached to the target person, voice data of communication with an instructor, driving operation sounds, and visual objects. Further, the series information may include sensing data of an operation unit such as the steering wheel or the brake.
  • the annotation unit 214 adds information on the visual object, the operation, the degree of concentration, the utterance, the driving operation sound, the operation history of the operation unit, and the like as annotations at each time point. Further, the annotation unit 214 may estimate the tension level of the target person at each time point and add an index indicating the tension level as an annotation.
  • a known algorithm can be used as the algorithm for estimating the degree of tension.
  • the importance determination unit 114 determines the importance of each element included in the annotated series information of the target person by referring to the annotated series information of the target person and to reference annotated series information.
  • the reference annotated series information may include annotated series information of a model driver such as an instructor.
  • the importance determination unit 114 may set a higher importance when, in turning a certain curve, the difference between the timing at which the target person starts turning the steering wheel and the timing at which the model driver starts turning the steering wheel is larger than a predetermined value. In addition, the importance determination unit 114 may set a higher importance when the tension on a specific course is higher than the tension on other courses.
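The steering-timing rule above reduces to a single comparison, sketched here with assumed units (seconds from entering the curve) and an assumed predetermined value:

```python
# Illustrative sketch: compare when the student and the model driver start
# turning the steering wheel on the same curve; a difference larger than
# the predetermined value raises the importance of that location.

def steering_importance(target_start, model_start, predetermined=0.5):
    """Start times in seconds from entering the curve."""
    return 2.0 if abs(target_start - model_start) > predetermined else 1.0

print(steering_importance(target_start=1.4, model_start=0.6))  # 0.8 s late
```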
  • the presentation information generation unit may integrate the above-mentioned importance determination results and present the following presentation information, for example.
  • presentation information that presents points to be improved and points to be noted in each driving scene may be generated.
  • a driving manual may be created using close-up or slow-motion video of highly important driving, such as difficult driving scenes.
  • FIG. 10 is a block diagram schematically illustrating a configuration example of the information processing system 1 according to the second embodiment.
  • the information processing system 1 of the second embodiment is different from the information processing system 1 of the first embodiment in that the importance determination unit 215 is not provided in the server 100 but is provided in the learner terminal device 200.
  • Other configurations are the same.
  • FIG. 11 is a sequence diagram illustrating an operation example of the information processing system 1 according to the second embodiment. The process of generating the presentation information in the information processing system 1 of the second embodiment will be described with reference to FIG.
  • In step S202, the annotation unit 214 generates the annotated series information by adding an annotation to each element included in the series information about the target person acquired by the series information acquisition unit 212.
  • In step S206, the importance determination unit 215 refers to the annotated series information about the target person generated by the annotation unit 214 and determines the importance of each element included in the annotated series information of the target person.
  • the communication unit 220 of the learner terminal device 200 transmits the importance determination result to the server 100.
  • In step S207, the series information acquisition unit 112 of the server 100 receives the importance determination result.
  • In step S208, the discrimination integration unit 116 of the server 100 refers to the importance determination results for the plurality of target persons A, B, C, ..., and generates the discrimination integration information.
  • In step S210, the presentation information generation unit 118 generates presentation information by referring to at least one of the determination result by the importance determination unit 215 and the discrimination integration information.
  • when the presentation information generation unit 118 generates the presentation information by referring only to the determination result by the importance determination unit 215, the process of step S208 does not necessarily have to be executed, and the process of step S210 may be executed following step S207.
  • the communication unit 120 of the server 100 transmits the generated presentation information to each learner terminal device 200.
  • In step S212, the presentation information acquisition unit 216 of each learner terminal device 200 acquires the generated presentation information.
  • In step S214, the presentation unit (display unit) 244 presents the acquired presentation information.
  • the annotation unit 214 may update the discrimination logic used when annotating the series information by referring to the information included in the presentation information.
  • In step S216, the terminal device 200 acquires feedback information.
  • the feedback information is generated as a result of the subject confirming the presentation information together with the instructor and confirming / correcting the determined importance.
  • the feedback information is transmitted from the terminal device 200 to the server 100.
  • In step S218, the importance determination unit 215 of the learner terminal device 200 updates the determination logic based on the feedback information. For example, the importance determination unit 215 updates the importance determination logic based on feedback information including the confirmed or corrected importance. Further, the discrimination integration unit 116 of the server 100 also updates the discrimination integration logic based on the feedback information.
  • When the series information acquisition unit 212 of the terminal device 200 acquires the series information again, in step S220 the annotation unit 214 adds annotations to the series information to generate the annotated series information.
  • the steps of steps S220 to S236 are the same as the steps of steps S202 to S218.
  • every time the learner terminal device 200 acquires the series information, the steps from step S202 to step S218 are repeated.
  • the importance determination unit 215 of the learner terminal device 200 updates the importance determination logic every time the sequence information is acquired from the target person. As the series information from the target person increases, the importance determination logic is improved, and the importance can be determined more preferably. The same applies to the discrimination logic used by the discrimination integration unit 116 for the discrimination integration process.
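The feedback loop of the second embodiment can be sketched by representing the importance determination logic as a single threshold that is nudged whenever the confirmed/corrected importance in the feedback disagrees with the determination. This is an illustrative simplification, not the patent's actual update rule.

```python
# Hypothetical feedback-driven update of the importance determination logic.

class ImportanceLogic:
    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold
        self.step = step

    def determine(self, score):
        return "high" if score > self.threshold else "low"

    def update(self, feedback):
        """feedback: list of (score, corrected_label) pairs."""
        for score, corrected in feedback:
            predicted = self.determine(score)
            if predicted != corrected:
                # move the threshold toward producing the corrected label
                self.threshold += self.step if corrected == "low" else -self.step

logic = ImportanceLogic()
# one element was over-flagged, another was under-flagged
logic.update([(0.52, "low"), (0.48, "high")])
print(round(logic.threshold, 2))
```

With repeated feedback on more series information, such a logic converges toward the corrections, matching the described behavior that the determination improves as series information accumulates.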
  • FIG. 12 is a block diagram schematically illustrating a configuration example of the information processing system 1 according to the third embodiment.
  • the information processing system of the third embodiment is different from the information processing system of the first embodiment in that the presentation information generation unit 1013 is not provided in the server 100 but is provided in the instructor terminal device 1000.
  • the other configurations are the same.
  • FIG. 13 is a sequence diagram illustrating an operation example of the information processing system 1 according to the third embodiment. The process of generating the presentation information in the information processing system 1 of the third embodiment will be described with reference to FIG.
  • In step S302, the annotation unit 214 generates the annotated series information by adding an annotation to each element included in the series information about the target person acquired by the series information acquisition unit 212.
  • the communication unit 220 transmits the generated annotated sequence information to the server 100.
  • In step S304, the series information acquisition unit 112 of the server 100 acquires the annotated series information.
  • In step S306, the importance determination unit 114 refers to the annotated series information about the target person and determines the importance of each element included in the annotated series information of the target person.
  • In step S308, the discrimination integration unit 116 refers to the importance determination results for the plurality of target persons A, B, C, ..., and generates the discrimination integration information.
  • the generated discriminant integrated information is transmitted to the instructor terminal device 1000.
  • In step S310, the presentation information generation unit 1013 of the instructor terminal device 1000 generates presentation information by referring to at least one of the determination result by the importance determination unit 114 and the discrimination integration information.
  • when the presentation information generation unit 1013 generates the presentation information by referring only to the determination result by the importance determination unit 114, the process of step S308 does not necessarily have to be executed; instead, the determination result of step S306 may be transmitted to the instructor terminal device 1000, and the process of step S310 may then be executed.
  • the generated presentation information is transmitted to each learner terminal device 200.
  • In step S312, the presentation information acquisition unit 216 of each learner terminal device 200 acquires the generated presentation information.
  • In step S314, the presentation unit (display unit) 244 presents the acquired presentation information.
  • the annotation unit 214 may update the discrimination logic used when annotating the series information by referring to the information included in the presentation information.
  • In step S316, the instructor terminal device 1000 acquires feedback information.
  • the feedback information is generated as a result of the subject confirming the presentation information together with the instructor and confirming / correcting the determined importance.
  • the feedback information is transmitted from the instructor terminal device 1000 to the server 100.
  • In step S318, the importance determination unit 114 and the discrimination integration unit 116 of the server 100 update their respective determination logics based on the feedback information from the instructor terminal device 1000. For example, the importance determination unit 114 updates the importance determination logic based on feedback information including the confirmed or corrected importance from the instructor terminal device 1000.
  • When the series information acquisition unit 212 of the terminal device 200 acquires the series information again, in step S320 the annotation unit 214 adds annotations to the series information to generate the annotated series information.
  • the steps of steps S320 to S336 are the same as the steps of steps S302 to S318.
  • every time the learner terminal device 200 acquires the series information, the steps from step S302 to step S318 are repeated.
  • the server 100 updates each discrimination logic every time the series information is acquired from the target person. As the series information from the target person increases, each discrimination logic is improved, and each discrimination can be performed more preferably.
  • the control blocks of the information processing device 100 (particularly the series information acquisition unit 112, the importance determination unit 114, the discrimination integration unit 116, and the presentation information generation unit 118) and of each terminal device 200 (particularly the series information acquisition unit 212, the annotation unit 214, and the presentation information acquisition unit 216) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
  • when realized by software, the information processing device 100 and each terminal device 200 include a computer that executes instructions of a program, which is software for realizing each function.
  • the computer includes, for example, one or more processors and a computer-readable recording medium that stores the program. Then, in the computer, the processor reads the program from the recording medium and executes it, thereby achieving the object of the present invention.
  • the processor for example, a CPU (Central Processing Unit) can be used.
  • as the recording medium, a "non-transitory tangible medium" such as a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. A RAM (Random Access Memory) or the like into which the program is loaded may further be provided.
  • the program may be supplied to the computer via an arbitrary transmission medium (a communication network, a broadcast wave, or the like) capable of transmitting the program.
  • one aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • the information processing device according to one aspect of the present invention includes: a series information acquisition unit that acquires annotated series information in which an annotation is added to each element included in series information about a target person; an importance determination unit that determines, with reference to the annotated series information about the target person, the importance of each element included in the annotated series information of the target person; a discrimination integration unit that generates discrimination integration information by referring to the determination results by the importance determination unit for a plurality of target persons; and a presentation information generation unit that generates presentation information by referring to at least one of the determination result by the importance determination unit and the discrimination integration information.
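The units summarized above can be sketched as one small pipeline: determine per-element importance for each target person, integrate the determinations across persons, and generate presentation information from either or both. Everything below (low concentration as the importance criterion, intersection as the integration rule) is a schematic assumption, not the patent's actual logic.

```python
# Schematic end-to-end sketch of the claimed units.

def determine(annotated_series, threshold=0.5):
    # importance determination unit: flag elements with low concentration
    return [i for i, e in enumerate(annotated_series)
            if e["concentration"] < threshold]

def integrate(results_per_person):
    # discrimination integration unit: keep indices flagged for every person
    common = set(results_per_person[0])
    for r in results_per_person[1:]:
        common &= set(r)
    return sorted(common)

def generate_presentation(result, integrated):
    # presentation information generation unit: refer to either or both
    return {"individual": result, "common": integrated}

series_a = [{"concentration": 0.9}, {"concentration": 0.3}, {"concentration": 0.2}]
series_b = [{"concentration": 0.4}, {"concentration": 0.8}, {"concentration": 0.1}]
per_person = [determine(series_a), determine(series_b)]
print(generate_presentation(per_person[0], integrate(per_person)))
```

The intersection step corresponds to extracting information common to the determination results of multiple target persons, as described below.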
  • the importance determination unit may update the determination logic with reference to the feedback information from the user.
  • since the discrimination logic in the importance determination unit is updated with reference to the feedback information from the user, the importance of the series information can be determined more accurately each time the importance determination is repeated.
  • the series information may include information regarding the line of sight of the target person.
  • the importance can be more preferably determined by referring to the information regarding the line of sight of the subject.
  • the discrimination integration unit may extract information common to the determination results of the importance determination unit for each of the plurality of target persons and include it in the discrimination integration information.
  • the importance determination unit may determine the degree of concentration regarding the target person as the importance.
  • the importance of the series information can be determined by referring to the degree of concentration regarding the target person, so that the importance can be determined more preferably.
  • the terminal device according to one aspect of the present invention includes: a series information acquisition unit that acquires series information; an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information; a presentation information acquisition unit that acquires presentation information generated by referring to the annotated series information; and a presentation unit that presents the presentation information.
  • the terminal device may further include a discrimination integrated information acquisition unit that acquires discrimination integration information generated by referring to the determination results related to the annotated series information of a plurality of target persons, and the annotation unit may update the annotation logic with reference to at least one of the feedback information from the user and the discrimination integration information.
  • annotation can be performed more suitably by updating the annotation logic with reference to each piece of information.
  • the annotation unit may add information indicating the reliability of the annotation to each element in addition to the annotation.
  • each process that refers to the annotation can be executed more preferably.
  • the information processing system according to one aspect of the present invention includes: an acquisition unit that acquires series information; an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information; an importance determination unit that determines, with reference to the annotated series information about a certain target person, the importance of each element included in the annotated series information of the target person; a discrimination integration unit that generates discrimination integration information by referring to the determination results by the importance determination unit for a plurality of target persons; and a presentation information generation unit that generates presentation information by referring to at least one of the determination result by the importance determination unit and the discrimination integration information.
  • the information processing method according to one aspect of the present invention includes: an annotation adding step of generating annotated series information by adding an annotation to each element included in series information about a target person; an importance determination step of determining, with reference to the annotated series information, the importance of each element included in the annotated series information of the target person; a discrimination integration step of generating discrimination integration information by referring to the determination results in the importance determination step for a plurality of target persons; and a presentation information generation step of generating presentation information by referring to at least one of the determination result in the importance determination step and the discrimination integration information.
  • the information processing program according to one aspect of the present invention is an information processing program for causing a computer to operate as the information processing device according to any one of the above, and causes the computer to function as the series information acquisition unit, the importance determination unit, the discrimination integration unit, and the presentation information generation unit.
  • 1 Information processing system
  100 Information processing device (server)
  110, 210, 1010 Control unit
  112, 212 Series information acquisition unit
  114, 215 Importance determination unit
  116 Discrimination integration unit
  118, 1013 Presentation information generation unit
  120, 220, 1020 Communication unit
  130, 230, 1030 Storage unit
  200 Learner terminal device
  214 Annotation unit
  216, 1014 Presentation information acquisition unit
  241 Camera
  242 Microphone
  243, 1043 Operation reception unit
  244 Display unit (presentation unit)
  1044 Display unit
  245, 1045 Speaker
  1000 Instructor terminal device
  1012 Feedback information acquisition unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a technology capable of appropriately determining and presenting important information among a series of information. An information processing device (1) is provided with: a series information acquisition unit (112) that acquires annotation-added series information in which an annotation is added to each of the elements included in series information relating to a subject; a degree-of-importance determination unit (114) that, with reference to the annotation-added series information relating to the subject, determines the degree of importance of each of the elements included in the annotation-added series information of the subject; a determination integration unit (116) that, with reference to determination results obtained by the degree-of-importance determination unit regarding a plurality of subjects, generates determination integration information; and a presentation information generation unit (118) that, with reference to the determination results obtained by the degree-of-importance determination unit and/or the determination integration information, generates presentation information.

Description

Information processing device, information processing system, information processing method, and information processing program
The present invention relates to an information processing device, an information processing system, an information processing method, and an information processing program.
Systems that support learning and the acquisition of work skills by a target learner are known in the art. For example, Patent Document 1 describes a technique of monitoring a learner's re-reading frequency and gaze dwell, comparing the monitoring results with those of other learners, and presenting the comparison results to the learner and an instructor.
Patent Document 1: Japanese Unexamined Patent Publication No. 2005-338173 (published December 8, 2005)
In recent years, it has also become common to review videos recording learning, work, and other activities in various situations in order to improve that learning or work. With such techniques, it is preferable that important scenes useful for improving learning or work can be automatically identified in the recorded video.
Moreover, not only in learning and work, there is a great need for a technique that identifies and presents important information among sequential information.
In view of the above problems, an object of the present invention is to provide a technique capable of suitably identifying and presenting important information among sequential information.
In order to solve the above problems, an information processing device according to one aspect of the present invention includes: a series information acquisition unit that acquires annotated series information in which an annotation is added to each element included in series information relating to a subject; an importance determination unit that refers to the annotated series information relating to the subject and determines the importance of each element included in the annotated series information of the subject; a discrimination integration unit that refers to the determination results obtained by the importance determination unit for a plurality of subjects and generates discrimination integration information; and a presentation information generation unit that generates presentation information with reference to at least one of the determination results obtained by the importance determination unit and the discrimination integration information.
In order to solve the above problems, a terminal device according to one aspect of the present invention includes: a series information acquisition unit that acquires series information; an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information; a presentation information acquisition unit that acquires presentation information generated with reference to the annotated series information; and a presentation unit that presents the presentation information.
In order to solve the above problems, an information processing system according to one aspect of the present invention includes: an acquisition unit that acquires series information; an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information; an importance determination unit that refers to the annotated series information relating to a given subject and determines the importance of each element included in the annotated series information of that subject; a discrimination integration unit that refers to the determination results obtained by the importance determination unit for a plurality of subjects and generates discrimination integration information; and a presentation information generation unit that generates presentation information with reference to at least one of the determination results obtained by the importance determination unit and the discrimination integration information.
In order to solve the above problems, an information processing method according to one aspect of the present invention includes: an annotation step of generating annotated series information by adding an annotation to each element included in series information relating to a subject; an importance determination step of referring to the annotated series information and determining the importance of each element included in the annotated series information of the subject; a discrimination integration step of referring to the determination results of the importance determination step for a plurality of subjects and generating discrimination integration information; and a presentation information generation step of generating presentation information with reference to at least one of the determination results of the importance determination step and the discrimination integration information.
According to one aspect of the present invention, important information can be suitably identified and presented among sequential information.
  • A block diagram schematically illustrating a configuration example of an information processing system according to an embodiment of the present invention.
  • A sequence diagram illustrating an operation example of an information processing system according to an embodiment of the present invention.
  • A diagram showing an example of a test question to be recorded according to an embodiment of the present invention.
  • A diagram showing an example of presentation information according to an embodiment of the present invention.
  • A diagram showing another example of presentation information according to an embodiment of the present invention.
  • A diagram showing an example of the subject's visual targets during learning of customer-service work according to an embodiment of the present invention.
  • A table showing an example of data on a learner's visual targets and utterances according to an embodiment of the present invention.
  • A table showing an example of data on a veteran customer-service worker according to an embodiment of the present invention.
  • A diagram showing an example of difference information between a learner and a veteran customer-service worker according to an embodiment of the present invention.
  • A table showing an example of data on a learner's visual targets and actions according to an embodiment of the present invention.
  • A table showing an example of data on a veteran assembler according to an embodiment of the present invention.
  • A diagram showing an example of difference information between a learner and a veteran assembler according to an embodiment of the present invention.
  • A table showing another example of data on a learner's visual targets and actions according to an embodiment of the present invention.
  • A table showing an example of data on a veteran cook according to an embodiment of the present invention.
  • A diagram showing an example of difference information between a learner and a veteran cook according to an embodiment of the present invention.
  • A table showing an example of data on the visual targets and actions of player A according to an embodiment of the present invention.
  • A table showing an example of data on player B according to an embodiment of the present invention.
  • A diagram showing an example of difference information between player A and player B according to an embodiment of the present invention.
  • A block diagram schematically illustrating a configuration example of an information processing system according to another embodiment of the present invention.
  • A sequence diagram illustrating an operation example of an information processing system according to another embodiment of the present invention.
  • A block diagram schematically illustrating a configuration example of an information processing system according to still another embodiment of the present invention.
  • A sequence diagram illustrating an operation example of an information processing system according to still another embodiment of the present invention.
Hereinafter, an embodiment according to one aspect of the present invention (hereinafter also referred to as "the present embodiment") will be described with reference to the drawings.
<Embodiment 1>
§1. Application Example
First, an example of a situation in which the present invention is applied will be described with reference to FIG. 1. Here, the case where the subject of the present invention is a learner who takes lectures or mock tests at a cram school or the like is taken as an example; however, this does not limit the present embodiment, and the subject is not particularly limited as long as the subject has some relation to the series information.
FIG. 1 is a block diagram schematically illustrating a configuration example of the information processing system 1 according to an embodiment of the present invention. The information processing system 1 is, for example, an information processing system used in a cram school or the like. As shown in FIG. 1, the information processing system 1 according to the present embodiment includes a server 100, which is an information processing device, learner terminal devices 200A, 200B, and 200C, and an instructor terminal device 1000. The server 100, the learner terminal devices 200A, 200B, and 200C, and the instructor terminal device 1000 are connected via a network and can communicate with one another. The network may be of any type, such as the Internet, a telephone network, or a dedicated network. In the following, the learner terminal devices 200A, 200B, and 200C may be collectively referred to as the learner terminal device 200.
The learner terminal devices 200A, 200B, and 200C according to the present embodiment are, for example, computers assigned to learners (hereinafter also referred to as subjects) A, B, and C at a cram school. Each learner terminal device 200A, 200B, and 200C is equipped with a camera 241, a microphone 242, and the like, and can record video and audio of each subject's behavior and utterances during a lecture or a mock test. The recorded video and audio are an example of series information in the present embodiment. Here, series information means any information in which the order of the elements it contains is meaningful. Series information may include video data, audio data, text data, data indicating changes in numerical values over time, and the like. Each terminal device 200 can also generate annotated series information by adding annotations to the series information and record it in its storage unit 230. Here, an annotation is meta-information attached to each element included in the series information. As an example, an annotation includes meta-information on the subject's state at each time point or during each period while the series information is being recorded. The type of annotation does not limit the present embodiment; as an example, an index indicating the subject's degree of concentration at the time point in question can be used as an annotation.
Each learner terminal device 200A, 200B, and 200C then transmits the annotated series information about its subject to the server 100.
Based on the annotated series information transmitted from each terminal device 200, the server 100 determines the importance of each element included in the series information. It then integrates the importance determination results obtained from the annotated series information transmitted from the terminal devices 200 and generates presentation information. The server 100 stores the generated presentation information in the storage unit 130 and transmits it to each terminal device 200 or 1000.
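The server-side flow just described, per-subject importance determination over annotated elements followed by integration of the results across subjects, can be sketched as follows. This is a minimal illustration, not the patented implementation: using a 1-5 concentration annotation as the importance input, the linear scaling, and the averaging step are all assumptions made for this example.

```python
# Hypothetical sketch of the server-side flow: determine importance
# per subject, then integrate the results across subjects.

def determine_importance(annotated_series):
    """Map each element's annotation (here an assumed 1-5 concentration
    level) to an importance score in [0, 1] via linear scaling."""
    return {elem: (level - 1) / 4 for elem, level in annotated_series.items()}

def integrate(per_subject_results):
    """Integrate determination results from multiple subjects by
    averaging the importance of each element (averaging is assumed)."""
    scores = {}
    for result in per_subject_results:
        for elem, score in result.items():
            scores.setdefault(elem, []).append(score)
    return {elem: sum(s) / len(s) for elem, s in scores.items()}

# Annotated series for two subjects, keyed by video interval.
subjects = [
    {"00:00-00:01": 5, "00:01-00:02": 2},   # learner A
    {"00:00-00:01": 3, "00:01-00:02": 4},   # learner B
]
results = [determine_importance(s) for s in subjects]
integrated = integrate(results)   # discrimination integration information
```

An interval that many subjects found demanding would thus surface with a high integrated score, which is what the presentation information can then highlight.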
Each terminal device 200 or 1000 presents the presentation information transmitted from the server 100 and transmits feedback information to the server 100. At this time, the subject may review the presentation information together with the instructor and transmit, via the terminal device 200 or 1000, feedback information concerning updates to the importance determination logic or to the discrimination logic used in the discrimination integration process described later. That is, feedback information resulting from correcting and confirming the presentation information may be transmitted from each terminal device 200 or 1000 to the server 100. The server 100 updates the importance determination logic based on the feedback information transmitted from each terminal device 200 or 1000.
As described above, each time series information is exchanged between the server 100 and the terminal devices 200 and 1000, the information available for determining importance increases, so the importance determination logic and the discrimination logic used in the discrimination integration process improve. As a result, the importance of the series information can be determined more appropriately. When reviewing a video, the instructor can therefore focus on the information most likely to lead to improved instruction, and can use time efficiently to improve the instruction.
§2. Configuration Example
Hereinafter, each configuration of the server 100 (the information processing device constituting the information processing system 1), the learner terminal device 200, and the instructor terminal device 1000 in the present embodiment will be described in detail with reference to FIG. 1.
<Learner terminal device 200>
The learner terminal devices 200A, 200B, and 200C according to the present embodiment have, as an example, the same configuration. For example, the learner terminal device 200 includes a control unit 210, a communication unit 220, a storage unit 230, a camera 241, a microphone 242, an operation reception unit 243, a display unit (presentation unit) 244, and a speaker 245.
The communication unit 220 performs communication processing with external devices such as the server 100. The storage unit 230 is a storage device that stores various kinds of data. The operation reception unit 243 is an interface, such as a keyboard or other buttons, that receives input operations from the subject or other users. The display unit 244 is a display panel that displays moving images. The operation reception unit 243 and the display unit 244 may also be realized as a touch panel that both receives input operations and displays moving images. The control unit 210 is a control device that supervises the entire learner terminal device 200, and includes: a series information acquisition unit 212 that acquires series information via the camera 241 and the like; an annotation unit 214 that generates annotated series information by adding an annotation to each element included in the acquired series information; a presentation information acquisition unit 216 that acquires, via the communication unit 220, presentation information generated with reference to the annotated series information; and a discrimination integration information acquisition unit (not shown) that acquires discrimination integration information generated with reference to the determination results on the annotated series information of one or more subjects.
<Series information acquisition unit 212>
The series information acquisition unit 212 acquires series information about the subject. Examples of series information acquired by the series information acquisition unit 212 are listed below. As an example, these data are acquired during a mock test or a lecture, but this does not limit the present embodiment; they may be acquired in other situations.
・Images of the subject (which may include the subject's face) captured by the camera 241
・The subject's voice collected by the microphone 242
・Operation input by the subject (which may include text data) received by the operation reception unit 243
・Image, audio, text, and other data acquired via the communication unit 220
・Image, audio, text, and other data read from the storage unit 230
・Changes in temperature over time acquired from a temperature sensor (not shown)
・Changes in weather over time acquired via the communication unit 220
・Pulse data acquired from a sensor (not shown), used to read the subject's degree of tension, stress, concentration, and the like
The series information may also include information on the subject's line of sight. For example, the series information acquisition unit 212 acquires face information of the subject from the image acquired from the camera 241. The face information includes position information indicating the position of each part of the subject's face (for example, the eyes, nose, mouth, and eyebrows), shape information indicating their shapes, size information indicating their sizes, and the like; in addition, the subject's line of sight is detected as the subject's state. The line of sight is particularly important as an index of the subject's degree of concentration on a task. Detection of the subject's line of sight will be described later.
<Annotation unit 214>
The annotation unit 214 generates annotated series information by adding an annotation to each element included in the acquired series information, either automatically or based on a user's instruction. Annotations are attached to elements included in the series information. As an example, annotation A is attached to the 00:01-00:02 section of video data, and annotation B is attached to the 00:05-00:06 section. As another example, annotation X is attached to clause AA of sentence A included in text data, and annotation Y is attached to clause BB.
The annotation unit 214 may also add, as an annotation to the series information, information indicating whether the subject has visually recognized a specific region. The operation reception unit 243 accepts an operation of adding an annotation to the series information and an operation of inputting the importance, described later, for each part of the series information. For example, when the user instructs, via the operation reception unit 243, that an annotation be added to a certain part of the series information, the annotation unit 214 attaches the specified annotation to that part of the series information.
As a more specific example, the annotation unit 214 evaluates, from the recorded video, the subject's degree of concentration at each time point on a five-level scale, and attaches the concentration level as an annotation to each time point of the video, which is the series information. As another example, from a video of the subject during a mock test, the annotation unit 214 identifies the question text or question the subject is viewing at each time point, and attaches, as an annotation to each time point of the video, information indicating which question text or question is being viewed.
Attaching, as an annotation, information indicating which question text or question is being viewed is possible, for example, by matching the coordinates of the point the subject's gaze falls on with coordinates on the question sheet of the mock test.
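Matching the gaze coordinates to a question on the sheet amounts to a point-in-region lookup. The sketch below assumes axis-aligned rectangular question regions; the region layout and the coordinate units are invented for illustration.

```python
# Hypothetical sketch: map a gaze point (x, y) on the question sheet
# to the question whose rectangular region contains it.

QUESTION_REGIONS = {
    # question id -> (x_min, y_min, x_max, y_max); layout is assumed
    "Q1": (0, 0, 100, 50),
    "Q2": (0, 50, 100, 100),
}

def question_at(x, y):
    """Return the id of the question region containing (x, y),
    or None if the gaze falls outside every question region."""
    for question, (x0, y0, x1, y1) in QUESTION_REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return question
    return None
```

Running this lookup over a time series of gaze points yields, for each time point, the "which question is being viewed" annotation described above.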
The annotation processing by the annotation unit 214 is not limited to the above examples; for example, other image analysis algorithms or audio analysis algorithms may be used to determine the points to be annotated, and annotations may be attached to the determined points.
As an example, an annotation may be defined by a start time, an end time, a tag, and a reliability. Here, the tag is information indicating the type of annotation, for example, information indicating the subject's degree of concentration, degree of comprehension, and the like. The reliability is information indicating the certainty of the annotation in question. For each annotation attached by the annotation processing, the annotation unit 214 can calculate the above reliability, which indicates how trustworthy the annotation is. In addition to the annotation itself, the annotation unit 214 may attach information indicating the reliability of the annotation to each element in the series information.
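An annotation defined by a start time, an end time, a tag, and a reliability could be represented as a simple record like the following; the concrete field types, the seconds-based times, and the `covers` helper are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float        # start time in seconds within the series (assumed unit)
    end: float          # end time in seconds
    tag: str            # type of annotation, e.g. "concentration"
    reliability: float  # certainty of the annotation, assumed in [0, 1]

    def covers(self, t):
        """True if time t falls inside the annotated interval."""
        return self.start <= t < self.end

# An annotation on the 1.0-2.0 s section of a video, with reliability 0.9.
a = Annotation(start=1.0, end=2.0, tag="concentration", reliability=0.9)
```

Downstream processing (e.g. importance determination) can then weight each annotation by its `reliability` field.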
For example, the degree of concentration may be set to a relatively high value when the variation in what the subject is looking at is small (when the gaze varies little), and to a relatively low value when the variation is large (when the gaze varies widely). The annotation unit 214 can identify the subject's degree of concentration by acquiring and referring to the subject's line-of-sight information.
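The rule above, a higher concentration value when the gaze points vary little and a lower one when they vary widely, can be sketched as a variance threshold over a window of gaze samples. The threshold values and the mapping to a five-level scale are assumptions of this example.

```python
# Hypothetical sketch: derive a 1-5 concentration level from the
# spread (variance) of gaze points over a sampling window.

def gaze_variance(points):
    """Total variance of gaze points given as (x, y) tuples."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return sum((x - mx) ** 2 + (y - my) ** 2 for x, y in points) / n

def concentration_level(points, thresholds=(1.0, 4.0, 9.0, 16.0)):
    """Smaller spread -> higher level. Each threshold exceeded lowers
    the level by one; the threshold values themselves are assumed."""
    v = gaze_variance(points)
    level = 5
    for t in thresholds:
        if v > t:
            level -= 1
    return level
```

A steady gaze (all samples on one point) yields level 5, while widely scattered samples fall through every threshold down to level 1.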
For example, the degree of comprehension may be set to a high value when the time during which the subject visually recognizes the object to be understood is relatively long (or short).
For example, when the series information is a record of what the subject was viewing during a mock test, the annotation unit 214 may set the degree of comprehension by referring to the score of the mock test, entered by the subject via the operation reception unit 243 after the test, or to information indicating which questions were answered correctly. As another example, the annotation unit 214 may set a high degree of comprehension for the relevant part of the series information when the subject satisfies a predetermined condition, such as performing a specific action at a specific timing. Information indicating such predetermined conditions may be stored in advance in the storage unit 230 or the like.
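One simple way to realize this is to derive a comprehension value for the part of the series tied to each question from whether that question was answered correctly; the two-level high/low mapping below is an assumption of this sketch.

```python
# Hypothetical sketch: set a comprehension value for each question's
# portion of the series from the mock-test answer results.

def comprehension_by_question(correct_answers):
    """correct_answers: dict mapping question id -> True/False.
    Returns an assumed two-level comprehension value per question."""
    return {q: ("high" if ok else "low") for q, ok in correct_answers.items()}
```

The resulting per-question values can then be attached as comprehension annotations to the intervals of the series during which each question was being viewed.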
The annotation unit 214 may also update the discrimination logic (annotation logic) used when annotating the series information, for example by referring to the information included in the presentation information.
When the annotation unit 214 adds annotations based on user input, it may update the annotation logic by using the annotations attached by the user as feedback information. The annotation unit 214 may also update the annotation logic by referring to the discrimination integration information. That is, the annotation unit 214 may update the annotation logic by referring to at least one of the feedback information from the user and the discrimination integration information.
<Presentation information acquisition unit 216, display unit 244>
The presentation information acquisition unit 216 acquires the presentation information generated by the presentation information generation unit 118 of the server 100 with reference to the annotated series information.
The display unit 244 displays the presentation information acquired by the presentation information acquisition unit 216. The presentation information is not limited to a moving image presented via the display unit 244; it may be audio presented via the speaker 245. Like the instructor terminal device 1000 described later, the learner terminal device 200 may also be configured to receive feedback information from the user via the operation reception unit 243.
More specifically, as an example, the subject and the instructor check the presentation information displayed by the display unit 244 and, while having a dialogue, confirm the importance determination result included in the presentation information. Then, if there is a point regarding the importance that should be corrected, they input information including
・which part should be corrected, and
・how it should be corrected
into the operation reception unit 243. Here, a point to be corrected regarding the importance is, for example, an importance determination result that indicates a relatively high importance even though the subject has judged the corresponding presentation information to be unimportant. The operation reception unit 243 may be configured to generate feedback information including the input information and transmit it to the server 100 via the communication unit 220.
<Server 100>
As shown in FIG. 1, the server 100 of the information processing system 1 includes a control unit 110, a communication unit 120, and a storage unit 130. The communication unit 120 can communicate with the other information processing devices included in the information processing system 1 (the learner terminal device 200 and the instructor terminal device 1000). The storage unit 130 can store information transmitted from the other information processing devices, information integrated by the server, and the like.
The control unit 110 is a control device that supervises the entire server 100, and includes a series information acquisition unit 112, an importance determination unit 114, a discrimination integration unit 116, and a presentation information generation unit 118. The communication unit 120 performs communication processing with external devices such as the learner terminal device 200. The storage unit 130 is a storage device that stores various types of data.
<Series information acquisition unit 112>
The series information acquisition unit 112 acquires annotated series information in which an annotation is added to each element included in the series information regarding the subject. As an example, the series information acquisition unit 112 acquires, via the communication unit 120, information in which annotations are added to the series information about each subject transmitted from each learner terminal device 200.
<Importance determination unit 114>
The importance determination unit 114 refers to the annotated series information about the subject and determines the importance of each element included in the annotated series information of the subject.
The importance determination unit 114 may determine the importance based on the subject's degree of concentration. As an example, when a state in which the degree of concentration is equal to or higher than a predetermined value, or a state in which it is equal to or lower than a predetermined value, continues for a predetermined period or longer, the importance determination unit 114 determines that the corresponding section has a non-zero importance.
For example, assume that the server 100 acquires annotated series information as shown in the table below from each terminal device 200.

Time           00:01  00:02  00:03  00:04  00:05
Concentration
Subject A        1      5      5      3      1
Subject B        1      5      5      5      1
Subject C        2      5      5      5      5

In this case, as an example, the importance determination unit 114 determines a section in which a concentration level of 5 continues for 2 seconds or longer as a section having a non-zero importance. For example, since subject A maintains a concentration level of 5 for the 2 seconds from 00:02 to 00:03, the importance determination unit 114 determines that the importance of that section is 1. Since subject B maintains a concentration level of 5 for the 3 seconds from 00:02 to 00:04, the importance determination unit 114 determines that the importance of that section is 2. Since subject C maintains a concentration level of 5 for the 4 seconds from 00:02 to 00:05, the importance determination unit 114 determines that the importance of that section is 3.
Similarly, the importance determination unit 114 can be configured to determine a section in which a concentration level of 1 or 2 continues for 2 seconds or longer as a section having a non-zero importance.
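The run-length rule above can be sketched in a few lines. This is only an illustration under assumptions: the function and variable names (`run_importance`, `subject_a`, etc.) are hypothetical and not part of the disclosure, and the importance value of (run length − 1) simply mirrors the worked example (2 s → 1, 3 s → 2, 4 s → 3).

```python
def run_importance(series, level=5, min_len=2):
    """Return {(start, end): importance} for runs where the sampled
    concentration equals `level` for at least `min_len` samples.
    Importance is (run length - 1), matching the worked example."""
    sections = {}
    start = None
    for i, c in enumerate(list(series) + [None]):  # sentinel closes the last run
        if c == level and start is None:
            start = i
        elif c != level and start is not None:
            length = i - start
            if length >= min_len:
                sections[(start, i - 1)] = length - 1
            start = None
    return sections

# Concentration samples at 00:01 .. 00:05 for subjects A, B, C
# (indices 0..4 correspond to the table above).
subject_a = [1, 5, 5, 3, 1]
subject_b = [1, 5, 5, 5, 1]
subject_c = [2, 5, 5, 5, 5]
```

The same function applied with `level=1` or `level=2` covers the low-concentration variant mentioned above.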
Further, for example, the importance determination unit 114 may determine that a part is more important the lower (or the higher) the subject's degree of understanding is at that part. Alternatively, the importance determination unit 114 may determine that a part of a lecture or the like where the subject's line of sight begins to disperse (or begins to concentrate) has a high importance.
As another example, the importance determination unit 114 may perform regression analysis on the annotated series information and determine the importance with reference to various parameters obtained by the regression analysis. For example, regression analysis is applied to the annotated series information about a certain subject, and parameters representing a regression curve that fits the annotated series information are derived. The values of those parameters may then be compared with predetermined parameter values, and the importance may be determined according to the comparison result.
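A minimal sketch of this regression-based variant, under assumptions: the embodiment leaves the regression model and comparison rule open, so the linear fit, the `reference_slope` parameter, and the mapping from slope excess to importance below are all hypothetical choices for illustration.

```python
def linear_fit(ys):
    """Ordinary least-squares fit y = a*x + b over x = 0, 1, 2, ...
    Returns (a, b); the slope `a` plays the role of the regression
    parameter that is compared against a predetermined value."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

def importance_from_slope(ys, reference_slope=1.0):
    """Hypothetical rule: importance grows with how far the fitted
    slope exceeds the predetermined reference slope (floored at 0)."""
    slope, _ = linear_fit(ys)
    return max(0, round(slope - reference_slope))
```

For example, a series rising by 2 per sample against a reference slope of 1 yields importance 1, while a flat series yields importance 0.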
Further, the importance determination unit 114 updates the discrimination logic used for the importance determination process with reference to feedback information acquired via the communication unit 120. As an example, when the importance determination unit 114 acquires feedback information indicating that the importance of an element to which it assigned an importance of 5 should be lowered, it updates the discrimination logic so that a lower importance is assigned to that element.
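As a minimal stand-in for this feedback loop: the embodiment updates the discrimination logic itself, whereas for brevity this hypothetical `apply_feedback` applies signed corrections directly to stored importance values; the names and the correction format are illustrative assumptions, not part of the disclosure.

```python
def apply_feedback(importance_table, feedback):
    """Apply user feedback to an importance table.
    `feedback` maps an element key to a signed correction, e.g.
    {"00:02": -2} means "the importance at 00:02 should be 2 lower".
    Results are floored at 0 (no negative importance)."""
    updated = dict(importance_table)
    for key, delta in feedback.items():
        if key in updated:
            updated[key] = max(0, updated[key] + delta)
    return updated
```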
Note that the importance determination unit 114 may determine that a part is more important the more the subject's line of sight or movement during a certain period deviates from that of other subjects. The importance determination unit 114 may also determine that a part is more important the more the subject's line of sight or movement during a certain period deviates from a line of sight or movement set as a norm. Information indicating the normative line of sight or movement may be stored in advance in the storage unit 130 or the like.
Further, the importance determination unit 114 may perform the importance determination process on the assumption that a part that contributes highly to improving the discrimination logic used when the annotation unit 214 annotates the series information has a high importance.
<Discrimination integration unit 116>
The discrimination integration unit 116 refers to the determination results of the importance determination unit 114 for a plurality of subjects A, B, C, ... and generates discrimination integration information. For example, by collecting the series information acquired from each terminal device 200 and integrating it, the discrimination integration unit 116 can determine the importance of the series information more appropriately.
As an example, the discrimination integration unit 116 may extract common information from the determination results of the importance determination unit 114 for each of the plurality of subjects A, B, and C, and include it in the discrimination integration information.
For example, assume that the importance determination unit 114 determines the importance for subjects A, B, and C as follows.

Time           00:01  00:02  00:03  00:04  00:05
Importance
Subject A        0      1      1      0      0
Subject B        0      2      2      2      0
Subject C        0      3      3      3      3

In this case, as an example, the discrimination integration unit 116 judges that 00:02, a time to which a non-zero importance is assigned for all of subjects A, B, and C, is important, and includes this judgment result in the discrimination integration information.
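This common-extraction step can be sketched as follows. The sketch is an assumption for illustration (the names `common_important_times` and `determinations` are hypothetical); note that with the table above, 00:03 also carries a non-zero importance for all three subjects and is therefore extracted alongside 00:02.

```python
def common_important_times(per_subject):
    """Return, in chronological order, the times to which every
    subject's determination result assigns a non-zero importance."""
    subjects = list(per_subject.values())
    times = subjects[0].keys()
    return sorted(t for t in times
                  if all(s.get(t, 0) != 0 for s in subjects))

# Per-subject importance determinations from the table above.
determinations = {
    "A": {"00:01": 0, "00:02": 1, "00:03": 1, "00:04": 0, "00:05": 0},
    "B": {"00:01": 0, "00:02": 2, "00:03": 2, "00:04": 2, "00:05": 0},
    "C": {"00:01": 0, "00:02": 3, "00:03": 3, "00:04": 3, "00:05": 3},
}
```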
Further, for example, the discrimination integration unit 116 may include, in the discrimination integration information, information indicating that the subject's line of sight or movement during a certain period deviates from that of other subjects, together with information indicating the degree of the deviation. The discrimination integration unit 116 may likewise include information indicating that the subject's line of sight or movement during a certain period deviates from a line of sight or movement set as a norm, together with information indicating the degree of the deviation.
Further, for example, the discrimination integration unit 116 may include, in the discrimination integration information, meta information indicating matters related to the subject, such as the subject's past performance records, or meta information indicating matters related to the subject's environment, such as a mock examination the subject is taking. Each piece of meta information described above may also be included in the determination result of the importance determination unit 114.
As another example, in a configuration in which the importance determination unit 114 performs regression analysis on the annotated series information, the discrimination integration unit 116 may extract, from the parameters obtained by the regression analysis for subjects A, B, and C, parameters having common properties, and include those parameters in the discrimination integration information. The discrimination integration unit 116 may also refer to the regression models corresponding to the plurality of subjects and generate a regression model to be used in common. In other words, the discrimination integration unit 116 may take a plurality of discrimination algorithms, or the parameters they refer to, as input data, and output an integrated discrimination algorithm or the parameters it refers to. Further, in a configuration in which the subject or the instructor can input importance into the series information, the discrimination integration unit 116 can generate discrimination integration information that reflects, for example, the parts that a plurality of instructors considered important.
Further, the discrimination integration unit 116 updates the discrimination logic used for the discrimination integration process that generates the discrimination integration information, with reference to feedback information acquired via the communication unit 120. As an example, when the discrimination integration unit 116 acquires feedback information indicating that the importance of an element to which it assigned an importance of 5 should be lowered, it updates the discrimination logic so that a lower importance is assigned to that element.
Further, the discrimination integration unit 116 generates information for updating the importance discrimination logic, or an updated discrimination logic, and supplies it to the importance determination unit 114. The importance determination unit 114 updates the importance discrimination logic with reference to the acquired information. As a result, the result of the importance determination process can be updated to a more suitable one by using, for example, the series information corresponding to a plurality of subjects.
Further, the discrimination integration unit 116 generates information for updating the discrimination logic used for adding annotations, or an updated discrimination logic, and transmits it to the learner terminal device 200 via the communication unit 120. The annotation unit 214 updates that discrimination logic with reference to the acquired information. As a result, the annotation process can be updated to a more suitable one by using, for example, the series information corresponding to a plurality of subjects. Note that the discrimination integration unit 116 may refer to the discrimination integration information generated by itself when updating each of the discrimination logics described above.
Note that the annotation logic of the annotation unit 214, the importance discrimination logic of the importance determination unit 114, and the integration logic of the discrimination integration unit 116 are not limited to the above examples. The processing by the annotation unit 214, the importance determination unit 114, and the discrimination integration unit 116 may use rule-based logic, machine learning such as a neural network, or other methods.
For example, any of the following machine learning methods, or a combination thereof, can be used.
・Support vector machine (SVM)
・Clustering
・Inductive logic programming (ILP)
・Genetic programming (GP)
・Bayesian network (BN)
・Neural network (NN)
When a neural network is used, the input data may be processed in advance for input to the neural network. In addition to arranging the data into one-dimensional or multidimensional arrays, such processing can use techniques such as data augmentation.
When a neural network is used, a convolutional neural network (CNN) including convolution processing may be used. More specifically, a convolution layer that performs a convolution operation may be provided as one or more of the layers included in the neural network, and a filter operation (product-sum operation) may be performed on the input data fed to that layer. When performing the filter operation, processing such as padding may be used together, and an appropriately set stride width may be adopted.
Further, a multilayer or super-multilayer neural network with tens to thousands of layers may be used. The machine learning described above may be supervised learning or unsupervised learning.
<Presentation information generation unit 118>
The presentation information generation unit 118 generates presentation information with reference to at least one of the determination result of the importance determination unit 114 and the discrimination integration information. For example, information processed for each subject from the discrimination integration information integrated by the discrimination integration unit 116 is transmitted to the corresponding terminal device 200. The generated integration information and presentation information are stored in the storage unit 130.
Further, for example, the presentation information generation unit 118 may include information in the presentation information with higher priority the higher the importance indicated by the determination result of the importance determination unit 114. The presentation information generation unit 118 may also include information in the presentation information with higher priority the more strongly the information included in the discrimination integration information indicates that the subject's line of sight or movement during a certain period deviates from that of other subjects. Likewise, the presentation information generation unit 118 may include information in the presentation information with higher priority the more strongly the information included in the discrimination integration information indicates that the subject's line of sight or movement during a certain period deviates from a line of sight or movement set as a norm.
<Instructor terminal device 1000>
The instructor terminal device 1000 includes a control unit 1010, an operation reception unit 1043, a display unit 1044, a speaker 1045, a communication unit 1020, and a storage unit 1030.
The control unit 1010 is a control device that supervises the entire instructor terminal device 1000, and includes a presentation information acquisition unit 1014 and a feedback information acquisition unit 1012.
The communication unit 1020 performs communication processing with external devices such as the server 100. The storage unit 1030 is a storage device that stores various data. The operation reception unit 1043 is an interface that receives input operations of the instructor or the like, and is, for example, a set of buttons such as a keyboard. The display unit 1044 is a display panel that displays a moving image. The operation reception unit 1043 and the display unit 1044 may also be realized as a touch panel that both receives input operations of the instructor or the like and displays the moving image.
<Presentation information acquisition unit 1014, display unit 1044>
The presentation information acquisition unit 1014 acquires the presentation information generated by the presentation information generation unit 118 of the server 100 with reference to the annotated series information.
The display unit 1044 displays the presentation information acquired by the presentation information acquisition unit 1014.
<Feedback information acquisition unit 1012>
The feedback information acquisition unit 1012 acquires feedback information from the user. As an example, the instructor checks the presentation information displayed by the display unit 1044 and confirms the importance determination result included in the presentation information. Then, if there is a point regarding the importance that should be corrected, the instructor inputs information including
・which part should be corrected, and
・how it should be corrected
into the operation reception unit 1043. Here, a point to be corrected regarding the importance is, for example, an importance determination result that indicates a relatively high importance even though the instructor has judged the corresponding presentation information to be unimportant. The operation reception unit 1043 generates feedback information including the input information and transmits it to the server 100 via the communication unit 1020.
The importance determination unit 114 of the server 100 updates the discrimination logic with reference to the feedback information from the user.
Note that the feedback information acquisition unit 1012 may be provided in the instructor terminal device 1000 or in the learner terminal device 200.
<Detection of line of sight>
The mechanism by which the series information acquisition unit 212 and the annotation unit 214 detect the subject's line of sight is described below.
First, the camera 241 captures the subject during the mock examination. The series information acquisition unit 212 acquires the subject's face information from the moving image captured by the camera 241. The subject's face information includes, for example, position information indicating the position of each part of the face (for example, the eyes, nose, mouth, and eyebrows), shape information indicating the shape of each part, and size information indicating its size. In particular, from the eye information, the subject's degree of concentration on the object the subject is gazing at can be evaluated. Examples of eye information include the end points of the inner and outer corners of the eyes and the edges of the iris and the pupil. The series information acquisition unit 212 may appropriately apply correction processing such as noise reduction and edge enhancement to the moving image acquired from the camera 241. The series information acquisition unit 212 transmits the extracted face information to the annotation unit 214.
The annotation unit 214 detects the state of the subject based on the face information extracted by the series information acquisition unit 212. For example, it detects the state of each part of the subject's face: at least one of the subject's line of sight, the state of the pupils, the number of blinks, eyebrow movement, cheek movement, eyelid movement, lip movement, and jaw movement.
The method of detecting the subject's line of sight is not particularly limited. For example, the terminal device 200 may be provided with a point light source (not shown), and the destination of the subject's line of sight may be detected by capturing, with the camera 241 over a predetermined time, the corneal reflection image of the light from the point light source. The type of point light source is not particularly limited, and visible light or infrared light may be used; by using an infrared LED, for example, the line of sight can be detected without causing discomfort to the subject. In line-of-sight detection, if the line of sight does not move for a predetermined time or longer, it can be said that the subject is gazing at the same place.
The method of detecting the state of the pupil is also not particularly limited; for example, a circular pupil may be detected from an image of the eye by using the Hough transform. In general, humans tend to dilate their pupils when concentrating, so the subject's degree of concentration can be evaluated by detecting the pupil size. For example, if the pupil size is detected over a predetermined time, it can be said that the subject is highly likely to be gazing at some object during the periods within that time when the pupil is dilated. A threshold may be set for the pupil size, and the pupil may be evaluated as "open" when its size is equal to or larger than the threshold and "closed" when its size is less than the threshold.
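The threshold rule at the end of the paragraph above can be sketched as follows. This is a sketch under assumptions: the 4.0 mm threshold and the function names are illustrative only, and the pupil diameters are assumed to have already been measured (e.g. via Hough-transform circle detection).

```python
def pupil_states(sizes_mm, threshold_mm=4.0):
    """Classify each sampled pupil diameter as "open" (>= threshold)
    or "closed" (< threshold). The 4.0 mm threshold is illustrative."""
    return ["open" if s >= threshold_mm else "closed" for s in sizes_mm]

def dilated_fraction(sizes_mm, threshold_mm=4.0):
    """Fraction of the observation window during which the pupil is
    dilated; a high fraction suggests the subject is gazing attentively."""
    states = pupil_states(sizes_mm, threshold_mm)
    return states.count("open") / len(states)
```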
The method of detecting the number of blinks is also not particularly limited; for example, the eye may be irradiated with infrared light, and the difference in the amount of reflected infrared light between the open-eye state and the closed-eye state may be detected. In general, when humans are concentrating, they tend to blink at a low frequency and at stable intervals, so the subject's degree of concentration can be evaluated by detecting the number of blinks. For example, if the number of blinks is detected over a predetermined time and the blinks occur at stable intervals within that time, it can be said that the subject is highly likely to be gazing at some object.
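One way to operationalize "stable intervals" is to bound how far each inter-blink interval strays from the mean interval. The tolerance of 0.5 s and the function name below are illustrative assumptions, not part of the disclosure.

```python
def blink_intervals_stable(blink_times, max_jitter=0.5):
    """True if every interval between successive blink timestamps (s)
    deviates from the mean interval by at most `max_jitter` seconds,
    taken here as the 'stable intervals' condition."""
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if not intervals:
        return True
    mean = sum(intervals) / len(intervals)
    return all(abs(i - mean) <= max_jitter for i in intervals)
```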
The annotation unit 214 only needs to detect at least one of the subject's line of sight, the state of the pupils, the number of blinks, eyebrow movement, eyelid movement, cheek movement, nose movement, lip movement, and jaw movement, but it is preferable to combine them. By combining the detection methods in this way, the annotation unit 214 can suitably evaluate the subject's degree of concentration while the subject is viewing a certain object.
Besides the state of the eyes, examples of the states of the parts of the face include eyebrow movements such as raising the inner or outer ends of the eyebrows, eyelid movements such as raising the upper eyelids or tensing the eyelids, nose movements such as wrinkling the nose, lip movements such as raising the upper lip or pursing the lips, cheek movements such as raising the cheeks, and jaw movements such as lowering the jaw. The states of a plurality of parts of the face may be combined as the state of the subject.
 In addition to the line-of-sight information, referring to the detection results for the pupil state, the number of blinks, and the eyebrow, eyelid, cheek, nose, lip, and jaw movements, as described above, makes it possible to determine the subject's degree of concentration even more suitably. The subject's pulse data may also be further referred to in determining the subject's degree of concentration.
 The annotation unit 214 evaluates the determined degree of concentration of the subject on, for example, a five-level scale of 1 to 5, and attaches it to the subject's series information as an annotation.
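The embodiment does not prescribe a concrete algorithm for this five-level evaluation. The following is merely a minimal illustrative sketch of how a blink-based heuristic of the kind described above could map blink timestamps to a 1-to-5 score; the function name, window length, and all thresholds are assumptions for illustration, not part of the disclosure.

```python
from statistics import mean, stdev

def concentration_score(blink_times, window=60.0):
    """Rate concentration 1-5 from blink timestamps (seconds) in a window.

    Heuristic sketch: a concentrating subject blinks infrequently and at
    stable intervals, so a low blink rate combined with a low coefficient
    of variation of the inter-blink intervals maps to a high score.
    """
    if len(blink_times) < 3:
        return 3  # too few blinks to judge; return a neutral score
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    rate = len(blink_times) / window          # blinks per second
    cv = stdev(intervals) / mean(intervals)   # interval stability
    score = 5
    if rate > 0.5:    # more than ~30 blinks/min: unlikely to be concentrating
        score -= 2
    elif rate > 0.33:
        score -= 1
    if cv > 0.6:      # very irregular intervals
        score -= 2
    elif cv > 0.3:
        score -= 1
    return max(score, 1)

# Infrequent, perfectly stable blinks yield the top score
print(concentration_score([5, 10, 15, 20, 25, 30]))  # → 5
```

In an actual system the thresholds would themselves be tuned, for example via the feedback loop of steps S116 to S118 described below.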
§3. Operation example <Operation of information processing system 1>
 Next, the operation of the information processing system 1 will be described. FIG. 2 is a sequence diagram illustrating an operation example of the information processing system 1 according to the present embodiment. The process of generating presentation information in the information processing system 1 of the present embodiment will be described with reference to FIG. 2.
 (Step S102)
 In step S102, the annotation unit 214 generates annotated series information by attaching an annotation to each element included in the series information about the subject acquired by the series information acquisition unit 212. The communication unit 220 transmits the generated annotated series information to the server 100.
 (Step S104)
 In step S104, the series information acquisition unit 112 of the server 100 acquires the annotated series information.
 (Step S106)
 In step S106, the importance determination unit 114 refers to the annotated series information about the subject and determines the importance of each element included in that annotated series information.
 (Step S108)
 In step S108, the determination integration unit 116 refers to the importance determination results for the plurality of subjects A, B, C, and so on, and generates determination integration information.
 (Step S110)
 In step S110, the presentation information generation unit 118 generates presentation information by referring to at least one of the determination result from the importance determination unit 114 and the determination integration information. When the presentation information generation unit 118 generates the presentation information by referring only to the determination result from the importance determination unit 114, the process of step S108 need not necessarily be executed, and the process of this step S110 may be executed directly following the process of step S106. Thereafter, the communication unit 120 transmits the generated presentation information to each learner terminal device 200.
 (Step S112)
 In step S112, the presentation information acquisition unit 216 of each learner terminal device 200 acquires the generated presentation information.
 (Step S114)
 In step S114, the presentation unit (display unit) 244 presents the acquired presentation information.
 Further, in this step S114, the annotation unit 214 may refer to the information included in the presentation information and update the determination logic used when attaching annotations to series information.
 (Step S116)
 In step S116, the terminal device 200 (or the instructor terminal device 1000) acquires feedback information. As described above, the feedback information is generated as a result of the subject confirming the presentation information together with the instructor and confirming or correcting the determined importance. The communication unit 220 of the terminal device 200 transmits the feedback information to the server 100.
 (Step S118)
 In step S118, the importance determination unit 114 or the determination integration unit 116 of the server 100 updates its determination logic with reference to the feedback information from the learner terminal device 200. For example, the importance determination unit 114 updates the importance determination logic based on feedback information from the terminal device 200 that includes the confirmed or corrected importance.
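The form of this update is left open by the embodiment. As one purely hypothetical example, if the importance determination logic were rule-based with a deviation threshold, the confirmed or corrected importance values could nudge that threshold as sketched below; the function and the (deviation, importance) sample format are illustrative assumptions, not the disclosed method.

```python
def update_threshold(threshold, samples, lr=0.1):
    """Nudge a deviation threshold of an importance-determination rule
    toward feedback. `samples` is a list of (deviation, important) pairs
    taken from the confirmed/corrected presentation information.

    If a deviation below the threshold was marked important (or one at or
    above it was marked unimportant), move the threshold toward that
    deviation by a small learning-rate step.
    """
    for deviation, important in samples:
        if important and deviation < threshold:
            threshold -= lr * (threshold - deviation)  # lower the bar
        elif not important and deviation >= threshold:
            threshold += lr * (deviation - threshold)  # raise the bar
    return threshold

# An instructor marked a 6 s deviation important and a 12 s one unimportant
print(update_threshold(10.0, [(6.0, True), (12.0, False)]))
```

As the text notes below, the more series information (and hence feedback) the server 100 accumulates, the closer such a threshold would converge to the instructor's judgment.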
 (Step S120)
 When the series information acquisition unit 212 of the terminal device 200 acquires series information again, the annotation unit 214 generates annotated series information by attaching annotations to the series information. The subsequent steps S120 to S136 are the same as steps S102 to S118 described above.
 In the information processing system 1, steps S102 to S118 described above are repeated every time the learner terminal device 200 acquires series information.
 Accordingly, the server 100 updates each determination logic every time series information is acquired from a subject. As the amount of series information from subjects increases, each determination logic is improved, and each determination can be performed more suitably.
 <Other example 1 of the annotation processing, importance assignment processing, presentation information generation processing, and the like>
 The annotation processing, importance assignment processing, presentation information generation processing, and the like performed by the information processing system 1 according to the present embodiment are not limited to the examples described above. Other examples of the annotation processing, importance assignment processing, presentation information generation processing, and the like performed by the information processing system 1 according to the present embodiment will be described below.
 The series information in this example is information in which the visual targets of a subject during a mock examination are recorded. FIG. 3 is a diagram showing an example of an examination question to be recorded. In FIG. 3, regions R1 to R3 are regions defined in advance to distinguish the respective ranges. The marker 301 is a marker for identifying the coordinates of the point at which the subject's line of sight is directed as coordinates on the question sheet of the mock examination.
 During the recording of the series information, the annotation unit 214 according to this example attaches, to the series information, information indicating which part of the examination question the subject was viewing. For example, the annotation unit 214 attaches to the series information information indicating that the subject was viewing the region R1 at time 00:05, and so on.
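A minimal sketch of this region annotation is shown below, assuming the gaze coordinates have already been converted to question-sheet coordinates via the marker 301. The region rectangles and all coordinate values are assumptions for illustration; the embodiment does not specify them.

```python
# Regions R1-R3 of the question sheet as (left, top, right, bottom)
# rectangles in sheet coordinates (hypothetical values).
REGIONS = {
    "R1": (0, 0, 100, 40),
    "R2": (0, 40, 100, 80),
    "R3": (0, 80, 100, 120),
}

def region_of(x, y):
    """Return the predefined region containing the gaze point (x, y),
    or None if the point falls outside every region."""
    for name, (l, t, r, b) in REGIONS.items():
        if l <= x < r and t <= y < b:
            return name
    return None

def annotate_gaze(samples):
    """Turn (time, x, y) gaze samples into (time, region) annotations."""
    return [(t, region_of(x, y)) for t, x, y in samples]

print(annotate_gaze([(5, 50, 20), (6, 50, 90)]))  # → [(5, 'R1'), (6, 'R3')]
```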
 FIG. 4 is a diagram showing an example of presentation information. FIG. 4(A) is a diagram showing, along a time series, the model way of moving the line of sight when solving the target mock examination. FIG. 4(B) is a diagram showing, along a time series, how the line of sight of the subject who solved the mock examination actually moved. FIG. 4(C) is a diagram showing the importance at each time point attached to the series information by the importance determination unit 114. In each part of FIG. 4, the horizontal axis indicates time. The vertical axis of FIG. 4(C) indicates the level of importance.
 For example, in FIG. 4(C), location 303 indicates a location at which the importance determination unit 114 set a high importance because, when the subject read through the entire examination question, the variation in the line of sight was larger than that of the model line of sight. Location 304 indicates a location at which the importance determination unit 114 set a high importance because, when the subject was considering the answer, the proportion of time spent viewing the region R1, which is the key to the answer, was low, and the subject was therefore determined to have a low level of understanding.
 The importance determination unit 114 may set a high importance at a location where the difference between the information shown in FIG. 4(A) and the information shown in FIG. 4(B) is large, or at a location where it is small, and may set a high importance at the corresponding location in the series information when the proportion of time the subject spent viewing the region that is the key to the answer is high. FIG. 4(C) may also include, as the determination integration information generated by the determination integration unit 116, information indicating the importance set in the series information corresponding to other users, so that users can make comparisons.
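One conceivable way to realize the comparison between the model line of sight of FIG. 4(A) and the subject's line of sight of FIG. 4(B) is to compare the dispersion of gaze positions within matching time windows, as sketched below. The dispersion measure and the ratio threshold are illustrative assumptions, not the disclosed logic.

```python
from statistics import pstdev

def gaze_spread(xs):
    """Dispersion of horizontal gaze positions within one time window."""
    return pstdev(xs)

def importance_by_spread(model_windows, subject_windows, ratio=1.5):
    """Mark a time window as highly important when the subject's gaze
    spread greatly exceeds the model's in the same window, in the manner
    of location 303 in FIG. 4(C)."""
    scores = []
    for model_xs, subj_xs in zip(model_windows, subject_windows):
        m, s = gaze_spread(model_xs), gaze_spread(subj_xs)
        scores.append("high" if m > 0 and s / m > ratio else "normal")
    return scores

# Window 1: subject's gaze scatters far more than the model's
print(importance_by_spread([[10, 12, 11], [50, 52, 51]],
                           [[10, 30, 50], [50, 52, 51]]))
# → ['high', 'normal']
```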
 FIG. 5 is a diagram showing an example of presentation information displayed on the display unit 244 of the learner terminal device 200 (or the display unit 1044 of the instructor terminal device 1000). In FIG. 5, objects 311, 312, and 313 correspond to FIGS. 4(A), 4(B), and 4(C), respectively.
 The screen 315 is a playback screen for a video in which the subject's visual targets are recorded. The marker 301 need not be displayed on this screen.
 The object 317 in the seek bar 316 and the object 314 indicate which part of the video recording the subject's visual targets is being played back on the screen 315. The playback position of the video may be changeable by sliding the object 316 or the object 317 to the left or right. The object 318 is an operation panel for controlling playback of the video.
 The playback position of the video may also be changeable by selecting a part of the text on the screen 315. That is, when a part of the text is selected, playback of the video may start from the point at which the subject was viewing that part of the text.
 The display unit 244 may also display the screen illustrated in FIG. 5 together with a pie chart or the like showing the proportions of time the subject spent viewing the respective parts of the examination question. The same applies to a pie chart or the like for the model line of sight described above.
 The operation reception unit 243 accepts input of comments and the like corresponding to each part of the video. The input comments and the like are transmitted to the server 100 as feedback information via the communication unit 220.
 <Other example 2 of the annotation processing, importance assignment processing, presentation information generation processing, and the like>
 Another example of the annotation processing, importance assignment processing, and presentation information generation processing performed by the information processing system 1 according to the present embodiment will be described. Duplicate descriptions of matters already described in the above example will not be repeated.
 The series information in this example is information in which the visual targets of a subject learning customer service work at a restaurant or the like are recorded. FIG. 6 is a diagram showing an example of the subject's visual targets during learning of customer service work. The image shown in FIG. 6 shows the inside of the restaurant as seen by the learner, who is a member of the service staff, and shows customers, the learner's own hands, staff members other than the learner, tables, and the like.
 FIG. 7 is a table showing an example of data on the learner's visual targets, utterances (utterance content and voice tone), actions, and degree of concentration (blinking, pupil state, line-of-sight dwell time, and facial expression), and shows an example of the annotations attached to the series information at each time point by the annotation unit 214 according to this example.
 As shown in FIG. 7, the annotation unit 214 attaches, as an example, the visual target, utterance, action, and degree of concentration as annotations at each time point. An event is a unit consisting of a series of decisions (utterances, actions, and the like) made within a predetermined time from the timing at which the learner or a veteran customer service person visually recognizes an object. In one example, an event includes a timing, a visual target, an utterance, an action, and a degree of concentration, and each event is identifiable by an ID.
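As a concrete (purely illustrative) rendering of such an event record, the columns of FIG. 7 could be represented as follows; the field names and defaults are assumptions, since the embodiment does not fix a data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """One unit of decision-making made within a predetermined time of
    visually recognizing an object, following the columns of FIG. 7
    (the exact representation is an assumption for illustration)."""
    event_id: int                     # ID identifying the event
    timing: float                     # seconds from the start of recording
    visual_target: str                # e.g. "customer", "own hands"
    utterance: Optional[str] = None   # utterance content, if any
    voice_tone: Optional[str] = None  # e.g. "bright", "flat"
    action: Optional[str] = None      # e.g. "bow", "carry tray"
    concentration: int = 3            # 1-5 scale from the annotation unit

e = Event(event_id=1, timing=12.0, visual_target="customer",
          utterance="Welcome", action="bow", concentration=4)
print(e.event_id, e.visual_target)  # → 1 customer
```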
 Other than the state of the eyes, examples likewise include the states of individual facial parts, such as eyebrow movements (e.g., raising the inner or outer ends of the eyebrows), eyelid movements (e.g., raising the upper eyelids or tensing the eyelids), nose movements (e.g., wrinkling the nose), lip movements (e.g., lifting the upper lip or pursing the lips), cheek movements (e.g., lifting the cheeks), and jaw movements (e.g., lowering the jaw). The states of a plurality of facial parts may be combined as the state of the learner.
 As a method of determining the learner's degree of concentration from facial expressions, for example, when the eyebrows move upward, it can be determined that the learner is concentrating, because he or she is gazing more intently at the visual target. Also, for example, when the cheeks move upward while the learner is viewing a person's face, it can be determined that the learner is concentrating on the visual target, on the basis that he or she is forming a facial expression for the other person.
 FIG. 8 is a table showing an example of data on a veteran customer service person stored in advance in the storage unit 130. FIG. 8 shows, as an example of the data on the veteran customer service person, the timing at which an object was visually recognized, the visual target, the utterance content, the voice tone, the action, and the degree of concentration.
 FIG. 9 shows information on the deviation between the learner data shown in FIG. 7 and the veteran customer service person data shown in FIG. 8 (quantitative information indicated in the table by (learner - veteran customer service person) values with "+/-" signs), or information on whether the learner data matches the veteran customer service person data (qualitative information indicated by "○/×" in the table). As shown in FIG. 9, when the learner's data does not match the veteran customer service person's data, the information may include the veteran customer service person's data (the "correct answer" in the table).
 The importance determination unit 114 refers to the information shown in FIGS. 7 to 9 and determines the importance of each element included in the subject's annotated series information. For example, the information corresponding to the event ID "4" in FIG. 9 indicates that the subject made no utterance at a point where the veteran customer service person said, "Let me confirm your order." For such a location in the series information where the difference in behavior between the veteran customer service person and the subject is large, the importance determination unit 114 may set a higher importance than for other locations.
 Also, for example, the information corresponding to the event ID "3" indicates that there is a deviation of as much as 7 seconds between the veteran customer service person and the subject in the timing of performing the corresponding action. For such a location in the series information where the difference in the timing of actions between the veteran customer service person and the subject is large, the importance determination unit 114 may set a higher importance than for other locations. Further, for example, the importance determination unit 114 may set, for locations where the subject's degree of concentration is lower than a predetermined threshold, a higher importance than for other locations, even if the difference from the veteran customer service person's degree of concentration is relatively small.
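The heuristics described in the preceding two paragraphs can be collected into a single per-event scoring sketch. This is an illustrative assumption of how the importance determination unit 114 might combine them; the score weights, tolerance, and dict-based event format are hypothetical.

```python
def event_importance(learner, veteran, timing_tol=3.0, conc_threshold=3):
    """Assign a 1-5 importance to one learner event by comparing it with
    the matching veteran event (dicts keyed like FIG. 7 / FIG. 8 columns).

    Heuristics from the text: a large timing deviation, a missing
    utterance, or low learner concentration each raise the importance.
    """
    score = 1
    if abs(learner["timing"] - veteran["timing"]) > timing_tol:
        score += 2  # e.g. the 7-second deviation of event ID "3"
    if veteran.get("utterance") and not learner.get("utterance"):
        score += 2  # e.g. the missing "confirm your order" of event ID "4"
    if learner.get("concentration", 5) < conc_threshold:
        score += 1  # low concentration is flagged even when close to the veteran
    return min(score, 5)

learner = {"timing": 17.0, "utterance": None, "concentration": 4}
veteran = {"timing": 10.0, "utterance": "Let me confirm your order"}
print(event_importance(learner, veteran))  # → 5
```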
 The configuration of the playback screen for the presentation information illustrated in FIG. 5 is also applicable in this example. The presentation information generation unit 118 may also generate the presentation information such that the screen 315 of FIG. 5 includes text or the like indicating that there was a difference in behavior between the veteran customer service person and the subject at the relevant timing, together with the content of that difference.
 <Other example 3 of the annotation processing, importance assignment processing, presentation information generation processing, and the like>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing performed by the information processing system 1 according to the present embodiment will be described. Duplicate descriptions of matters already described in the above examples will not be repeated.
 The series information in this example is, as an example, information in which video data capturing the movements of an assembly worker in FA (factory automation), data captured by a wearable camera worn by the assembly worker, assembly sounds, and visual targets are recorded. The assembly workers may be a plurality of assembly workers who assemble the same product, or assembly workers who assemble the same product on different lines.
 FIG. 10 is a table showing an example of data on the subject's visual targets, actions, degree of concentration (blinking, line-of-sight dwell time, and pupil state), and assembly sounds, and shows an example of the annotations attached to the series information at each time point by the annotation unit 214 according to this example. As shown in FIG. 10, the annotation unit 214 attaches, as an example, the visual target, utterance, action, degree of concentration, and assembly sound as annotations at each time point.
 FIG. 11 is a table showing an example of data on a veteran assembly worker stored in advance in the storage unit 130. FIG. 11 shows, as an example of the data on the veteran assembly worker, the timing at which an object was visually recognized, the visual target, the action, the degree of concentration, and the assembly sound. FIG. 12 shows information on the deviation between the subject data shown in FIG. 10 and the veteran assembly worker data shown in FIG. 11 (quantitative information indicated in the table by (subject - veteran assembly worker) values with "+/-" signs), or information on whether the subject data matches the veteran assembly worker data (qualitative information indicated by "○/×" in the table). As shown in FIG. 12, when the subject's data does not match the veteran assembly worker's data, the information may include the veteran assembly worker's data (the "correct answer" in the table).
 The importance determination unit 114 refers to the information shown in FIGS. 10 to 12 and determines the importance of each element included in the subject's annotated series information. For example, the information corresponding to the event ID "3" in FIG. 10 indicates that the veteran assembly worker picks up the end of part B with the left hand, whereas the subject picks up the center of part B with the right hand. For such a location in the series information where the difference in actions between the veteran assembly worker and the subject is large, the importance determination unit 114 may set a higher importance than for other locations.
 Also, for example, the information corresponding to the event ID "7" relates to the process of assembling part B onto part A, and indicates that there is a deviation of as much as 18 seconds between the veteran assembly worker and the subject. That is, the time required for the same assembly operation differs by as much as 18 seconds between the veteran assembly worker and the subject. For locations in the series information where the difference in the time required for the same operation is large, the importance determination unit 114 may set a higher importance than for other locations. Further, for example, the importance determination unit 114 may set, for locations where the subject's degree of concentration is lower than a predetermined threshold, a higher importance than for other locations, even if the difference from the veteran assembly worker's degree of concentration is relatively small. Alternatively, the importance of each event may be specified by an assembly worker.
 The presentation information generation unit may integrate the above importance determination results and generate, for example, the following presentation information.
 ・For the subject, presentation information may be generated that presents points to be improved and points to be noted in each process. Based on this presentation information, a video work manual for new assembly workers may be created in which highly important processes, such as difficult processes and processes prone to mistakes, are shown in close-up or in slow playback.
 ・For assembly worker management by a manager, presentation information may be generated that presents the time-series changes in each assembly worker's performance and the locations where a plurality of assembly workers make mistakes.
 ・Further, for the improvement of each process, presentation information may be generated that presents points to be improved in the process itself, such as changing the process order, dividing a process into a plurality of processes, or integrating a plurality of processes, for the locations where most assembly workers make mistakes.
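The last item above requires aggregating importance determination results across workers to find the steps that most workers get wrong. A minimal sketch of that aggregation is shown below; the record format, the "more than half of workers" threshold, and the step names are hypothetical assumptions for illustration.

```python
from collections import Counter

def common_mistake_steps(deviation_records, threshold=0.5):
    """From per-worker deviation records such as
    {"worker": "A", "steps_with_mistakes": ["attach B"]}, return the
    process steps that more than `threshold` of all workers got wrong —
    candidates for reordering, splitting, or merging the process."""
    workers = {r["worker"] for r in deviation_records}
    counts = Counter(step for r in deviation_records
                     for step in set(r["steps_with_mistakes"]))
    return [step for step, n in counts.items() if n / len(workers) > threshold]

records = [
    {"worker": "A", "steps_with_mistakes": ["attach B", "tighten screw"]},
    {"worker": "B", "steps_with_mistakes": ["attach B"]},
    {"worker": "C", "steps_with_mistakes": ["polish"]},
]
print(common_mistake_steps(records))  # → ['attach B']
```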
 The configuration of the playback screen for the presentation information illustrated in FIG. 5 is also applicable in this example. The presentation information generation unit 118 may also generate the presentation information such that the screen 315 of FIG. 5 includes text or the like indicating that there was a difference in actions between the veteran assembly worker and the subject at the relevant timing, together with the content of that difference.
 <Other example 4 of the annotation processing, importance assignment processing, presentation information generation processing, and the like>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing performed by the information processing system 1 according to the present embodiment will be described. Here too, duplicate descriptions of matters already described in the above examples will not be repeated.
 The series information in this example is, as an example, information in which video data capturing the movements of a cook in a cooking scene, data captured by a wearable camera worn by the cook, cooking sounds, and visual targets are recorded. The cooks may be a plurality of cooks who cook the same menu item, and may be cooks who want to learn cooking personally at home, or employees who cook in the kitchen of a restaurant or supermarket.
 FIG. 13 is a table showing an example of data on the subject's visual targets, actions, degree of concentration (blinking, line-of-sight dwell time, and pupil state), and cooking sounds, and shows an example of the annotations attached to the series information at each time point by the annotation unit 214 according to this example. As shown in FIG. 13, the annotation unit 214 attaches, as an example, the visual target, utterance, action, and degree of concentration as annotations at each time point.
 FIG. 14 is a table showing an example of data on a veteran cook stored in advance in the storage unit 130. FIG. 14 shows, as an example of the data on the veteran cook, the timing at which an object was visually recognized, the visual target, the action, the degree of concentration, and the cooking sound. FIG. 15 shows information on the deviation between the subject data shown in FIG. 13 and the veteran cook data shown in FIG. 14 (quantitative information indicated in the table by (subject - veteran cook) values with "+/-" signs), or information on whether the subject data matches the veteran cook data (qualitative information indicated by "○/×" in the table). As shown in FIG. 15, when the subject's data does not match the veteran cook's data, the information may include the veteran cook's data (the "correct answer" in the table).
 The importance determination unit 114 refers to the information shown in FIGS. 13 to 15 and determines the importance of each element included in the subject's annotated series information. For example, the information corresponding to event ID "3" in FIG. 13 indicates that, when holding a kitchen knife, the experienced cook rests the index finger on the spine of the knife and grips it with the middle finger and thumb, whereas the subject grips the knife with the thumb and index finger. For such a portion of the series information where the difference in action between the experienced cook and the subject is large, the importance determination unit 114 may set a higher importance than for other portions.
 Also, for example, the information corresponding to event ID "5" indicates that, in the scene of cutting ingredients, the time required for the same cooking action differs by as much as 8 seconds between the experienced cook and the subject. For portions of the series information where the difference in time required for the same action is large, the importance determination unit 114 may set a higher importance than for other portions. Further, for example, the importance determination unit 114 may set a higher importance for portions where the subject's degree of concentration is below a predetermined threshold, even if the difference from the experienced cook's degree of concentration is relatively small. Alternatively, a cook may designate a high importance for difficult steps or for steps that affect taste or quality.
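 The rules described above — a large action mismatch, a large time gap, or low concentration each raising importance — can be sketched in code. The following is a minimal illustrative sketch, not the patent's actual implementation; the event fields, thresholds, and scores are assumptions chosen for the example.

```python
# Sketch of importance determination by comparing one of the subject's
# annotated events with the corresponding reference (experienced cook's)
# event. All field names, thresholds, and scores are assumptions.

TIME_DIFF_THRESHOLD = 5.0      # seconds; larger gaps raise importance
CONCENTRATION_THRESHOLD = 0.5  # below this, importance is raised regardless

def importance(subject_event, expert_event):
    """Return an importance score in [0, 1] for one aligned event pair."""
    score = 0.0
    # Large difference in time required for the same action.
    time_diff = abs(subject_event["duration"] - expert_event["duration"])
    if time_diff > TIME_DIFF_THRESHOLD:
        score = max(score, 0.8)
    # Qualitative mismatch in the action itself (the "×" rows of FIG. 15).
    if subject_event["action"] != expert_event["action"]:
        score = max(score, 0.9)
    # Low concentration matters even when it matches the expert's.
    if subject_event["concentration"] < CONCENTRATION_THRESHOLD:
        score = max(score, 0.7)
    return score

subject = {"action": "grip knife with thumb and index finger",
           "duration": 18.0, "concentration": 0.9}
expert = {"action": "index finger on spine, grip with middle finger and thumb",
          "duration": 10.0, "concentration": 0.9}
print(importance(subject, expert))  # action mismatch plus an 8 s gap -> 0.9
```

Taking the maximum of the rule scores is only one possible combination policy; a weighted sum or a learned model would fit the same interface.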
 The presentation information generation unit may integrate the above importance determination results and present, for example, the following presentation information.
 - For the subject, it may generate presentation information that points out what to improve and what to watch for in each cooking scene. Based on this presentation information, a video cooking manual may be created in which highly important steps, such as difficult steps and steps prone to mistakes, are shown in close-up or slow motion.
 - When the subjects are employees who cook in the kitchen of a restaurant or supermarket, it may generate presentation information for managers showing the time-series change in each cook's performance and the steps where multiple cooks make mistakes, to help managers supervise the cooks.
 - Furthermore, to improve each step, it may generate presentation information that points out how the process itself should be improved at steps where most cooks make mistakes, for example by changing the order of steps, splitting a step into multiple steps, or merging multiple steps.
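 The integration across cooks described in the bullets above — finding the steps that most cooks get wrong — can be sketched as a simple aggregation. The data layout and the majority threshold below are assumptions for illustration.

```python
# Sketch of integrating per-cook mistake flags to find process steps that
# most cooks get wrong (candidates for reordering, splitting, or merging
# steps). The data layout and the 50% threshold are assumptions.
from collections import defaultdict

def steps_to_improve(mistakes_by_cook, threshold=0.5):
    """mistakes_by_cook maps cook id -> set of step ids that cook got wrong.
    Returns the step ids missed by more than `threshold` of the cooks."""
    counts = defaultdict(int)
    for steps in mistakes_by_cook.values():
        for step in steps:
            counts[step] += 1
    n_cooks = len(mistakes_by_cook)
    return sorted(s for s, c in counts.items() if c / n_cooks > threshold)

mistakes = {
    "cook_A": {"cut_vegetables", "season"},
    "cook_B": {"cut_vegetables"},
    "cook_C": {"plate"},
}
print(steps_to_improve(mistakes))  # ['cut_vegetables'] -- missed by 2 of 3
```

The returned step ids would then feed the presentation information (for example, a suggestion to split the "cut_vegetables" step into smaller steps).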
 The configuration of the playback screen for presentation information illustrated in FIG. 5 is also applicable in this example. The presentation information generation unit 118 may also generate the presentation information so that the screen 315 of FIG. 5 includes text or the like indicating that there was a difference in action between the experienced worker and the subject at the target timing and describing the content of that difference.
<Other Example 5 of Annotation Processing, Importance Assignment Processing, Presentation Information Generation Processing, etc.>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing by the information processing system 1 according to the present embodiment will be described. Here again, descriptions duplicating matters already explained in the above examples are not repeated.
 The subjects in this example include players in sports. Annotations may be assigned by a coach, by the annotation unit 214, or by a combination thereof.
 Further, one coach may annotate multiple players, or multiple coaches may annotate one player. Furthermore, multiple coaches may annotate multiple players.
 Alternatively, the system may include a plurality of separately trained annotation units 214, and a plurality of annotation units may annotate one player. Furthermore, a plurality of annotation units may annotate a plurality of players.
 The series information in this example is, as an example, images of the player, audio, and body sensing data. The player may be a player of any sport; here, a baseball player (batter) is used as an example. The player may be a professional athlete, a student, or a working adult amateur.
 FIG. 16 is a table showing an example of data on the visual targets, actions, and degree of concentration (blinking, gaze dwell time, and pupil state) of player A (a player with a low batting average), and shows an example of the annotations that the annotation unit 214 according to this example assigns to the series information at each time point. As an example, the annotation unit 214 assigns the visual target, utterance, action, and degree of concentration as annotations at each time point.
 FIG. 17 shows, as an example of the data of player B (a player with a high batting average), the timing at which each object was viewed, the visual target, the action, the degree of concentration, and the sounds of play.
 FIG. 18 shows "deviation" information between the data of player A shown in FIG. 16 and the data of player B shown in FIG. 17 (quantitative information indicated by "+/-" values of (player A minus player B)), or information on whether player A's data matches player B's data (qualitative information indicated by "○/×" in the table). As shown in FIG. 18, when the data of player A and the data of player B do not match, the table may include player B's data (the "correct answer" in the table).
 The importance determination unit 114 refers to the information shown in FIGS. 16 to 18 and determines the importance of each element included in each player's annotated series information. For example, the information corresponding to event ID "1" in FIG. 16 indicates that player B grips the bat short, whereas player A grips it long. For such a portion of the series information where the difference in action between players is large, the importance determination unit 114 may set a higher importance than for other portions.
 Also, for example, the information corresponding to event ID "3" indicates that player A's swing speed is 135 km/h whereas player B's is 148 km/h, a difference of as much as 13 km/h. For portions of the series information where the difference in speed of the same action is large, the importance determination unit 114 may set a higher importance than for other portions. Further, for example, the importance determination unit 114 may set a higher importance for portions where a player's degree of concentration is below a predetermined threshold than for other portions. Alternatively, a high importance may be designated for actions or the like that differ greatly between players.
 The presentation information generation unit may integrate the above importance determination results and, for example, integrate information annotated by multiple coaches to generate, for each player, presentation information that points out what the player should improve in their play and which parts of the play (scenes in the video) should be improved.
 Information annotated by one or more coaches for multiple players may also be integrated to generate presentation information for coaches and other instructors that points out how their instruction could be improved. Furthermore, based on this presentation information, a video instruction manual may be created in which scenes important for successful batting are shown in close-up or slow motion.
 In the above example, play by an individual player, such as batting, is input as the series information; however, the series information may instead cover play or movement by multiple players, such as formations in soccer.
 Furthermore, the input data may include information about the spectators of the match. For example, video data capturing the spectators, the spectators' gaze information, and audio produced by the spectators may be used as input data.
 Based on these spectator data, the spectators' degree of attention and degree of excitement may be assigned as annotations for each scene during the match.
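 One way to turn raw spectator signals into the per-scene attention/excitement annotations described above is a simple per-scene aggregation. The signal names, normalization, and weighting below are illustrative assumptions, not the patent's actual method.

```python
# Sketch of deriving per-scene spectator attention/excitement annotations
# from raw spectator measurements. Field names and the averaging scheme
# are illustrative assumptions.

def annotate_scene(gaze_on_play_ratio, audio_level, motion_level):
    """Map spectator signals (each normalized to [0, 1]) to annotations."""
    attention = gaze_on_play_ratio                       # share of gazes on the play
    excitement = 0.5 * audio_level + 0.5 * motion_level  # cheering + movement
    return {"attention": round(attention, 2), "excitement": round(excitement, 2)}

scenes = [
    {"id": 1, "gaze": 0.9, "audio": 0.8, "motion": 0.7},  # e.g. a home run
    {"id": 2, "gaze": 0.3, "audio": 0.1, "motion": 0.2},  # e.g. between innings
]
annotated = [{**s, **annotate_scene(s["gaze"], s["audio"], s["motion"])}
             for s in scenes]
highlight = max(annotated, key=lambda s: s["attention"])
print(highlight["id"])  # scene 1 attracted the most spectator attention
```

The scene with the highest attention score is exactly what the presentation information generation unit would surface to players and coaches as a highlight.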
 The presentation information generation unit may generate presentation information for players and coaches that presents the scenes that attracted a high degree of spectator attention.
 In the following, other examples of the annotation processing, importance assignment processing, presentation information generation processing, and the like are described.
<Other Example 6 of Annotation Processing, Importance Assignment Processing, Presentation Information Generation Processing, etc.>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing by the information processing system 1 according to the present embodiment will be described. Here again, descriptions duplicating matters already explained in the above examples are not repeated.
 This example concerns electronic sports (e-sports), in which characters operated by players in the real world compete in a virtual space. The players may be multiple players who play the same e-sports title.
 The series information in this example includes, as an example, the players' inputs to a personal computer, game controller, or the like. The series information in this example may also include information on the movements of the characters in the virtual space generated from these inputs. The series information may further include images of the players, audio, and body sensing data. Moreover, as in the sports example above, the series information in this example may include information about e-sports spectators, such as information on the spectators' degree of attention and degree of excitement.
 The annotation unit 214 refers to the series information and, as an example, assigns the characters' movements and the spectators' degree of attention and degree of excitement as annotations at each time point.
 The importance determination unit 114 refers to the annotated series information of multiple players and determines the importance of each element included in that annotated series information. For example, for a portion of the series information where the difference in movement between characters is large, the importance determination unit 114 may set a higher importance than for other portions.
 Further, for example, the importance determination unit 114 may set a higher importance for portions where a player's degree of concentration is below a predetermined threshold than for other portions. Alternatively, a high importance may be designated for operations or the like that differ greatly between players.
 The presentation information generation unit may then generate presentation information for players that presents the scenes that attracted a high degree of attention from e-sports spectators.
 The presentation information generation unit may also integrate the above importance determination results and generate, for example, presentation information for each player that points out what the player should improve in their play and which parts of the play should be improved.
<Other Example 7 of Annotation Processing, Importance Assignment Processing, Presentation Information Generation Processing, etc.>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing by the information processing system 1 according to the present embodiment will be described. Here again, descriptions duplicating matters already explained in the above examples are not repeated.
 The series information in this example is video data capturing the actions of an office worker at office work, image data from a wearable camera worn by the office worker, work sounds, and records of what the worker looks at. The series information may also include the operation history of terminal devices such as a keyboard and mouse, application usage history, and the like.
 As an example, the annotation unit 214 assigns, as annotations at each time point, information on the visual target, utterance, action, degree of concentration, work sounds, terminal device operation history, application usage history, and the like.
 As an example, the importance determination unit 114 determines the importance of each element included in the annotated series information of the target office worker (subject) by referring to the subject's annotated series information and to reference annotated series information. Here, the reference annotated series information may include the annotated series information of other office workers, such as experienced workers.
 For example, the importance determination unit 114 may set a higher importance when the time the subject takes for text input is longer than that of other office workers. The importance determination unit 114 may also set a higher importance when the time the subject takes for spreadsheet work is longer than that of other office workers.
 The presentation information generation unit may integrate the above importance determination results and present, for example, the following presentation information.
 - For the subject, it may generate presentation information that points out what to improve and what to watch for in each work scene. Based on this presentation information, a video work manual may be created in which highly important tasks, such as difficult tasks and tasks prone to mistakes, are shown in close-up or slow motion.
 - Furthermore, to improve each task, it may generate presentation information that points out how the work itself should be improved for tasks where most office workers make mistakes, for example by changing the order of tasks, splitting a task into multiple tasks, or merging multiple tasks.
<Other Example 8 of Annotation Processing, Importance Assignment Processing, Presentation Information Generation Processing, etc.>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing by the information processing system 1 according to the present embodiment will be described. Here again, descriptions duplicating matters already explained in the above examples are not repeated.
 The series information in this example is, as an example, video data capturing the actions of a telephone operator (subject), image data from a wearable camera worn by the operator, audio data of exchanges with the caller, work sounds, and records of what the operator looks at. The series information may also include the operation history of terminal devices such as a keyboard and mouse.
 As an example, the annotation unit 214 assigns, as annotations at each time point, information on the subject's visual target, utterance, action, degree of concentration, work sounds, terminal device operation history, application usage history, and the like. The annotation unit 214 may also estimate the subject's degree of tension at each time point and assign an index indicating that degree of tension as an annotation. A known algorithm can be used to estimate the degree of tension.
 As an example, the importance determination unit 114 determines the importance of each element included in the subject's annotated series information by referring to the subject's annotated series information and to reference annotated series information. Here, the reference annotated series information may include the annotated series information of other telephone operators, such as experienced operators.
 For example, the importance determination unit 114 may set a higher importance when the subject's degree of tension is higher than that of other telephone operators. The importance determination unit 114 may also set a higher importance when the degree of tension toward a specific caller is higher than that toward other callers.
 The presentation information generation unit may integrate the above importance determination results and present, for example, the following presentation information.
 - For the subject, it may generate presentation information that points out what to improve and what to watch for in each telephone-handling scene. Based on this presentation information, a video work manual may be created in which highly important tasks, such as difficult or mistake-prone telephone responses, are shown in close-up or slow motion.
 - Furthermore, for managers, it may generate presentation information indicating the types of calls and the callers for which most operators' degree of tension is high.
<Other Example 9 of Annotation Processing, Importance Assignment Processing, Presentation Information Generation Processing, etc.>
 Still another example of the annotation processing, importance assignment processing, and presentation information generation processing by the information processing system 1 according to the present embodiment will be described. Here again, descriptions duplicating matters already explained in the above examples are not repeated.
 The series information in this example is, as an example, video data capturing the actions of a driving-school student (subject), image data from a wearable camera worn by the subject, audio data of exchanges with the instructor, driving sounds, and records of what the subject looks at. The series information may also include sensing data from controls such as the steering wheel and brakes.
 As an example, the annotation unit 214 assigns, as annotations at each time point, information on the visual target, action, degree of concentration, utterance, driving sounds, operation history of the controls, and the like. The annotation unit 214 may also estimate the subject's degree of tension at each time point and assign an index indicating that degree of tension as an annotation. A known algorithm can be used to estimate the degree of tension.
 As an example, the importance determination unit 114 determines the importance of each element included in the subject's annotated series information by referring to the subject's annotated series information and to reference annotated series information. Here, the reference annotated series information may include the annotated series information of a model driver such as an instructor.
 For example, the importance determination unit 114 may set a higher importance when, on a certain curve, the difference between the timing at which the subject starts turning the steering wheel and the timing at which the model driver starts turning it is larger than a predetermined value. The importance determination unit 114 may also set a higher importance when the degree of tension on a specific course is higher than on other courses.
 The presentation information generation unit may integrate the above importance determination results and present, for example, the following presentation information.
 - For the subject, it may generate presentation information that points out what to improve and what to watch for in each driving scene. Based on this presentation information, a video driving manual may be created in which highly important driving, such as difficult driving scenes, is shown in close-up or slow motion.
 - Furthermore, for instructors, it may generate presentation information indicating the courses and driving scenes where most students' degree of tension is high.
<Embodiment 2>
 The configuration of the information processing system in the present invention is not limited to that described above. FIG. 10 is a block diagram schematically illustrating a configuration example of the information processing system 1 according to Embodiment 2. The information processing system 1 of Embodiment 2 differs from that of Embodiment 1 in that the importance determination unit 215 is provided not in the server 100 but in the learner terminal device 200; the other components are the same.
 Next, the operation of the information processing system 1 of Embodiment 2 will be described. FIG. 11 is a sequence diagram illustrating an operation example of the information processing system 1 according to Embodiment 2. The process of generating presentation information in the information processing system 1 of Embodiment 2 will be described with reference to FIG. 11.
(Step S202)
 In step S202, the annotation unit 214 generates annotated series information by annotating each element included in the series information about the subject acquired by the series information acquisition unit 212.
(Step S206)
 In step S206, the importance determination unit 215 refers to the annotated series information about the subject generated by the annotation unit 214 and determines the importance of each element included in the subject's annotated series information. The communication unit 220 of the learner terminal device 200 transmits the importance determination result to the server 100.
(Step S207)
 In step S207, the series information acquisition unit 112 of the server 100 receives the importance determination result.
(Step S208)
 In step S208, the discrimination integration unit 116 of the server 100 generates discrimination integration information by referring to the importance determination results for a plurality of subjects A, B, C, and so on.
(Step S210)
 In step S210, the presentation information generation unit 118 generates presentation information by referring to at least one of the determination result by the importance determination unit 215 and the discrimination integration information. When the presentation information generation unit 118 generates the presentation information by referring only to the determination result by the importance determination unit 215, the process of step S208 need not necessarily be executed, and the process of step S210 may be executed following step S207. The communication unit 120 of the server 100 transmits the generated presentation information to each learner terminal device 200.
(Step S212)
 In step S212, the presentation information acquisition unit 216 of each learner terminal device 200 acquires the generated presentation information.
(Step S214)
In step S214, the presentation unit (display unit) 244 presents the acquired presentation information.
Further, in this step S214, the annotation unit 214 may update the discrimination logic used when annotating the series information by referring to the information included in the presentation information.
(Step S216)
In step S216, the terminal device 200 acquires feedback information. As described above, the feedback information is generated when the target person reviews the presentation information together with the instructor and confirms or corrects the determined importance. The feedback information is transmitted from the terminal device 200 to the server 100.
(Step S218)
In step S218, the importance determination unit 215 of the learner terminal device 200 updates the determination logic based on the feedback information. For example, the importance determination unit 215 updates the importance determination logic based on the feedback information including the confirmed or corrected importance. The discrimination integration unit 116 of the server 100 likewise updates the discrimination integration logic based on the feedback information.
(Step S220)
When the series information acquisition unit 212 of the terminal device 200 acquires series information again, in step S220 the annotation unit 214 generates annotated series information by annotating the series information. The subsequent steps S220 to S236 are the same as steps S202 to S218 described above.
In the information processing system 1 of the second embodiment, every time the learner terminal device 200 acquires the series information, the steps from step S202 to step S218 are repeated.
Therefore, the importance determination unit 215 of the learner terminal device 200 updates the importance determination logic each time series information is acquired from the target person. As the series information from the target person accumulates, the importance determination logic improves and the importance can be determined more suitably. The same applies to the discrimination logic used by the discrimination integration unit 116 for the discrimination integration process.
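A minimal sketch of this feedback loop, assuming the determination logic is a single importance threshold nudged by confirmed/corrected labels (the actual determination logic is left open by the specification, and the class, parameters, and update rule below are illustrative assumptions):

```python
class ImportanceDeterminer:
    """Toy determination logic: an element is important if its score
    exceeds a threshold. Feedback (confirmed/corrected labels) nudges
    the threshold, so the logic improves as series info accumulates."""

    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def determine(self, scores):
        # Importance determination for one batch of element scores.
        return [s > self.threshold for s in scores]

    def update_from_feedback(self, scores, corrected_labels):
        # Move the threshold toward the boundary implied by corrections.
        for score, important in zip(scores, corrected_labels):
            predicted = score > self.threshold
            if predicted and not important:
                # False positive: raise the threshold toward the score.
                self.threshold += self.learning_rate * (score - self.threshold)
            elif not predicted and important:
                # False negative: lower the threshold toward the score.
                self.threshold -= self.learning_rate * (self.threshold - score)

d = ImportanceDeterminer()
scores = [0.2, 0.6, 0.9]
print(d.determine(scores))                            # [False, True, True]
d.update_from_feedback(scores, [False, False, True])  # 0.6 was corrected
print(round(d.threshold, 3))                          # 0.51
```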
<Embodiment 3>
FIG. 12 is a block diagram schematically illustrating a configuration example of the information processing system 1 according to the third embodiment. The information processing system of the third embodiment differs from that of the first embodiment in that the presentation information generation unit 1013 is provided not in the server 100 but in the instructor terminal device 1000; the other components are the same.
Next, the operation of the information processing system 1 of the third embodiment will be described. FIG. 13 is a sequence diagram illustrating an operation example of the information processing system 1 according to the third embodiment. The process of generating the presentation information in the information processing system 1 of the third embodiment will be described with reference to FIG. 13.
(Step S302)
In step S302, the annotation unit 214 generates annotated series information by adding an annotation to each element included in the series information about the target person acquired by the series information acquisition unit 212. The communication unit 220 transmits the generated annotated series information to the server 100.
(Step S304)
In step S304, the series information acquisition unit 112 of the server 100 acquires the annotated series information.
(Step S306)
In step S306, the importance determination unit 114 refers to the annotated series information about the target person and determines the importance of each element included in the annotated series information of the target person.
(Step S308)
In step S308, the discrimination integration unit 116 refers to the importance determination results for the plurality of target persons A, B, C, ..., and generates the discrimination integration information. The generated discrimination integration information is transmitted to the instructor terminal device 1000.
(Step S310)
In step S310, the presentation information generation unit 1013 of the instructor terminal device 1000 generates presentation information by referring to at least one of the determination result of the importance determination unit 114 and the discrimination integration information. Note that when the presentation information generation unit 1013 generates the presentation information by referring only to the determination result of the importance determination unit 114, the process of step S308 need not be executed; the determination result of step S306 may be transmitted to the instructor terminal device 1000, and then the process of step S310 may be executed. The generated presentation information is transmitted to each learner terminal device 200.
(Step S312)
In step S312, in each learner terminal device 200, the presentation information acquisition unit 216 acquires the generated presentation information.
(Step S314)
In step S314, the presentation unit (display unit) 244 presents the acquired presentation information.
Further, in this step S314, the annotation unit 214 may update the discrimination logic used when annotating the series information by referring to the information included in the presentation information.
(Step S316)
In step S316, the instructor terminal device 1000 acquires feedback information. As described above, the feedback information is generated when the target person reviews the presentation information together with the instructor and confirms or corrects the determined importance. The feedback information is transmitted from the instructor terminal device 1000 to the server 100.
(Step S318)
In step S318, the importance determination unit 114 or the determination integration unit 116 of the server 100 updates each determination logic based on the feedback information from the instructor terminal device 1000. For example, the importance determination unit 114 updates the importance determination logic based on the feedback information including the confirmed / corrected importance from the instructor terminal device 1000.
(Step S320)
When the series information acquisition unit 212 of the terminal device 200 acquires series information again, the annotation unit 214 generates annotated series information by annotating the series information. The subsequent steps S320 to S336 are the same as steps S302 to S318 described above.
In the information processing system 1, every time the learner terminal device 200 acquires the series information, the steps from step S302 to step S318 are repeated.
Therefore, the server 100 updates each discrimination logic every time series information is acquired from the target person. As the series information from the target person accumulates, each discrimination logic improves and each determination can be performed more suitably.
[Example of realization by software]
The control blocks of the information processing device 100 (in particular, the series information acquisition unit 112, the importance determination unit 114, the discrimination integration unit 116, and the presentation information generation unit 118) and of each terminal device 200 (in particular, the series information acquisition unit 212, the annotation unit 214, and the presentation information acquisition unit 216) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
In the latter case, the information processing device 100 and each terminal device 200 include a computer that executes the instructions of a program, which is software realizing each function. The computer includes, for example, one or more processors and a computer-readable recording medium storing the program. In the computer, the processor reads the program from the recording medium and executes it, thereby achieving the object of the present invention. As the processor, for example, a CPU (Central Processing Unit) can be used. As the recording medium, a "non-transitory tangible medium" such as a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. A RAM (Random Access Memory) into which the program is loaded may further be provided. The program may be supplied to the computer via any transmission medium (a communication network, a broadcast wave, or the like) capable of transmitting the program. Note that one aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
[Summary]
In order to solve the above problems, an information processing device according to one aspect of the present invention includes: a series information acquisition unit that acquires annotated series information in which an annotation is added to each element included in series information about a target person; an importance determination unit that refers to the annotated series information about the target person and determines the importance of each element included in the annotated series information of the target person; a discrimination integration unit that refers to the determination results of the importance determination unit for a plurality of target persons and generates discrimination integration information; and a presentation information generation unit that generates presentation information by referring to at least one of the determination result of the importance determination unit and the discrimination integration information.
According to the above configuration, important information can be suitably determined and presented from among information having a series nature.
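To make the claimed pipeline concrete — annotate the series information, determine importance per target person, integrate across persons, and generate presentation information — here is a toy end-to-end sketch; every function name, threshold, and data shape is an illustrative assumption rather than the specification's actual logic:

```python
def annotate(series):
    # Annotation unit: tag each element (here, gaze dwell time in seconds).
    return [{"element": e, "dwell": d} for e, d in series]

def determine_importance(annotated):
    # Importance determination unit: longer dwell -> judged important.
    return {a["element"] for a in annotated if a["dwell"] >= 2.0}

def integrate(per_person_results):
    # Discrimination integration unit: keep info common to all persons.
    return set.intersection(*per_person_results)

def presentation_info(individual, integrated):
    # Presentation information generation unit: package both views.
    return sorted(individual), sorted(integrated)

series_a = [("fig1", 3.0), ("fig2", 0.5), ("eq1", 2.5)]
series_b = [("fig1", 2.2), ("fig2", 2.1), ("eq1", 0.3)]
result_a = determine_importance(annotate(series_a))
result_b = determine_importance(annotate(series_b))
common = integrate([result_a, result_b])
print(presentation_info(result_a, common))  # (['eq1', 'fig1'], ['fig1'])
```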
In the information processing device according to the above aspect, the importance determination unit may update the determination logic with reference to feedback information from the user.

According to the above configuration, the determination logic of the importance determination unit is updated with reference to feedback information from the user, so that the importance of the series information can be determined more accurately each time the importance determination is repeated.
In the information processing device according to the above aspect, the series information may include information regarding the line of sight of the target person.

According to the above configuration, the importance can be determined more suitably by referring to the information regarding the line of sight of the target person.
In the information processing device according to the above aspect, the discrimination integration unit may extract common information from the determination results of the importance determination unit for each of the plurality of target persons and include the common information in the discrimination integration information.

According to the above configuration, common information is extracted even when the series information differs among the target persons, so that the importance can be determined more suitably.
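The extraction of common information described above could be a strict intersection across all target persons or, more tolerantly, a quorum rule that keeps elements judged important by a majority. The sketch below implements the latter reading; the `quorum` parameter and function name are illustrative assumptions:

```python
from collections import Counter

def integrate_common(results, quorum=0.5):
    """Keep elements judged important by more than `quorum` of the
    target persons, even when their series information differs."""
    counts = Counter(e for result in results for e in result)
    n = len(results)
    return {e for e, c in counts.items() if c / n > quorum}

results = [{"fig1", "eq1"}, {"fig1", "fig2"}, {"fig1", "eq1"}]
print(sorted(integrate_common(results)))  # ['eq1', 'fig1']
```

With `quorum=0.5`, "fig1" (3 of 3 persons) and "eq1" (2 of 3) survive, while "fig2" (1 of 3) is dropped.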
In the information processing device according to the above aspect, the importance determination unit may determine, as the importance, a degree of concentration regarding the target person.

According to the above configuration, the importance of the series information can be determined with reference to the degree of concentration regarding the target person, so that the importance can be determined more suitably.
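Since the series information may include line-of-sight data, one conceivable way to estimate the degree of concentration is from gaze dispersion within a time window, low dispersion suggesting focused attention. A minimal sketch under that assumption (the window size and score formula are not from the specification):

```python
from statistics import pstdev

def concentration(gaze_x, window=3):
    """Per-window concentration score in (0, 1]: windows where the gaze
    position barely moves score high, scattered windows score low."""
    scores = []
    for i in range(0, len(gaze_x) - window + 1, window):
        spread = pstdev(gaze_x[i:i + window])  # dispersion within window
        scores.append(1.0 / (1.0 + spread))
    return scores

gaze = [100, 101, 100, 300, 120, 250]  # steady window, then scattered
print([round(s, 2) for s in concentration(gaze)])  # [0.68, 0.01]
```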
In order to solve the above problems, a terminal device according to one aspect of the present invention includes: a series information acquisition unit that acquires series information; an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information; a presentation information acquisition unit that acquires presentation information generated by referring to the annotated series information; and a presentation unit that presents the presentation information.

According to the above configuration, presentation information into which the pieces of series information are integrated can be acquired and presented to the user.
The terminal device according to the above aspect may further include a discrimination integration information acquisition unit that acquires discrimination integration information generated by referring to determination results regarding the annotated series information of a plurality of target persons, and the annotation unit may update the annotation logic with reference to at least one of feedback information from the user and the discrimination integration information.

According to the above configuration, annotation can be performed more suitably by updating the annotation logic with reference to these pieces of information.
In the terminal device according to the above aspect, the annotation unit may add, to each element, information indicating the reliability of the annotation in addition to the annotation.

According to the above configuration, each process that refers to the annotation can be executed more suitably.
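Attaching a reliability score to each annotation, as described above, might look like the following sketch, where a hypothetical classifier returns a label together with a confidence that is stored alongside each element (all names and the dwell-time heuristic are assumptions):

```python
def annotate_with_confidence(elements, classify):
    """Annotate each element and attach the classifier's confidence,
    so downstream importance determination can weight uncertain labels."""
    annotated = []
    for element in elements:
        label, confidence = classify(element)
        annotated.append({"element": element, "annotation": label,
                          "confidence": confidence})
    return annotated

def toy_classifier(element):
    # Hypothetical classifier: long dwell -> "focused" with high confidence.
    return ("focused", 0.9) if element["dwell"] > 2.0 else ("skimmed", 0.6)

series = [{"id": "p1", "dwell": 3.2}, {"id": "p2", "dwell": 0.4}]
for a in annotate_with_confidence(series, toy_classifier):
    print(a["element"]["id"], a["annotation"], a["confidence"])
# p1 focused 0.9
# p2 skimmed 0.6
```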
In order to solve the above problems, an information processing system according to one aspect of the present invention includes: an acquisition unit that acquires series information; an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information; an importance determination unit that refers to annotated series information about a certain target person and determines the importance of each element included in the annotated series information of the target person; a discrimination integration unit that refers to the determination results of the importance determination unit for a plurality of target persons and generates discrimination integration information; and a presentation information generation unit that generates presentation information by referring to at least one of the determination result of the importance determination unit and the discrimination integration information.

According to the above configuration, the same effects as those of the information processing device according to one aspect of the present invention can be obtained.
In order to solve the above problems, an information processing method according to one aspect of the present invention includes: an annotation step of generating annotated series information by adding an annotation to each element included in series information about a target person; an importance determination step of referring to the annotated series information and determining the importance of each element included in the annotated series information of the target person; a discrimination integration step of referring to the determination results in the importance determination step for a plurality of target persons and generating discrimination integration information; and a presentation information generation step of generating presentation information by referring to at least one of the determination result in the importance determination step and the discrimination integration information.

According to the above configuration, the same effects as those of the information processing device according to one aspect of the present invention can be obtained.
An information processing program according to one aspect of the present invention is an information processing program for causing a computer to function as any of the information processing devices described above, the program causing the computer to function as the series information acquisition unit, the importance determination unit, the discrimination integration unit, and the presentation information generation unit.

According to the above configuration, the same effects as those of the information processing device according to one aspect of the present invention can be obtained.
The present invention is not limited to the above-described embodiments, and various modifications are possible within the scope of the claims; embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention.
1 Information processing system
100 Information processing device
100 Server
110, 210, 1010 Control unit
112, 212 Series information acquisition unit
114, 215 Importance determination unit
116 Discrimination integration unit
118, 1013 Presentation information generation unit
120, 220, 1020 Communication unit
130, 230, 1030 Storage unit
200 Learner terminal device
214 Annotation unit
216, 1014 Presentation information acquisition unit
241 Camera
242 Microphone
243, 1043 Operation reception unit
244 Display unit (presentation unit)
1044 Display unit
245, 1045 Speaker
1000 Instructor terminal device
1012 Feedback information acquisition unit

Claims (11)

  1.  An information processing device comprising:
     a series information acquisition unit that acquires annotated series information in which an annotation is added to each element included in series information about a target person;
     an importance determination unit that refers to the annotated series information about the target person and determines the importance of each element included in the annotated series information of the target person;
     a discrimination integration unit that refers to determination results of the importance determination unit for a plurality of target persons and generates discrimination integration information; and
     a presentation information generation unit that generates presentation information by referring to at least one of the determination result of the importance determination unit and the discrimination integration information.
  2.  The information processing device according to claim 1, wherein the importance determination unit updates determination logic with reference to at least one of feedback information from a user and the discrimination integration information.
  3.  The information processing device according to claim 1 or 2, wherein the series information includes information regarding a line of sight of the target person.
  4.  The information processing device according to any one of claims 1 to 3, wherein the discrimination integration unit extracts common information from the determination results of the importance determination unit for each of the plurality of target persons and includes the common information in the discrimination integration information.
  5.  The information processing device according to any one of claims 1 to 4, wherein the importance determination unit determines, as the importance, a degree of concentration regarding the target person.
  6.  A terminal device comprising:
     a series information acquisition unit that acquires series information about a target person;
     an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information;
     a presentation information acquisition unit that acquires presentation information generated by referring to the annotated series information; and
     a presentation unit that presents the presentation information.
  7.  The terminal device according to claim 6, further comprising a discrimination integration information acquisition unit that acquires discrimination integration information generated by referring to determination results regarding annotated series information of a plurality of target persons,
     wherein the annotation unit updates annotation logic with reference to at least one of feedback information from a user and the discrimination integration information.
  8.  The terminal device according to claim 6 or 7, wherein the annotation unit adds, to each element, information indicating a reliability of the annotation in addition to the annotation.
  9.  An information processing system comprising:
     an acquisition unit that acquires series information;
     an annotation unit that generates annotated series information by adding an annotation to each element included in the acquired series information;
     an importance determination unit that refers to annotated series information about a certain target person and determines the importance of each element included in the annotated series information of the target person;
     a discrimination integration unit that refers to determination results of the importance determination unit for a plurality of target persons and generates discrimination integration information; and
     a presentation information generation unit that generates presentation information by referring to at least one of the determination result of the importance determination unit and the discrimination integration information.
  10.  An information processing method comprising:
     an annotation step of generating annotated series information by adding an annotation to each element included in series information about a target person;
     an importance determination step of referring to the annotated series information and determining the importance of each element included in the annotated series information of the target person;
     a discrimination integration step of referring to determination results in the importance determination step for a plurality of target persons and generating discrimination integration information; and
     a presentation information generation step of generating presentation information by referring to at least one of the determination result in the importance determination step and the discrimination integration information.
  11.  An information processing program for causing a computer to function as the information processing device according to any one of claims 1 to 5, the program causing the computer to function as the series information acquisition unit, the importance determination unit, the discrimination integration unit, and the presentation information generation unit.
PCT/JP2020/015187 2019-04-09 2020-04-02 Iinformation processing device, information processing system, information processing method, and information processing program WO2020209171A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019074291 2019-04-09
JP2019-074291 2019-04-09
JP2020032229A JP2020173787A (en) 2019-04-09 2020-02-27 Information processing apparatus, information processing system, information processing method, and information processing program
JP2020-032229 2020-02-27

Publications (1)

Publication Number Publication Date
WO2020209171A1 true WO2020209171A1 (en) 2020-10-15

Family

ID=72751077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/015187 WO2020209171A1 (en) 2019-04-09 2020-04-02 Iinformation processing device, information processing system, information processing method, and information processing program

Country Status (1)

Country Link
WO (1) WO2020209171A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04282129A (en) * 1991-03-08 1992-10-07 Fujitsu Ltd Gaze point analyzing device
JP2008139553A (en) * 2006-12-01 2008-06-19 National Agency For Automotive Safety & Victim's Aid Driving aptitude diagnosing method, evaluation standard determining method for driving aptitude diagnosis, and driving aptitude diagnostic program
WO2017018012A1 (en) * 2015-07-28 2017-02-02 ソニー株式会社 Information processing system, information processing method, and storage medium
US20180060150A1 (en) * 2016-08-26 2018-03-01 International Business Machines Corporation Root cause analysis
JP2018530798A (en) * 2015-08-15 2018-10-18 アイフルエンス, インコーポレイテッドEyefluence, Inc. Systems and methods for visual signals based on biomechanics for interacting with real and virtual objects


Similar Documents

Publication Publication Date Title
CN110349667B (en) Autism assessment system combining questionnaire and multi-modal model behavior data analysis
US8243132B2 (en) Image output apparatus, image output method and image output computer readable medium
US7506979B2 (en) Image recording apparatus, image recording method and image recording program
CN106599881A (en) Student state determination method, device and system
WO2017024845A1 (en) Stimulus information compiling method and system for tests
US20180060757A1 (en) Data annotation method and apparatus for enhanced machine learning
US8150118B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
US9498123B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
US20200090536A1 (en) Classroom assistance system
Dubbaka et al. Detecting learner engagement in MOOCs using automatic facial expression recognition
JP2020173787A (en) Information processing apparatus, information processing system, information processing method, and information processing program
CN115205764A (en) Online learning concentration monitoring method, system and medium based on machine vision
WO2020209171A1 (en) Information processing device, information processing system, information processing method, and information processing program
WO2022180860A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
WO2022168180A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
WO2022168185A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
JP7111042B2 (en) Information processing device, presentation system, and information processing program
Rao et al. Teacher assistance system to detect distracted students in online classroom environment
KR101996039B1 (en) Apparatus for constructing training template of facial emotion recognition and method thereof
JP7152825B1 (en) VIDEO SESSION EVALUATION TERMINAL, VIDEO SESSION EVALUATION SYSTEM AND VIDEO SESSION EVALUATION PROGRAM
WO2022180862A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
WO2022180854A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
WO2022180855A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
WO2022180858A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
WO2022180852A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20787496

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20787496

Country of ref document: EP

Kind code of ref document: A1