
WO2023079370A1 - System and method for enhancing quality of a teaching-learning experience - Google Patents


Info

Publication number
WO2023079370A1
WO2023079370A1 (PCT/IB2022/050037)
Authority
WO
WIPO (PCT)
Prior art keywords
user
teaching
module
learning
trained model
Prior art date
Application number
PCT/IB2022/050037
Other languages
French (fr)
Inventor
Bharani Kumar D
Original Assignee
Bharani Kumar D
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bharani Kumar D filed Critical Bharani Kumar D
Publication of WO2023079370A1 publication Critical patent/WO2023079370A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/101 Collaborative creation, e.g. joint development of products or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/001 Teaching or communicating with blind persons
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/009 Teaching or communicating with deaf persons
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/04 Devices for conversing with the deaf-blind
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Definitions

  • Embodiments of the present disclosure relate to education and upskilling, and more particularly to a system and method for enhancing quality of a teaching-learning experience.
  • a system to enhance quality of a teaching-learning experience includes a processing subsystem hosted on a server.
  • the processing subsystem is configured to execute on a network to control bidirectional communications among a plurality of modules.
  • the processing subsystem includes an input module.
  • the input module is configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration.
  • the processing subsystem also includes a monitoring module operatively coupled to the input module.
  • the monitoring module is configured to identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model.
  • the monitoring module is also configured to monitor an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots.
  • the processing subsystem also includes a complex topic identification module operatively coupled to the monitoring module.
  • the complex topic identification module is configured to track a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user.
  • the complex topic identification module is also configured to identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking.
  • the complex topic identification module is also configured to identify one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots.
  • the processing subsystem also includes an interaction module operatively coupled to the complex topic identification module.
  • the interaction module is configured to generate a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics.
  • the interaction module is also configured to generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time. Further, the interaction module is also configured to build an interactive digital assistant using the knowledge trained model.
  • the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form.
  • the processing subsystem also includes a recommendation module operatively coupled to the interaction module.
  • the recommendation module is configured to build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time.
  • the training data includes at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data.
  • the recommendation module is also configured to generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience.
  • the one or more recommendations correspond to information for the first user based on the training data.
  • a method for enhancing quality of a teaching-learning experience includes receiving multimedia data corresponding to a first user experiencing a teaching-learning session upon registration.
  • the method also includes identifying one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model.
  • the method also includes monitoring an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots.
  • the method also includes tracking a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user.
  • the method also includes identifying one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking. Furthermore, the method also includes identifying one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots. Furthermore, the method also includes generating a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics.
  • the method also includes generating a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time. Furthermore, the method also includes building an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form. Furthermore, the method also includes building a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time, wherein the training data comprises at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data.
  • the method also includes generating one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, wherein the one or more recommendations correspond to information for the first user based on the training data, thereby enhancing the quality of the teaching-learning experience.
  • FIG. 1 is a block diagram representation of a system to enhance quality of a teaching-learning experience in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a block diagram representation of an exemplary embodiment of the system to enhance the quality of the teaching-learning experience of FIG. 1 in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a block diagram of a teaching-learning quality enhancing computer or a teaching-learning quality enhancing server in accordance with an embodiment of the present disclosure.
  • FIG. 4 (a) and FIG. 4 (b) are flow charts representing steps involved in a method for enhancing quality of a teaching-learning experience in accordance with an embodiment of the present disclosure.
  • Embodiments of the present disclosure relate to a system to enhance quality of a teaching-learning experience.
  • the term “teaching-learning experience” is defined as an act of experiencing a teaching-learning session by a user.
  • the teaching-learning session may be a live training session, a live knowledge sharing session, a live classroom session, a recorded video about a subject matter, or the like. Therefore, during such a teaching-learning session, it is important to know how well the content in the corresponding teaching-learning session is being absorbed by a viewer, which corresponds to knowing the quality of the teaching-learning experience of the viewer of the corresponding teaching-learning session. Upon knowing the quality, it can be enhanced by performing certain operations. Further, the system described hereafter in FIG. 1 is the system to enhance the quality of the teaching-learning experience.
  • FIG. 1 is a block diagram representation of a system (10) to enhance quality of a teaching-learning experience in accordance with an embodiment of the present disclosure.
  • the system (10) includes a processing subsystem (20) hosted on a server (30).
  • the server (30) may include a cloud server.
  • the server (30) may include a local server.
  • the processing subsystem (20) is configured to execute on a network (not shown in FIG. 1) to control bidirectional communications among a plurality of modules.
  • the network may include a wired network such as a local area network (LAN).
  • the network may include a wireless network such as wireless fidelity (Wi-Fi), Bluetooth, Zigbee, near field communication (NFC), radio-frequency identification (RFID), infrared communication, or the like.
  • the processing subsystem (20) includes an input module (40).
  • the input module (40) is configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration.
  • the multimedia data may include one or more images, one or more videos, or the like.
  • the first user may correspond to a student, a trainee, an employee, a team member, a conference attendee, or the like. Suppose the first user is experiencing the teaching-learning session on a virtual platform via a communication means.
  • the teaching-learning session may include content that is either displayed as a presentation or explained by a second user in a predefined form and a predefined language.
  • the second user may include a teacher, a trainer, a leader, a manager, a mentor, or the like.
  • the predefined form may include a speech form, a sign language form, a textual form, or the like.
  • the predefined language may include English, Hindi, Kannada, or the like.
  • the communication means may include a voice call, a video call, a video conference, or the like.
  • the multimedia data may be captured via a multimedia data capturing device, wherein the multimedia data capturing device may be operatively coupled to the input module (40).
  • the multimedia data capturing device may include a mobile phone camera, a tablet camera, a laptop camera, a video camera, or the like.
  • the input module (40) may receive the same in real-time during the teaching-learning session.
  • the teaching-learning session may be a pre-recorded video with audio, and the first user may be watching the same on the corresponding virtual platform.
  • the processing subsystem (20) may also include a registration module (as shown in FIG. 2) operatively coupled to the input module (40).
  • the registration module may be configured to register the first user with the system (10) upon receiving a plurality of first user details via a first user device.
  • the plurality of first user details may include a first username, contact details, qualification details, occupation details, or the like.
  • the plurality of first user details may be stored in a database (as shown in FIG. 2) of the system (10).
  • the database may include a local database or a cloud database.
  • the first user device may include a mobile phone, a tablet, a laptop, or the like.
  • the first user may choose to experience the teaching-learning session according to a predefined schedule upon choosing a predefined learning course.
  • the predefined schedule may include one hour daily, one hour weekly, two hours daily, two hours weekly, two days in a week, or the like.
  • the predefined learning course may correspond to a data science course, a digital marketing course, an Internet of Things (IoT) course, or the like.
  • the system (10) may have to analyze the corresponding multimedia data to check the attention of the first user and to identify one or more parts of the corresponding teaching-learning session during which the first user may have been distracted while experiencing the corresponding teaching-learning session.
  • the processing subsystem (20) also includes a monitoring module (50) operatively coupled to the input module (40).
  • the monitoring module (50) is configured to identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model.
  • the responsive behavior of the first user may be considered as positive, when the responsive behavior may correspond to nodding head as a response to understanding and attentively listening to the content from the corresponding teaching-learning session, a joyful facial expression, a surprised facial expression, or the like.
  • the responsive behavior of the first user may be considered negative when the responsive behavior may correspond to shaking the head as a response to not understanding the content in the corresponding teaching-learning session, a drowsy facial expression, a frowning facial expression, a sad facial expression, being distracted by a mobile phone, a confused facial expression, diversion in a gaze of the first user, facial expressions indicating that the attention of the first user is away from the teaching-learning session, or the like.
  • the multimedia data corresponding to the first user may have to be analyzed.
  • the multimedia data may be analyzed using a machine learning technique.
  • machine learning is defined as an application of artificial intelligence that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
  • the machine learning technique may include an image processing technique, a facial expression recognition technique, a sentiment analysis technique, a speech recognition technique, a natural language processing (NLP) technique, and the like.
  • image processing is defined as a method of manipulating an image to either enhance the quality or extract relevant information from it.
  • the facial expression recognition technique may be used to identify one or more facial expressions of the first user, so that one or more emotions of the first user can be identified.
  • facial expression recognition technique is defined as a technology which uses biometric markers to detect emotions in human faces. Basically, the facial expression recognition technique performs a face detection operation, a facial landmark detection operation, and finally performs a facial expression and emotion classification operation.
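
The three-stage pipeline just described can be illustrated in a few lines. The sketch below is a minimal example, assuming opencv-python is installed; the stock Haar cascade performs the face detection stage, while classify_emotion is a hypothetical placeholder for the landmark detection and emotion classification stages, which a real deployment would back with a trained model such as the behavior identification trained model described below.

```python
# Minimal sketch of the face detection -> landmark detection -> emotion
# classification pipeline. Assumes opencv-python; "classify_emotion" is a
# hypothetical placeholder, not an OpenCV API.
import cv2

# Stage 1: face detection with a stock Haar cascade bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def classify_emotion(face_img):
    """Hypothetical stand-in for stages 2 and 3; a real system would run
    landmark detection and a trained emotion classifier here."""
    return "neutral"

def detect_emotions(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [classify_emotion(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]
```
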
  • a model may have to be trained which may then be capable of identifying the one or more emotions of the first user.
  • the behavior identification trained model may be generated by the monitoring module (50) by training the corresponding behavior identification trained model with a plurality of images and a plurality of videos of people from different countries, genders, sexual orientations, skin tones, and the like, so that the behavior identification trained model is not biased.
  • one or more dimensions such as, but not limited to, face symmetry, facial contrast, the pose the face is in, a length of one or more facial attributes, a width of the one or more facial attributes, and the like may also be considered.
  • the one or more facial attributes may include eyes, nose, forehead, or the like.
  • the multimedia data may be analyzed for identifying the responsive behavior of the first user.
  • since the system (10) may be analyzing the multimedia data, there is a possibility of a data privacy issue and a data security issue; to address such issues, the system (10) may use a batch processing technique, so that none of the multimedia data is stored in a database of the system (10) after the corresponding multimedia data is analyzed or processed.
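
As a rough illustration of such batch processing, the sketch below consumes a stream of frames in fixed-size batches, keeps only the derived behavior labels, and lets each raw batch go out of scope; the frame source and the model's predict method are assumptions, not part of the disclosure.

```python
# Privacy-motivated batch processing sketch: raw frames are analyzed in
# memory and only the derived labels survive; nothing is persisted.
from itertools import islice

def batches(frames, size=32):
    """Yield frames in fixed-size batches from any iterable."""
    it = iter(frames)
    while batch := list(islice(it, size)):
        yield batch

def analyze_stream(frames, model):
    labels = []
    for batch in batches(frames):
        labels.extend(model.predict(batch))  # hypothetical model interface
        # the batch is dropped here, so raw multimedia is never retained
    return labels
```
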
  • built-in clustering may be implemented using modern equipment or one or more modern services.
  • a timeline associated with the teaching-learning session when the responsive behavior may be negative may be identified, by simultaneously tracking the corresponding timeline and associating a first flag with the corresponding timeline.
  • the one or more timeslots on the timeline having the first flag may be identified as the one or more first timeslots associated with the teaching-learning session when the responsive behavior of the first user is negative.
  • the corresponding one or more first timeslots may then be provided with an accessible link for the corresponding first user to click the corresponding accessible link and watch the corresponding part of the teaching-learning session to make learning complete and comprehensive.
  • the monitoring module (50) is also configured to monitor the attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots.
  • the predefined time period may include a complete timeline associated with the teaching-learning session, a predefined percentage of the complete timeline, or the like.
  • the attention level may correspond to a high attention level when the count of the corresponding one or more accessible links may be less than or equal to a threshold link count value. In an alternative embodiment, the attention level may correspond to a low attention level when the count of the corresponding one or more accessible links may be greater than the threshold link count value.
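
A minimal sketch of this flagging and thresholding logic is given below; the per-second labels and the threshold link count value of 5 are illustrative assumptions, not values stated in the disclosure.

```python
def first_timeslots(behavior_by_second):
    """Timeslots (seconds) the behavior identification model labeled negative."""
    return [t for t, label in sorted(behavior_by_second.items()) if label == "negative"]

def attention_level(link_count, threshold=5):
    """High attention when the number of flagged, linkable timeslots is at
    or below the threshold link count value; low otherwise."""
    return "high" if link_count <= threshold else "low"

labels = {0: "positive", 1: "negative", 2: "negative", 3: "positive"}
slots = first_timeslots(labels)        # [1, 2]
print(attention_level(len(slots)))     # "high": only 2 flagged slots
```
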
  • the term “sentiment analysis technique” refers to a technique which identifies a sentiment or an emotion of people who are a part of a session conducted online or offline by processing one or more image frames, one or more video frames, an audio, a speech, a text, and the like.
  • speech recognition technique is defined as a methodology and a technology that enables the recognition and translation of spoken language into text by computers.
  • natural language processing is defined as a branch of artificial intelligence that helps computers understand, interpret and manipulate human language.
  • historic data may be used to train a model that may be used for an operation of the natural language processing, wherein the historic data may include a word dictionary of a plurality of words with meanings.
  • the first user may watch the one or more parts of the teaching-learning session a certain number of times, as the first user may have found the corresponding one or more parts a little difficult to understand, or the content in the corresponding one or more parts may have one or more complex topics.
  • the processing subsystem (20) also includes a complex topic identification module (60) operatively coupled to the monitoring module (50).
  • the complex topic identification module (60) is configured to track the timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user.
  • the predefined tracking technique may refer to a technique which keeps checking for an occurrence of an event associated with the timeline of the corresponding teaching-learning session.
  • the event may include re-playing of the corresponding teaching-learning session at a particular timeslot on the corresponding timeline.
  • the complex topic identification module (60) is also configured to identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking.
  • a second flag may be associated with the timeline based on the identification of the one or more second timeslots.
  • the one or more timeslots having the second flag may be identified as the one or more second timeslots.
  • the count of times one or more parts of the teaching-learning session may be re-played may be the count of one or more second flags on the timeline.
  • the complex topic identification module (60) is also configured to identify one or more topics covered in the corresponding teaching-learning session as the one or more complex topics based on the identification of the corresponding one or more second timeslots.
  • suppose a first part of the teaching-learning session has a second-flag count of four and a second part of the teaching-learning session has a second-flag count of ten. Then, as the count for the second part is greater than the count for the first part, the second part may cover the one or more complex topics in comparison to the one or more topics covered in the first part of the corresponding teaching-learning session.
  • the one or more complex topics that may be covered in the corresponding one or more parts may be identified.
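
The second-flag logic in the preceding bullets reduces to a frequency count per timeslot. The sketch below reuses the four-versus-ten example from the text, with a threshold re-play count value of 3 chosen only for illustration.

```python
from collections import Counter

def complex_timeslots(replay_events, threshold=3):
    """replay_events holds one timeslot id per observed re-play (a "second
    flag"); timeslots re-played more than the threshold are flagged."""
    counts = Counter(replay_events)
    return {slot: n for slot, n in counts.items() if n > threshold}

events = ["part1"] * 4 + ["part2"] * 10
print(complex_timeslots(events))  # {'part1': 4, 'part2': 10}; part2 ranks higher
```
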
  • the processing subsystem (20) also includes an interaction module (70) operatively coupled to the complex topic identification module (60).
  • the interaction module (70) is configured to generate a data set by converting the content in the corresponding teaching-learning session from the predefined form to the textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics.
  • the interaction module (70) is also configured to generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using the machine learning technique in real-time.
  • the interaction module (70) is also configured to build an interactive digital assistant using the knowledge trained model.
  • the interactive digital assistant is adapted to interact with the first user based on the one or more queries of the corresponding first user, in the predefined form.
  • the textual data generation technique may refer to a speech-to-text conversion technique, wherein the speech-to-text conversion technique works similarly to the speech recognition technique as defined above.
  • the text extracted from the teaching-learning session may be used as a corpus or the data set to train the knowledge trained model. Later, the corresponding knowledge trained model may be used to generate the interactive digital assistant, which is intelligent enough to answer the one or more queries of the first user in real-time.
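
As one possible realization of the textual data generation step, the sketch below transcribes session audio with the third-party SpeechRecognition package; the file name and the use of Google's free web recognizer are assumptions.

```python
# Speech-to-text sketch for building the training corpus. Assumes the
# SpeechRecognition package and a WAV file extracted from the session.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("session_audio.wav") as source:  # assumed file name
    audio = recognizer.record(source)

# The transcript becomes one document in the knowledge model's corpus.
corpus = [recognizer.recognize_google(audio)]
```
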
  • the interactive digital assistant may be a chatbot.
  • the interactive digital assistant may be a three-dimensional (3-D) image of the second user displayed in a 3-D space, wherein the 3-D image of the second user is generated using an augmented reality (AR) technology.
  • augmented reality is defined as a system that incorporates three basic features such as a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects.
  • the interactive digital assistant may convert the one or more queries in the speech form into the one or more queries in the textual form using the speech to text conversion technique. Then, the interactive digital assistant may generate the response for the corresponding one or more queries using the knowledge trained model in the textual form and then convert back to the speech form and narrate the same to the first user. Basically, availability of such an option may help visually impaired people as such people may not be able to see the response for the one or more queries asked by them.
  • the interactive digital assistant may convert the one or more queries in the sign language form into the one or more queries in the textual form using one or more conversion techniques upon receiving the one or more queries in the sign language form.
  • the one or more conversion techniques may include a recurrent neural network (RNN), a long short-term memory (LSTM) network, a gated recurrent unit (GRU), or the like.
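
For illustration, a gated recurrent model of the kind listed above could map sequences of hand keypoints to sign classes, as in the Keras sketch below; the frame count, keypoint count, and class count are assumed dimensions, not values from the disclosure.

```python
# Illustrative LSTM classifier for sign-language keypoint sequences.
import tensorflow as tf

NUM_FRAMES, NUM_KEYPOINTS, NUM_CLASSES = 30, 42, 100  # assumed dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_KEYPOINTS)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # GRU/SimpleRNN also fit
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```
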
  • the interactive digital assistant may generate a corresponding response for the corresponding one or more queries using the knowledge trained model in the textual form, upon conversion.
  • the interactive digital assistant may display the corresponding response to the first user upon converting the corresponding response in the textual form to the corresponding response in the sign language form using the one or more conversion techniques upon generating the corresponding response.
  • availability of such an option may help deaf and mute people as such people may not be able to speak and hear the response for the one or more queries asked by them.
  • the interaction module (70) may include a translation sub-module.
  • the translation sub-module may be configured to translate at least one of the one or more queries and the corresponding response of the interactive digital assistant, from a first language to a second language using a translation technique, based on a language used by the first user to narrate the corresponding one or more queries to the interactive digital assistant.
  • the translation technique may include a multilingual Bidirectional Encoder Representations from Transformers (BERT) technique.
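
Since BERT is an encoder-only model, the disclosure's multilingual BERT technique is not directly runnable as a translator on its own; the sketch below substitutes an off-the-shelf sequence-to-sequence translation model from Hugging Face as a stand-in, and the model choice and Hindi-to-English pair are assumptions.

```python
# Translation stand-in: a MarianMT model via the transformers pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hi-en")
result = translator("यह विषय कठिन है")  # "this topic is difficult"
print(result[0]["translation_text"])
```
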
  • the processing subsystem (20) also includes a recommendation module (80) operatively coupled to the interaction module (70).
  • the recommendation module (80) is configured to build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using the machine learning technique in real-time.
  • the training data includes at least one of the data set generated by the interaction module (70), the one or more queries of the corresponding first user, and standard content-related data.
  • the recommendation module (80) is also configured to generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience.
  • the one or more recommendations correspond to information for the first user based on the training data.
  • the one or more recommendations may correspond to one or more blogs, one or more videos, one or more top articles, and the like corresponding to the one or more topics which the first user may be interested in knowing more about.
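
Content-based filtering of this kind is commonly realized by vectorizing the content catalog and the user's engagement text and ranking by similarity; the scikit-learn sketch below is a minimal example with an invented catalog and query.

```python
# Content-based filtering sketch: recommend catalog items most similar to
# text derived from the user's queries and complex topics (all invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [
    "Intro to gradient descent",
    "Blog: backpropagation explained",
    "Video: overfitting and regularization",
]
user_text = "I keep replaying the backpropagation section"

vec = TfidfVectorizer()
matrix = vec.fit_transform(catalog + [user_text])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
top = scores.argsort()[::-1][:2]
print([catalog[i] for i in top])  # top-2 recommendations
```
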
  • the processing subsystem (20) may also include an invigilation module (as shown in FIG. 2) operatively coupled to the input module (40).
  • the invigilation module may be configured to invigilate the first user when the first user is experiencing an examination session, for identification of one or more activities being in disagreement with one or more regular exam protocols, based on the analysis of the multimedia data using the behavior identification trained model.
  • the processing subsystem (20) may also include a report generation module (as shown in FIG. 2) operatively coupled to the recommendation module (80).
  • the report generation module may be configured to generate a comprehensive personalized report corresponding to the first user upon generating the one or more recommendations via the recommendation module (80).
  • the comprehensive personalized report may include information about at least one of the first user, the teaching-learning session, a behavior of the first user when experiencing the teaching-learning session and an examination session, the one or more complex topics, the one or more queries, and the like.
  • the processing subsystem (20) may also include an alert generation module (as shown in FIG. 2) operatively coupled to the report generation module.
  • the alert generation module may be configured to generate an alert to at least one of the first user and a second user based on the information disclosed in the comprehensive personalized report generated by the report generation module.
  • the alert may correspond to information about the behavior and an experience of the first user with the teaching-learning session and the examination session.
  • the alert may be generated in one or more forms such as, but not limited to, a text message, an audio alert, an e-mail, or the like.
  • upon receiving the alert regarding specific portions of the teaching-learning session during which the attention level of the first user was low, the second user may amend the corresponding specific portions to increase the attention level of the first user. Also, upon receiving the alert regarding the one or more complex topics corresponding to the first user, the second user may arrange one or more special classes for the corresponding complex topics to improve the attention level and the knowledge level of the first user.
  • FIG. 2 is a block diagram representation of an exemplary embodiment of the system (10) to enhance the quality of the teaching-learning experience of FIG. 1 in accordance with an embodiment of the present disclosure.
  • suppose a school (90) is conducting online classes for some reason, in which the students (100) can take the classes from home (110). Then, to check the attention of the students (100) and enhance the quality of such online classes, the school (90) plans to use the system (10) proposed in the present disclosure by installing the system (10) on one or more of the school's laptops (120).
  • the system (10) includes the processing subsystem (20) hosted on the server (30).
  • the students (100) are made to register with the system (10) via the registration module (130) using personal or parents' laptops (140) by providing personal details.
  • the personal details of the students (100) are stored in the database (150) of the system (10). Then, the students (100) sign up for their online classes and attend the online classes as per the schedule. As the students (100) take the online classes, the students (100) may have to permit access to the cameras (160) of the personal or parents' laptops (140) which the students (100) are using, so that the video of the students (100) is constantly captured and given to the system (10) via the input module (40).
  • teachers (170) have pre-recorded videos, in which the teachers (170) explain subject-related concepts, uploaded on the system (10).
  • the timeline of the pre-recorded videos is tracked to check for the one or more first timeslots when the students (100) are not attentive, and the attention level of the students (100) is monitored via the monitoring module (50). Also, the number of times the students (100) re-played specific portions of the pre-recorded videos is identified for identifying the one or more complex topics via the complex topic identification module (60). Later, as the students (100) get doubts, the students (100) can ask them, and an instant response can be received via the interaction module (70), which uses an intelligent chatbot (180) to answer the doubts of the students (100).
  • the students (100) can also communicate in their mother tongue, as the interaction module (70) is provided with the translation sub-module (190) which translates the mother tongue into a machine-understandable language and vice versa.
  • the intelligent chatbot (180) is also capable of communicating in the speech form (192), the textual form (194) and the sign language form (196) with the students (100). Then, based on doubts frequently asked by the students (100), top articles and blogs are recommended to the students (100) regarding the content of the frequently asked doubts via the recommendation module (80). Occasionally, the students (100) are made to take certain tests, and during such tests, the students (100) are also invigilated via the invigilation module (200).
  • the comprehensive personalized report is generated for each student, as well as a report having information averaged over all the students (100), via the report generation module (210). Also, based on the information available in the comprehensive personalized report, the teachers (170) and the students (100) are alerted via the alert generation module (220), so that the content in the pre-recorded videos can be updated or modified, or special classes can be arranged for the students (100) to improve their attention and help them grasp the content more efficiently. Therefore, in this way, the quality of the online classes can be enhanced.
  • FIG. 3 is a block diagram of a teaching-learning quality enhancing computer or a teaching-learning quality enhancing server (240) in accordance with an embodiment of the present disclosure.
  • the teaching-learning quality enhancing server (240) includes processor(s) (250), and memory (260) operatively coupled to a bus (270).
  • the processor(s) (250), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
  • Computer memory elements may include any suitable memory device(s) for storing data and executable programs, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards, and the like.
  • Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts.
  • An executable program stored on any of the above-mentioned storage media may be executed by the processor(s) (250).
  • the memory (260) includes a plurality of subsystems stored in the form of executable program which instructs the processor (250) to perform method steps illustrated in FIG. 4.
  • the memory (260) includes the processing subsystem (20) of FIG. 1.
  • the processing subsystem (20) further has following modules: an input module (40), a monitoring module (50), a complex topic identification module (60), an interaction module (70), and a recommendation module (80).
  • the input module (40) is configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration.
  • the monitoring module (50) is configured to identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching -learning session, based on analysis of the multimedia data using a behavior identification trained model.
  • the monitoring module (50) is also configured to monitor an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots.
  • the complex topic identification module (60) is configured to track a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user.
  • the complex topic identification module (60) is also configured to identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking.
  • the complex topic identification module (60) is also configured to identify one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots.
  • the interaction module (70) is configured to generate a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics.
  • the interaction module (70) is also configured to generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time.
  • the interaction module (70) is also configured to build an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form.
  • the recommendation module (80) is configured to build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time.
  • the recommendation module (80) is also configured to generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience.
  • FIG. 4 (a) and FIG. 4 (b) are flow charts representing steps involved in a method (280) for enhancing quality of a teaching-learning experience in accordance with an embodiment of the present disclosure.
  • the method (280) includes receiving multimedia data corresponding to a first user experiencing a teaching-learning session upon registration in step 290.
  • receiving the multimedia data may include receiving the multimedia data by an input module (40).
  • the method (280) also includes identifying one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model in step 300.
  • identifying the one or more first timeslots may include identifying the one or more first timeslots by a monitoring module (50).
  • the method (280) includes monitoring an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots in step 310.
  • monitoring the attention level may include monitoring the attention level by the monitoring module (50).
  • the method (280) also includes tracking a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user in step 320.
  • tracking the timeline may include tracking the timeline by a complex topic identification module (60).
  • the method (280) also includes identifying one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking in step 330.
  • identifying the one or more second timeslots may include identifying the one or more second timeslots by the complex topic identification module (60).
  • the method (280) also includes identifying one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots in step 340.
  • identifying the one or more topics as the one or more complex topics may include identifying the one or more topics as the one or more complex topics by the complex topic identification module (60).
  • the method (280) also includes generating a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics in step 350.
  • generating the data set may include generating the data set by an interaction module (70).
  • the method (280) also includes generating a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time in step 360.
  • generating the knowledge trained model may include generating the knowledge trained model by the interaction module (70).
  • the method (280) also includes building an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form in step 370.
  • building the interactive digital assistant may include building the interactive digital assistant by the interaction module (70).
  • the method (280) also includes building a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time, wherein the training data comprises at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data in step 380.
  • building the recommendation trained model may include building the recommendation trained model by a recommendation module (80).
  • the method (280) also includes generating one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, wherein the one or more recommendations correspond to information for the first user based on the training data, thereby enhancing the quality of the teaching-learning experience in step 390.
  • generating the one or more recommendations may include generating the one or more recommendations by the recommendation module (80).
  • the method (280) may also include converting the one or more queries in a sign language form into the one or more queries in the textual form using one or more conversion techniques upon receiving the one or more queries in the sign language form.
  • converting the one or more queries in the sign language form into the one or more queries in the textual form may include converting the one or more queries in the sign language form into the one or more queries in the textual form by the interactive digital assistant.
  • the method (280) may also include generating a corresponding response for the corresponding one or more queries using the knowledge trained model in the textual form, upon conversion. In such embodiment, generating the corresponding response may include generating the corresponding response by the interactive digital assistant.
  • the method (280) may also include displaying the corresponding response to the first user upon converting the corresponding response in the textual form to the corresponding response in the sign language form using one or more conversion techniques upon generating the corresponding response.
  • displaying the corresponding response to the first user may include displaying the corresponding response to the first user by the interactive digital assistant.
  • Various embodiments of the present disclosure enable enhancing the quality of the teaching-learning experience of students, as the one or more queries of the students get answered instantly with the help of the interactive digital assistant.
  • as the interactive digital assistant could be a 3-D image of a tutor of the students in the 3-D space, it gives the students a realistic experience of interacting with the corresponding tutor for resolving the one or more queries.
  • as the interactive digital assistant is multilingual, the system is more flexible, since the students can now interact in any language based on the comfort of the students.
  • as the interactive digital assistant can also interact in the sign language, deaf and mute people can also take advantage of the system, thereby making the system more efficient and more reliable.
  • identifying specific portions of the teaching-learning session, during which the students are not attentive or have trouble understanding the content in the corresponding specific portions, helps the tutors amend the content in a way that grabs the attention of the students and makes it easy for the students to understand the same.
  • the system is more advantageous because the system provides the one or more recommendations for the students regarding the one or more topics which the students are most interested in knowing more about.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Resources & Organizations (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computational Linguistics (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Primary Health Care (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A system to enhance quality of a teaching-learning experience is provided. The system includes a processing subsystem which includes an input module (40) which receives multimedia data. The processing subsystem also includes a monitoring module (50) which identifies first timeslot(s) when a responsive behavior is negative and monitors an attention level of the first user. The processing subsystem also includes a complex topic identification module (60) which tracks a timeline, identifies second timeslot(s) when a teaching-learning session is re-played for a count greater than a threshold re-play count value, and identifies complex topic(s). The processing subsystem also includes an interaction module (70) which generates a data set, generates a knowledge trained model using the data set, and builds an interactive digital assistant. The processing subsystem also includes a recommendation module (80) which builds a recommendation trained model and generates recommendation(s), thereby enhancing the quality of the teaching-learning experience.

Description

SYSTEM AND METHOD FOR ENHANCING QUALITY OF A TEACHING-LEARNING EXPERIENCE
EARLIEST PRIORITY DATE:
This Application claims priority from a Complete patent application filed in India having Patent Application No. 202141050607, filed on November 03, 2021 and titled “SYSTEM AND METHOD FOR ENHANCING QUALITY OF A TEACHING-LEARNING EXPERIENCE”.
FIELD OF INVENTION
Embodiments of the present disclosure relate to education and upskilling, and more particularly to a system and method for enhancing quality of a teaching-learning experience.
BACKGROUND
In every industry, such as education, training, employment, knowledge improvement, technology, and so on, quality teaching and learning has become a major concern. This is due to growing worldwide competitiveness, greater demands for value for money, technological advancements, and other factors. As a result, teachers lead sessions to improve the knowledge of those who want to study. In addition, the contemporary educational environment necessitates the adoption of digital technology. However, determining the teaching quality of the teachers as well as the learners' learning aptitude is a difficult undertaking.
Multiple approaches have been implemented to address this. However, such approaches yield less accurate outcomes, causing teachers and students to make ineffective judgments based on the data acquired, and so making such approaches less reliable and less efficient. Also, such approaches fail to clarify doubts of the students instantly, thereby making the students proceed without fully comprehending the prior topic, resulting in the emergence of further doubts at a later stage. Hence, there is a need for an improved system and method to enhance quality of a teaching-learning experience which addresses the aforementioned issues.
BRIEF DESCRIPTION
In accordance with one embodiment of the disclosure, a system to enhance quality of a teaching-learning experience is provided. The system includes a processing subsystem hosted on a server. The processing subsystem is configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an input module. The input module is configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration. The processing subsystem also includes a monitoring module operatively coupled to the input module. The monitoring module is configured to identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model. The monitoring module is also configured to monitor an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots. Further, the processing subsystem also includes a complex topic identification module operatively coupled to the monitoring module. The complex topic identification module is configured to track a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user. The complex topic identification module is also configured to identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking. Further, the complex topic identification module is also configured to identify one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots. Furthermore, the processing subsystem also includes an interaction module operatively coupled to the complex topic identification module. The interaction module is configured to generate a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics. The interaction module is also configured to generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time. Further, the interaction module is also configured to build an interactive digital assistant using the knowledge trained model. The interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form. Furthermore, the processing subsystem also includes a recommendation module operatively coupled to the interaction module. The recommendation module is configured to build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time.
The training data includes at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data. The recommendation module is also configured to generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience. The one or more recommendations correspond to information for the first user based on the training data.
In accordance with another embodiment, a method for enhancing quality of a teaching-learning experience is provided. The method includes receiving multimedia data corresponding to a first user experiencing a teaching-learning session upon registration. The method also includes identifying one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model. Further, the method also includes monitoring an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots. Furthermore, the method also includes tracking a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user. Furthermore, the method also includes identifying one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking. Furthermore, the method also includes identifying one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots. Furthermore, the method also includes generating a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics. Furthermore, the method also includes generating a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time. Furthermore, the method also includes building an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form. Furthermore, the method also includes building a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time, wherein the training data comprises at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data. Furthermore, the method also includes generating one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, wherein the one or more recommendations correspond to information for the first user based on the training data, thereby enhancing the quality of the teaching-learning experience.
To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
FIG. 1 is a block diagram representation of a system to enhance quality of a teaching-learning experience in accordance with an embodiment of the present disclosure;
FIG. 2 is a block diagram representation of an exemplary embodiment of the system to enhance the quality of the teaching-learning experience of FIG. 1 in accordance with an embodiment of the present disclosure;
FIG. 3 is a block diagram of a teaching-learning quality enhancing computer or a teaching-learning quality enhancing server in accordance with an embodiment of the present disclosure; and
FIG. 4 (a) and FIG. 4 (b) are flow charts representing steps involved in a method for enhancing quality of a teaching-learning experience in accordance with an embodiment of the present disclosure.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure. The terms 'comprises', 'comprising', or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Embodiments of the present disclosure relate to a system to enhance quality of a teaching-learning experience. As used herein, the term "teaching-learning experience" is defined as an act of experiencing a teaching-learning session by a user. In one embodiment, the teaching-learning session may be a live training session, a live knowledge sharing session, a live classroom session, a recorded video about a subject matter, or the like. Therefore, during such a teaching-learning session, knowing how well content in the corresponding teaching-learning session has been absorbed by a viewer is important, which corresponds to knowing the quality of the teaching-learning experience by the viewer of the corresponding teaching-learning session. Upon knowing the quality, it can be enhanced by performing certain operations. Further, the system described hereafter in FIG. 1 is the system to enhance the quality of the teaching-learning experience.
FIG. 1 is a block diagram representation of a system (10) to enhance quality of a teaching-learning experience in accordance with an embodiment of the present disclosure. The system (10) includes a processing subsystem (20) hosted on a server (30). In one embodiment, the server (30) may include a cloud server. In another embodiment, the server (30) may include a local server. The processing subsystem (20) is configured to execute on a network (not shown in FIG. 1) to control bidirectional communications among a plurality of modules. In one embodiment, the network may include a wired network such as a local area network (LAN). In another embodiment, the network may include a wireless network such as wireless fidelity (Wi-Fi), Bluetooth, Zigbee, near field communication (NFC), infrared communication, radio frequency identification (RFID), or the like.
Basically, for the system (10) to be able to know and enhance the quality of the teaching-learning experience, the system (10) may have to receive certain inputs. Thus, the processing subsystem (20) includes an input module (40). The input module (40) is configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration. In one exemplary embodiment, the multimedia data may include one or more images, one or more videos, or the like. Also, in an embodiment, the first user may correspond to a student, a trainee, an employee, a team member, a conference attendee, or the like. Suppose the first user is experiencing the teaching-learning session on a virtual platform via a communication means. In one embodiment, the teaching-learning session may have content being either displayed as a presentation or explained by a second user in a predefined form and a predefined language. In one exemplary embodiment, the second user may include a teacher, a trainer, a leader, a manager, a mentor, or the like. Also, in an embodiment, the predefined form may include a speech form, a sign language form, a textual form, or the like. Further, in one embodiment, the predefined language may include English, Hindi, Kannada, or the like. In one embodiment, the communication means may include a voice call, a video call, a video conference, or the like. Therefore, the multimedia data may be captured via a multimedia data capturing device, wherein the multimedia data capturing device may be operatively coupled to the input module (40). In one embodiment, the multimedia data capturing device may include a mobile phone camera, a tablet camera, a laptop camera, a video camera, or the like. Thus, in one embodiment, upon capturing the multimedia data, the input module (40) may receive the same in real-time during the teaching-learning session. Alternatively, in one embodiment, the teaching-learning session may be a pre-recorded video with audio, and the first user may be watching the same on the corresponding virtual platform.
However, prior to receiving the multimedia data corresponding to the first user, the first user may have to be registered with the system (10). Thus, in an embodiment, the processing subsystem (20) may also include a registration module (as shown in FIG. 2) operatively coupled to the input module (40). The registration module may be configured to register the first user with the system (10) upon receiving a plurality of first user details via a first user device. In one embodiment, the plurality of first user details may include a first username, contact details, qualification details, occupation details, or the like. The plurality of first user details may be stored in a database (as shown in FIG. 2) of the system (10). In one embodiment, the database may include a local database or a cloud database. Further, in one embodiment, the first user device may include a mobile phone, a tablet, a laptop, or the like. Upon registration, the first user may choose to experience the teaching-learning session according to a predefined schedule upon choosing a predefined learning course. In one embodiment, the predefined schedule may include a daily one hour, weekly one hour, daily two hours, weekly two hours, two days in a week, or the like. Also, in one embodiment, the predefined learning course may correspond to a data science course, a digital marketing course, an Internet of Things (IoT) course, or the like.
Later, upon receiving the multimedia data, the system (10) may have to analyze the corresponding multimedia data for checking an attention of the first user and identifying one or more parts of the corresponding teaching-learning session during which the first user may have been distracted while experiencing the corresponding teaching-learning session. Thus, the processing subsystem (20) also includes a monitoring module (50) operatively coupled to the input module (40). The monitoring module (50) is configured to identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model. Basically, in an embodiment, the responsive behavior of the first user may be considered as positive when the responsive behavior may correspond to nodding the head as a response to understanding and attentively listening to the content from the corresponding teaching-learning session, a joyful facial expression, a surprised facial expression, or the like. Also, in one embodiment, the responsive behavior of the first user may be considered as negative when the responsive behavior may correspond to nodding the head as a response to not understanding the content in the corresponding teaching-learning session, a drowsy facial expression, a frowning facial expression, a sad facial expression, being distracted by a mobile phone, a confused facial expression, diversion in a gaze of the first user, facial expressions indicating an attention of the first user being away from the teaching-learning session, or the like. However, for the system (10) to be able to understand the responsive behavior of the first user, the multimedia data corresponding to the first user may have to be analyzed. Thus, in an embodiment, the multimedia data may be analyzed using a machine learning technique. As used herein, the term "machine learning" is defined as an application of artificial intelligence that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
In one embodiment, the machine learning technique may include an image processing technique, facial expression recognition technique, a sentiment analysis technique, a speech recognition technique, a natural language processing (NLP) technique, and the like. As used herein, the term “image processing” is defined as a method of manipulating an image to either enhance the quality or extract relevant information from it. More specifically, in an embodiment, the facial expression recognition technique may be used to identify one or more facial expressions of the first user, so that one or more emotions of the first user can be identified. As used herein, the term “facial expression recognition technique” is defined as a technology which uses biometric markers to detect emotions in human faces. Basically, the facial expression recognition technique performs a face detection operation, a facial landmark detection operation, and finally performs a facial expression and emotion classification operation. However, for the system (10) to implement the facial expression recognition technique, a model may have to be trained which may then be capable of identifying the one or more emotions of the first user.
Thus, in an embodiment, the behavior identification trained model may be generated by the monitoring module (50) by training the corresponding behavior identification trained model with a plurality of images and a plurality of videos of people from different countries, genders, sexual orientations, skin tones, and the like, so that the behavior identification trained model is not biased. Further, in an embodiment, one or more dimensions such as, but not limited to, face symmetry, facial contrast, a pose the face is in, a length of one or more of the face's attributes, a width of the one or more of the face's attributes, and the like may also be considered. In one exemplary embodiment, the one or more face's attributes may include eyes, nose, forehead, or the like. Therefore, upon generating the behavior identification trained model, the multimedia data may be analyzed for identifying the responsive behavior of the first user. Furthermore, as the system (10) may be analyzing the multimedia data, there is a possibility of a data privacy issue and a data security issue, and hence to address such issues, the system (10) may use a batch processing technique, so that none of the multimedia data is stored in a database of the system (10) after analyzing or processing the corresponding multimedia data. In a specific embodiment, along with the usage of the batch processing technique, a built-in clustering may be implemented using modern equipment or one or more modern services.
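By way of a non-limiting illustration, a minimal Python sketch of one possible form of such a behavior identification trained model is shown below, assuming 48x48 grayscale face crops classified into seven emotion classes; the layer sizes, class count, and the `train_images`/`train_labels` names are hypothetical placeholders, and the disclosure does not fix a particular architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional classifier over face crops (assumed 48x48 grayscale).
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),  # assumed 7 emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=10)  # hypothetical diverse data
```

In line with the batch processing technique described above, the captured frames would be classified in batches and discarded after inference rather than persisted.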
Subsequently, as the responsive behavior of the first user is identified, the timeslots on the timeline associated with the teaching-learning session when the responsive behavior is negative may be identified, by simultaneously tracking the corresponding timeline and associating a first flag with the corresponding timeline. Thus, in an embodiment, the one or more timeslots on the timeline having the first flag may be identified as the one or more first timeslots associated with the teaching-learning session when the responsive behavior of the first user is negative. Further, in one embodiment, the corresponding one or more first timeslots may then be provided with an accessible link for the corresponding first user to click the corresponding accessible link and watch the corresponding part of the teaching-learning session to make learning complete and comprehensive. Basically, throughout the timeline associated with the teaching-learning session, one or more accessible links may thus be present. Later, based on a count of the one or more accessible links, an attention level of the first user can be identified and hence monitored. Thus, the monitoring module (50) is also configured to monitor the attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots. In one embodiment, the predefined time period may include a complete timeline associated with the teaching-learning session, a predefined percentage of the complete timeline, or the like. Also, in an embodiment, the attention level may correspond to a high attention level when the count of the corresponding one or more accessible links may be less than or equal to a threshold link count value. In an alternative embodiment, the attention level may correspond to a low attention level when the count of the corresponding one or more accessible links may be greater than the threshold link count value.
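A rough sketch of this attention-level logic follows, assuming one accessible link is attached per first-flagged timeslot and an arbitrary threshold link count value of three; the link format and threshold are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class SessionTimeline:
    # Timeslots (seconds from session start) given a first flag for negative behavior.
    first_flagged_slots: list = field(default_factory=list)

    def accessible_links(self):
        # One accessible link per first-flagged timeslot, for re-watching that part.
        return [f"session://replay?t={t}" for t in self.first_flagged_slots]

def attention_level(timeline, threshold_link_count=3):
    # High attention when the link count stays at or below the threshold.
    count = len(timeline.accessible_links())
    return "high" if count <= threshold_link_count else "low"

timeline = SessionTimeline(first_flagged_slots=[120, 345, 610, 905])
print(attention_level(timeline))  # -> "low" under the assumed threshold of 3
```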
Additionally, as used herein, the term "sentiment analysis technique" refers to a technique which identifies a sentiment or an emotion of people who are a part of a session conducted online or offline by processing one or more image frames, one or more video frames, an audio, a speech, a text, and the like. Further, as used herein, the term "speech recognition technique" is defined as a methodology and a technology that enables the recognition and translation of spoken language into text by computers. Furthermore, as used herein, the term "natural language processing" is defined as a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. Here, historic data may be used to train a model that may be used for the natural language processing operation, wherein the historic data may include a word dictionary of a plurality of words with meanings. In addition, there is also a possibility that the first user may watch the one or more parts of the teaching-learning session a certain number of times, as the first user may have found the corresponding one or more parts a little difficult to understand or the content in the corresponding one or more parts may have one or more complex topics. Thus, in order to identify the one or more complex topics, the processing subsystem (20) also includes a complex topic identification module (60) operatively coupled to the monitoring module (50). The complex topic identification module (60) is configured to track the timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user. In one embodiment, the predefined tracking technique may refer to a technique which keeps checking for an occurrence of an event associated with the timeline of the corresponding teaching-learning session. In one embodiment, the event may include re-playing of the corresponding teaching-learning session at a particular timeslot on the corresponding timeline. The complex topic identification module (60) is also configured to identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking. Basically, while tracking the corresponding timeline, a second flag may be associated with the timeline based on the identification of the one or more second timeslots. Thus, the one or more timeslots having the second flag may be identified as the one or more second timeslots. Also, the count for which one or more parts of the teaching-learning session may be re-played may be a count of one or more second flags on the timeline. Further, the complex topic identification module (60) is also configured to identify one or more topics covered in the corresponding teaching-learning session as the one or more complex topics based on the identification of the corresponding one or more second timeslots. Suppose a first part of the teaching-learning session possesses a count of the one or more second flags of four and a second part of the teaching-learning session possesses a count of the one or more second flags of ten. Then, as the count for the second part is greater than the count for the first part, the second part may possess the one or more complex topics in comparison to the one or more topics covered in the first part of the corresponding teaching-learning session.
Thus, based on the count of the one or more second flags associated with the one or more second timeslots corresponding to the one or more parts of the teaching-learning session, the one or more complex topics that may be covered in the corresponding one or more parts may be identified.
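The replay-count logic can be illustrated with the following sketch, where the timeslot identifiers, the topic mapping, and the threshold re-play count value of three are all hypothetical values chosen for the example.

```python
from collections import Counter

def complex_topics(replay_events, topic_by_slot, threshold_replay_count=3):
    """Map replay events (second flags) to topics identified as complex.

    replay_events: iterable of timeslot ids, one entry per re-play (second flag).
    topic_by_slot: dict mapping a timeslot id to the topic covered in that part.
    """
    second_flag_counts = Counter(replay_events)
    return {
        topic_by_slot[slot]
        for slot, count in second_flag_counts.items()
        if count > threshold_replay_count
    }

topic_by_slot = {"t1": "linear regression", "t2": "backpropagation"}
events = ["t1"] * 4 + ["t2"] * 10  # first part re-played 4 times, second 10 times
print(complex_topics(events, topic_by_slot))  # both exceed the assumed threshold
```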
Basically, as the first user may be experiencing the teaching-learning session, the first user may get one or more queries which, if resolved at the same time, would be better for completely understanding the one or more topics rather than getting them resolved at a later stage. Thus, the processing subsystem (20) also includes an interaction module (70) operatively coupled to the complex topic identification module (60). The interaction module (70) is configured to generate a data set by converting the content in the corresponding teaching-learning session from the predefined form to the textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics. The interaction module (70) is also configured to generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using the machine learning technique in real-time. Further, the interaction module (70) is also configured to build an interactive digital assistant using the knowledge trained model. The interactive digital assistant is adapted to interact with the first user based on the one or more queries of the corresponding first user, in the predefined form.
In one embodiment, the textual data generation technique may refer to a speech to text conversion technique, wherein the speech to text conversion technique works similar to that of the speech recognition technique as defined above. Basically, the text extracted from the teaching-learning session may be used as a corpus or the data set to train the knowledge trained model. Later, the corresponding knowledge trained model may be used to generate the interactive digital assistant which is intelligent enough to answer the one or more queries of the first user in real-time. In one embodiment, the interactive digital assistant may be a chatbot. Also, in one embodiment, the interactive digital assistant may be a three-dimensional (3-D) image of the second user displayed in a 3-D space, wherein the 3-D image of the second user is generated using an augmented reality (AR) technology. As used herein, the term "augmented reality" is defined as a system that incorporates three basic features such as a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects.
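One simple way such a knowledge trained model could be realized, sketched below under the assumption of a retrieval-style assistant over the transcribed corpus (the disclosure does not mandate this particular approach), is TF-IDF matching of a learner's query against the transcript sentences; the corpus sentences here are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: sentences transcribed from the session via speech-to-text.
corpus = [
    "Gradient descent updates weights in the direction of the negative gradient.",
    "The learning rate controls the size of each update step.",
    "Overfitting occurs when a model memorizes the training data.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(corpus)  # stands in for the trained model

def answer(query):
    # Return the transcript sentence most similar to the learner's query.
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, corpus_vectors)[0]
    return corpus[scores.argmax()]

print(answer("what does the learning rate do?"))
```

A production chatbot would layer dialogue handling and generation on top of this retrieval step; the sketch shows only the query-to-corpus matching.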
In one embodiment, when the predefined form may be the speech form, then the interactive digital assistant may convert the one or more queries in the speech form into the one or more queries in the textual form using the speech to text conversion technique. Then, the interactive digital assistant may generate the response for the corresponding one or more queries using the knowledge trained model in the textual form and then convert back to the speech form and narrate the same to the first user. Basically, availability of such an option may help visually impaired people as such people may not be able to see the response for the one or more queries asked by them. Further, in one embodiment, when the predefined form may be the sign language form, the interactive digital assistant may convert the one or more queries in the sign language form into the one or more queries in the textual form using one or more conversion techniques upon receiving the one or more queries in the sign language form. In one embodiment, the one or more conversion techniques may include a recurrent neural network (RNN), a Long short-term memory (LSTM), a Gated recurrent unit (GRU), or the like. Later, the interactive digital assistant may generate a corresponding response for the corresponding one or more queries using the knowledge trained model in the textual form, upon conversion. Further, the interactive digital assistant may display the corresponding response to the first user upon converting the corresponding response in the textual form to the corresponding response in the sign language form using the one or more conversion techniques upon generating the corresponding response. Basically, availability of such an option may help deaf and mute people as such people may not be able to speak and hear the response for the one or more queries asked by them.
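A minimal sketch of the LSTM-based sign-language-to-text conversion mentioned above might look as follows, assuming pre-extracted hand-landmark sequences as input; the frame count, feature size, gloss vocabulary, and the `keypoint_sequences`/`gloss_labels` names are hypothetical, and a GRU or plain RNN layer could be substituted as the disclosure contemplates.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed shapes: sequences of 30 frames, 42 hand-landmark coordinates per frame,
# classified into a small vocabulary of 50 sign glosses.
NUM_FRAMES, NUM_FEATURES, NUM_GLOSSES = 30, 42, 50

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    layers.LSTM(64, return_sequences=True),  # per the LSTM option; layers.GRU
    layers.LSTM(32),                         # could be swapped in the same way
    layers.Dense(NUM_GLOSSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(keypoint_sequences, gloss_labels, epochs=20)  # hypothetical data
```

The reverse direction (text to sign language) would map generated text to rendered sign animations, which this sketch does not cover.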
In one exemplary embodiment, the interaction module (70) may include a translation sub-module. The translation sub-module may be configured to translate at least one of the one or more queries and the corresponding response of the interactive digital assistant, from a first language to a second language using a translation technique, based on a language used by the first user to narrate the corresponding one or more queries to the interactive digital assistant. In one embodiment, the translation technique may include a multilingual Bidirectional Encoder Representations from Transformers (BERT) technique. Basically, as the first user may be trying to understand the content in the teaching-learning session by asking the one or more queries, the system (10) may identify one or more topics which the first user may be interested to know more about in depth, and hence recommend the first user with additional information about the same. Thus, the processing subsystem (20) also includes a recommendation module (80) operatively coupled to the interaction module (70). The recommendation module (80) is configured to build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using the machine learning technique in real-time. The training data includes at least one of the data set generated by the interaction module (70), the one or more queries of the corresponding first user, and standard content-related data. The recommendation module (80) is also configured to generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience. The one or more recommendations correspond to information for the first user based on the training data. In one embodiment, the one or more recommendations may correspond to one or more blogs, one or more videos, one or more top articles, and the like corresponding to the one or more topics which the first user may be interested in knowing more about.
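The content-based filtering performed by the recommendation module (80) can be sketched as below, where the candidate catalog of blogs, videos, and articles and the learner profile text (built from the queries and the session data set) are illustrative placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical candidate items described by short text features.
catalog = {
    "Blog: intuition for gradient descent": "gradient descent optimization intuition",
    "Video: regularization explained": "overfitting regularization dropout",
    "Article: learning rate schedules": "learning rate decay warmup schedules",
}

# Profile text assembled from the learner's queries and the session data set.
profile = "why does the learning rate matter for gradient descent"

vectorizer = TfidfVectorizer()
item_vectors = vectorizer.fit_transform(catalog.values())
profile_vector = vectorizer.transform([profile])

# Rank items by similarity between the learner profile and item descriptions.
scores = cosine_similarity(profile_vector, item_vectors)[0]
ranked = sorted(zip(catalog.keys(), scores), key=lambda x: x[1], reverse=True)
for title, score in ranked[:2]:  # top recommendations
    print(f"{score:.2f}  {title}")
```

This is content-based filtering in the narrow sense: items are matched to the user's own content profile rather than to other users' behavior.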
Later, in order to know a performance of the first user, the first user may have to experience an examination session. Further, as the first user may be experiencing the examination session, the first user may have to be invigilated to check if the first user is involved in any kind of malpractice. Thus, in one embodiment, the processing subsystem (20) may also include an invigilation module (as shown in FIG. 2) operatively coupled to the input module (40). The invigilation module may be configured to invigilate the first user when the first user is experiencing the examination session, for identification of one or more activities being in disagreement with one or more regular exam protocols, based on the analysis of the multimedia data using the behavior identification trained model. Moreover, for the second user to understand an overall performance of the first user, the processing subsystem (20) may also include a report generation module (as shown in FIG. 2) operatively coupled to the recommendation module (80). The report generation module may be configured to generate a comprehensive personalized report corresponding to the first user upon generating the one or more recommendations via the recommendation module (80). The comprehensive personalized report may include information about at least one of the first user, the teaching-learning session, a behavior of the first user when experiencing the teaching-learning session and an examination session, the one or more complex topics, the one or more queries, and the like.
Also, there could be a requirement of alerting the first user or the second user based on the information disclosed in the report under certain circumstances. Thus, in one embodiment, the processing subsystem (20) may also include an alert generation module (as shown in FIG. 2) operatively coupled to the report generation module. The alert generation module may be configured to generate an alert to at least one of the first user and a second user based on the information disclosed in the comprehensive personalized report generated by the report generation module. The alert may correspond to information about the behavior and an experience of the first user with the teaching-learning session and the examination session. In one embodiment, the alert may be generated in one or more forms such as, but not limited to, a text message, an audio alert, an e-mail, or the like. Basically, upon receiving the alert regarding the attention level of the first user being low at specific portions of the teaching-learning session, the second user may amend the corresponding specific portions to increase the attention level of the first user. Also, upon receiving the alert regarding the one or more complex topics corresponding to the first user, the second user may arrange for one or more special classes for the corresponding complex topics to improve the attention level and the knowledge level of the first user.
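A threshold-based sketch of this alert generation logic is given below, assuming a simple dictionary schema for the comprehensive personalized report; the schema, the 0.5 attention threshold, and the recipient labels are hypothetical.

```python
def generate_alerts(report, low_attention_threshold=0.5):
    """Derive alerts from a comprehensive personalized report (assumed schema)."""
    alerts = []
    if report["attention_ratio"] < low_attention_threshold:
        alerts.append(("second_user",
                       "Attention low at: " + ", ".join(report["low_attention_portions"])))
    if report["complex_topics"]:
        alerts.append(("second_user",
                       "Consider special classes for: " + ", ".join(report["complex_topics"])))
    return alerts

report = {
    "attention_ratio": 0.4,
    "low_attention_portions": ["12:30-15:00"],
    "complex_topics": ["backpropagation"],
}
for recipient, message in generate_alerts(report):
    print(recipient, "->", message)  # would be sent as text, audio, or e-mail
```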
FIG. 2 is a block diagram representation of an exemplary embodiment of the system (10) to enhance the quality of the teaching-learning experience of FIG. 1 in accordance with an embodiment of the present disclosure. Suppose a school (90) is conducting online classes due to some reason, in which the students (100) can take the classes from home (110). Then, to check the attention of the students (100) and enhance the quality of such online classes, the school (90) plans to use the system (10) proposed in the present disclosure, by installing the system (10) in one or more school's laptops (120). The system (10) includes the processing subsystem (20) hosted on the server (30). The students (100) are made to register with the system (10) via the registration module (130) using personal or parent's laptops (140) by providing personal details. The personal details of the students (100) are stored in the database (150) of the system (10). Then, the students (100) sign up for their online classes and attend the online classes as per the schedule. As the students (100) take the online classes, the students (100) may have to permit access to the cameras (160) of the personal or parent's laptops (140) which the students (100) are using, so that the video of the students (100) is constantly captured and given to the system (10) via the input module (40). Suppose the teachers (170) have pre-recorded videos in which the teachers (170) are explaining subject-related concepts and have uploaded them on the system (10). Now, when the students (100) are watching the pre-recorded videos, the timeline of the pre-recorded videos is tracked to check for the one or more first timeslots when the students (100) are not attentive, and the attention level of the students (100) is monitored via the monitoring module (50). Also, the number of times the students (100) re-play specific portions of the pre-recorded videos is also identified for identifying the one or more complex topics via the complex topic identification module (60). Later, as the students (100) get some doubts, the students (100) can ask them, and an instant response can be received via the interaction module (70) which uses an intelligent chatbot (180) to answer the doubts of the students (100). Also, basically, during the interaction, the students (100) can also communicate in a mother tongue language, as the interaction module (70) is provided with the translation sub-module (190) which translates the mother tongue language into a machine-understandable language and vice versa. Further, the intelligent chatbot (180) is also capable of communicating in the speech form (192), the textual form (194) and the sign language form (196) with the students (100). Then, based on frequently asked doubts by the students (100), top articles and blogs are recommended to the students (100) regarding the content of the frequently asked doubts via the recommendation module (80). Occasionally, the students (100) are made to take certain tests, and during such tests, the students (100) are also invigilated via the invigilation module (200). Later, the comprehensive personalized report is also generated for each student, as well as a report having information which is an average for all the students (100), via the report generation module (210).
Also, based on the information available in the comprehensive personalized report, the teachers (170) and the students (100) are alerted via the alert generation module (220), so that the content in the pre-recorded videos can be updated or modified, or special classes can be arranged for the students (100), improving the attention of the students (100) and helping them grasp the content more efficiently. Therefore, this way the quality of the online classes can be enhanced.
FIG. 3 is a block diagram of a teaching-learning quality enhancing computer or a teaching-learning quality enhancing server (240) in accordance with an embodiment of the present disclosure. The teaching-learning quality enhancing server (240) includes processor(s) (250), and memory (260) operatively coupled to a bus (270). The processor(s) (250), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof. Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards and the like. Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. Executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (250).
The memory (260) includes a plurality of subsystems stored in the form of executable program which instructs the processor (250) to perform method steps illustrated in FIG. 4. The memory (260) includes a processing subsystem (20) of FIG. 1. The processing subsystem (20) further has the following modules: an input module (40), a monitoring module (50), a complex topic identification module (60), an interaction module (70), and a recommendation module (80).
The input module (40) is configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration. The monitoring module (50) is configured to identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model. The monitoring module (50) is also configured to monitor an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots.
The complex topic identification module (60) is configured to track a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user. The complex topic identification module (60) is also configured to identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking. The complex topic identification module (60) is also configured to identify one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots. The interaction module (70) is configured to generate a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics. The interaction module (70) is also configured to generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time. The interaction module (70) is also configured to build an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form. The recommendation module (80) is configured to build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time. The recommendation module (80) is also configured to generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience.
FIG. 4 (a) and FIG. 4 (b) are flow charts representing steps involved in a method (280) for enhancing quality of a teaching-learning experience in accordance with an embodiment of the present disclosure. The method (280) includes receiving multimedia data corresponding to a first user experiencing a teaching-learning session upon registration in step 290. In one embodiment, receiving the multimedia data may include receiving the multimedia data by an input module (40).
The method (280) also includes identifying one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model in step 300. In one embodiment, identifying the one or more first timeslots may include identifying the one or more first timeslots by a monitoring module (50).
Furthermore, the method (280) includes monitoring an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots in step 310. In one embodiment, monitoring the attention level may include monitoring the attention level by the monitoring module (50).
Furthermore, the method (280) also includes tracking a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user in step 320. In one embodiment, tracking the timeline may include tracking the timeline by a complex topic identification module (60).
Furthermore, the method (280) also includes identifying one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking in step 330. In one embodiment, identifying the one or more second timeslots may include identifying the one or more second timeslots by the complex topic identification module (60).
Furthermore, the method (280) also includes identifying one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots in step 340. In one embodiment, identifying the one or more topics as the one or more complex topics may include identifying the one or more topics as the one or more complex topics by the complex topic identification module (60).
Furthermore, the method (280) also includes generating a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics in step 350. In one embodiment, generating the data set may include generating the data set by an interaction module (70).
Furthermore, the method (280) also includes generating a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time in step 360. In one embodiment, generating the knowledge trained model may include generating the knowledge trained model by the interaction module (70). Furthermore, the method (280) also includes building an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form in step 370. In one embodiment, building the interactive digital assistant may include building the interactive digital assistant by the interaction module (70).
Furthermore, the method (280) also includes building a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time, wherein the training data comprises at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data in step 380. In one embodiment, building the recommendation trained model may include building the recommendation trained model by a recommendation module (80).
Furthermore, the method (280) also includes generating one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, wherein the one or more recommendations correspond to information for the first user based on the training data, thereby enhancing the quality of the teaching-learning experience in step 390. In one embodiment, generating the one or more recommendations may include generating the one or more recommendations by the recommendation module (80).
In one exemplary embodiment, the method (280) may also include converting the one or more queries in a sign language form into the one or more queries in the textual form using one or more conversion techniques upon receiving the one or more queries in the sign language form. In such embodiment, converting the one or more queries in the sign language form into the one or more queries in the textual form may include converting the one or more queries in the sign language form into the one or more queries in the textual form by the interactive digital assistant. Further, in one exemplary embodiment, the method (280) may also include generating a corresponding response for the corresponding one or more queries using the knowledge trained model in the textual form, upon conversion. In such embodiment, generating the corresponding response may include generating the corresponding response by the interactive digital assistant. Furthermore, in one exemplary embodiment, the method (280) may also include displaying the corresponding response to the first user upon converting the corresponding response in the textual form to the corresponding response in the sign language form using one or more conversion techniques upon generating the corresponding response. In such embodiment, displaying the corresponding response to the first user may include displaying the corresponding response to the first user by the interactive digital assistant.
Various embodiments of the present disclosure enable enhancing the quality of the teaching-learning experience of students, as the one or more queries of the students get answered instantly with the help of the interactive digital assistant. Also, as the interactive digital assistant could be a 3-D image of a tutor of the students in the 3-D space, it gives a realistic experience to the students of interacting with the corresponding tutor for resolving the one or more queries. Further, as the interactive digital assistant is multilingual, it makes the system more flexible, as the students can now interact in any language based on the comfort of the students.
Also, as the interactive digital assistant can also interact in the sign language, deaf and mute people can also take advantage of the system, thereby making the system more efficient and more reliable. Further, identifying specific portions of the teaching-learning session, during which the students are not attentive or are having trouble understanding the content in the corresponding specific portions, helps the tutors to amend the content in a way to grab the attention of the students and make it easy for the students to understand the same. Also, the system is more advantageous because the system provides the one or more recommendations for the students regarding the one or more topics which the students are most interested in.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.

Claims

I/WE CLAIM:
1. A system (10) to enhance quality of a teaching-learning experience, wherein the system (10) comprises: a processing subsystem (20) hosted on a server (30), and configured to execute on a network to control bidirectional communications among a plurality of modules comprising: an input module (40) configured to receive multimedia data corresponding to a first user experiencing a teaching-learning session upon registration; a monitoring module (50) operatively coupled to the input module (40), wherein the monitoring module (50) is configured to: identify one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model; and monitor an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots; a complex topic identification module (60) operatively coupled to the monitoring module (50), wherein the complex topic identification module (60) is configured to: track a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user; identify one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking; and identify one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots; an interaction module (70) operatively coupled to the complex topic identification module (60), wherein the interaction module (70) is configured to: generate a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics; generate a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time; and build an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form; and a recommendation module (80) operatively coupled to the interaction module (70), wherein the recommendation module (80) is configured to: build a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using the machine learning technique in real-time, wherein the training data comprises at least one of the data set generated by the interaction module (70), the one or more queries of the corresponding first user, and standard content-related data; and generate one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, thereby enhancing the quality of the teaching-learning experience, wherein the one or more recommendations correspond to information for the first user based on the training data.
2. The system (10) as claimed in claim 1, wherein the predefined form comprises a speech form (192), a sign language form (196), or the textual form (194).
3. The system (10) as claimed in claim 1, wherein the interactive digital assistant comprises a three-dimensional image of a second user displayed in a three-dimensional space, wherein the three-dimensional image of the second user is generated using an augmented reality technology.
4. The system (10) as claimed in claim 1, wherein the interactive digital assistant is adapted to: convert the one or more queries in a sign language form into the one or more queries in the textual form using one or more conversion techniques upon receiving the one or more queries in the sign language form; generate a corresponding response for the corresponding one or more queries using the knowledge trained model in the textual form, upon conversion; and display the corresponding response to the first user upon converting the corresponding response in the textual form to the corresponding response in the sign language form using one or more conversion techniques upon generating the corresponding response.
5. The system (10) as claimed in claim 1, wherein the interaction module (70) comprises a translation sub-module (190) configured to translate at least one of the one or more queries and the corresponding response of the interactive digital assistant, from a first language to a second language using a translation technique, based on a language used by the first user to narrate the corresponding one or more queries to the interactive digital assistant.
6. The system (10) as claimed in claim 1, wherein the processing subsystem (20) comprises an invigilation module (200) operatively coupled to the input module (40), wherein the invigilation module (200) is configured to invigilate the first user when the first user is experiencing an examination session, for identification of one or more activities being in disagreement with one or more regular exam protocols, based on the analysis of the multimedia data using the behavior identification trained model.
7. The system (10) as claimed in claim 1, wherein the processing subsystem (20) comprises a report generation module (210) operatively coupled to the recommendation module (80), wherein the report generation module (210) is configured to generate a comprehensive personalized report corresponding to the first user upon generating the one or more recommendations via the recommendation module (80), wherein the comprehensive personalized report comprises information about at least one of the first user, the teaching-learning session, a behavior of the first user when experiencing the teaching-learning session and an examination session, the one or more complex topics, and the one or more queries.
8. The system (10) as claimed in claim 7, wherein the processing subsystem (20) comprises an alert generation module (220) operatively coupled to the report generation module, wherein the alert generation module (220) is configured to generate an alert to at least one of the first user and a second user based on the information disclosed in the comprehensive personalized report generated by the report generation module, wherein the alert corresponds to information about the behavior and an experience of the first user with the teaching-learning session and the examination session.
9. A method (280) for enhancing quality of a teaching-learning experience, comprising:
receiving, by an input module (40), multimedia data corresponding to a first user experiencing a teaching-learning session upon registration (290);
identifying, by a monitoring module (50), one or more first timeslots associated with the teaching-learning session when a responsive behavior of the first user is negative in response to the corresponding teaching-learning session, based on analysis of the multimedia data using a behavior identification trained model (300);
monitoring, by the monitoring module (50), an attention level of the first user over a predefined time period associated with the teaching-learning session based on the identification of the corresponding one or more first timeslots (310);
tracking, by a complex topic identification module (60), a timeline associated with the teaching-learning session using a predefined tracking technique upon monitoring the attention level of the first user (320);
identifying, by the complex topic identification module (60), one or more second timeslots associated with the teaching-learning session when the corresponding teaching-learning session is re-played for a count greater than a threshold re-play count value based on the tracking (330);
identifying, by the complex topic identification module (60), one or more topics covered in the corresponding teaching-learning session as one or more complex topics based on the identification of the corresponding one or more second timeslots (340);
generating, by an interaction module (70), a data set by converting content in the corresponding teaching-learning session from a predefined form to a textual form using a textual data generation technique upon identifying the one or more topics as the one or more complex topics (350);
generating, by the interaction module (70), a knowledge trained model by training the corresponding knowledge trained model with the corresponding data set using a machine learning technique in real-time (360);
building, by the interaction module (70), an interactive digital assistant using the knowledge trained model, wherein the interactive digital assistant is adapted to interact with the first user based on one or more queries of the corresponding first user, in the predefined form (370);
building, by a recommendation module (80), a recommendation trained model by training the corresponding recommendation trained model with training data by performing content-based filtering using a machine learning technique in real-time, wherein the training data comprises at least one of the data set generated by the interaction module, the one or more queries of the corresponding first user, and standard content-related data (380); and
generating, by the recommendation module (80), one or more recommendations for the first user using the recommendation trained model when the first user is experiencing the teaching-learning session, wherein the one or more recommendations correspond to information for the first user based on the training data, thereby enhancing the quality of the teaching-learning experience (390).
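One concrete, non-authoritative way to realise the content-based filtering of step (380) is TF-IDF vectors over candidate materials plus the learner's query history, ranked by cosine similarity. The use of scikit-learn here is an assumption; the claims name only content-based filtering generically.

    # Content-based filtering sketch: recommend the candidate most similar
    # to a profile built from the user's queries. Requires scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    candidates = [
        "gradient descent walkthrough with worked examples",
        "history of the printing press",
        "backpropagation explained step by step",
    ]
    user_profile = "queries about backpropagation and gradient updates"

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(candidates + [user_profile])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(scores, candidates), reverse=True)
    print(ranked[0][1])   # the most relevant recommendation for this user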
10. The method (280) as claimed in claim 9, further comprising:
converting, by the interactive digital assistant, the one or more queries in a sign language form into the one or more queries in the textual form using one or more conversion techniques upon receiving the one or more queries in the sign language form;
generating, by the interactive digital assistant, a corresponding response for the corresponding one or more queries using the knowledge trained model in the textual form, upon conversion; and
displaying, by the interactive digital assistant, the corresponding response to the first user upon converting the corresponding response in the textual form to the corresponding response in the sign language form using one or more conversion techniques upon generating the corresponding response.
PCT/IB2022/050037 2021-11-03 2022-01-04 System and method for enhancing quality of a teaching-learning experience WO2023079370A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202141050607 2021-11-03

Publications (1)

Publication Number Publication Date
WO2023079370A1 (en) 2023-05-11

Family

ID=86240788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/050037 WO2023079370A1 (en) 2021-11-03 2022-01-04 System and method for enhancing quality of a teaching-learning experience

Country Status (1)

Country Link
WO (1) WO2023079370A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9852132B2 (en) * 2014-11-25 2017-12-26 Chegg, Inc. Building a topical learning model in a content management system
CN110148318B (en) * 2019-03-07 2021-09-07 上海晨鸟信息科技有限公司 Digital teaching assistant system, information interaction method and information processing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117540108A (en) * 2024-01-10 2024-02-09 人民卫生电子音像出版社有限公司 Intelligent recommendation answering system based on examination point data distributed summary
CN117540108B (en) * 2024-01-10 2024-04-02 人民卫生电子音像出版社有限公司 Intelligent recommendation answering system based on examination point data distributed summary

Similar Documents

Publication Publication Date Title
US10643487B2 (en) Communication and skills training using interactive virtual humans
US20220319181A1 (en) Artificial intelligence (ai)-based system and method for managing education of students in real-time
Lee et al. Multimodality of ai for education: Towards artificial general intelligence
Alepis et al. Object-oriented user interfaces for personalized mobile learning
Rickley et al. Effects of video lecture design and production quality on student outcomes: A quasi-experiment exploiting change in online course development principles
Martiniello et al. Artificial intelligence for students in postsecondary education: A world of opportunity
Huang et al. Investigating an application of speech‐to‐text recognition: a study on visual attention and learning behaviour
Hermawati et al. Assistive technologies for severe and profound hearing loss: Beyond hearing aids and implants
Raca Camera-based estimation of student's attention in class
Chemnad et al. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review
Padrón-Rivera et al. Identification of action units related to affective states in a tutoring system for mathematics
Ochoa Multimodal systems for automated oral presentation feedback: A comparative analysis
Mina et al. Leveraging education through artificial intelligence virtual assistance: a case study of visually impaired learners
WO2023079370A1 (en) System and method for enhancing quality of a teaching-learning experience
Alrashidi Synergistic integration between internet of things and augmented reality technologies for deaf persons in e-learning platform
Campbell et al. The development of the Academic Incivility Scale for higher education
Zirzow Technology use by teachers of deaf and hard-of-hearing students
Barmaki Gesture assessment of teachers in an immersive rehearsal environment
Kaplan-Rakowski et al. Emerging Technologies for Blind and Visually Impaired Learners: A Case Study
Ahmad et al. Towards a Low‐Cost Teacher Orchestration Using Ubiquitous Computing Devices for Detecting Student’s Engagement
Suvorov Technology and listening in SLA
Utami et al. A Brief Study of The Use of Pattern Recognition in Online Learning: Recommendation for Assessing Teaching Skills Automatically Online Based
Farsani et al. Students’ visual attention during teacher’s talk as a predictor of mathematical achievement: a cautionary tale
Hossen et al. Attention monitoring of students during online classes using XGBoost classifier
OLAFARE Artificial intelligence in education: challenges and way forward

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22889517; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22889517; Country of ref document: EP; Kind code of ref document: A1)