
WO2022069929A1 - System and method for creative learning - Google Patents


Info

Publication number
WO2022069929A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
narrative
subsystem
learner
keywords
Prior art date
Application number
PCT/IB2020/060896
Other languages
French (fr)
Inventor
Sabarigirinathan Lakshminarayanan
Lakshmi KANTHA .B.M.
Manigandan MOHAN
Original Assignee
Sabarigirinathan Lakshminarayanan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sabarigirinathan Lakshminarayanan
Publication of WO2022069929A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/101 Collaborative creation, e.g. joint development of products or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass

Definitions

  • Embodiments of the present disclosure relate to a computerised learning approach and more particularly to a system and a method for creative learning.
  • One of such conventional learning strategies includes the use of narratives for learning purposes. Such narratives provide understanding of concepts and provide a scaffold for transferring knowledge within specific contexts and environments among various types of learners. However, such conventional learning strategies use generic narratives for all kinds of learners and do not provide personalized learning for different learners having different areas of interest. As a result, such conventional learning strategies involve less user interaction, which diverts the concentration and interest of the learner from learning and creates difficulty in remembering the subject matter.
  • a system for creative learning includes a learning input receiving subsystem configured to receive one or more learning associated inputs from a learner in one or more formats.
  • the system also includes a learning input identification subsystem operatively coupled to the learning input receiving subsystem.
  • the learning input identification subsystem is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique.
  • the learning input identification subsystem is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique.
  • the system also includes a learning input processing subsystem operatively coupled to the learning input identification subsystem.
  • the learning input processing subsystem is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance.
  • the learning input processing subsystem is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
  • the system also includes a narrative creation subsystem operatively coupled to the learning input processing subsystem.
  • the narrative creation subsystem is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the second set of one or more keywords.
  • the narrative creation subsystem is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
  • a method for creative learning includes receiving, by a learning input subsystem, one or more learning associated inputs from a learner in one or more formats.
  • the method also includes identifying, by a learning input identification subsystem, a language of the one or more learning associated inputs received using a natural language processing technique.
  • the method also includes determining, by the learning input identification subsystem, a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique.
  • the method also includes generating, by a learning input processing subsystem, a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance.
  • the method also includes matching, by the learning input processing subsystem, the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
  • the method also includes creating, by a narrative creation subsystem, a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the second set of one or more keywords.
  • the method also includes identifying, by the narrative creation subsystem, a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
  • FIG. 1 is a block diagram of a system for creative learning in accordance with an embodiment of the present disclosure
  • FIG. 2 is a block diagram of an embodiment of a system for creative learning of FIG. 1 in accordance with an embodiment of the present disclosure
  • FIG. 3 is a block diagram of an exemplary system for creative learning in big data in accordance with an embodiment of the present disclosure
  • FIG. 4 illustrates a block diagram of a computer or a server of FIG. 1 in accordance with an embodiment of the present disclosure
  • FIG. 5 is a flow chart representing the steps involved in a method for creative learning in accordance with the embodiment of the present disclosure.
  • Embodiments of the present disclosure relate to a system and a method for creative learning.
  • the system includes a learning input receiving subsystem configured to receive one or more learning associated inputs from a learner in one or more formats.
  • the system also includes a learning input identification subsystem operatively coupled to the learning input receiving subsystem.
  • the learning input identification subsystem is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique.
  • the learning input identification subsystem is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique.
  • the system also includes a learning input processing subsystem operatively coupled to the learning input identification subsystem.
  • the learning input processing subsystem is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance.
  • the learning input processing subsystem is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
  • the system also includes a narrative creation subsystem operatively coupled to the learning input processing subsystem.
  • the narrative creation subsystem is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the second set of one or more keywords.
  • the narrative creation subsystem is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
  • FIG. 1 is a block diagram of a system (100) for creative learning in accordance with an embodiment of the present disclosure.
  • the system (100) includes a learning input receiving subsystem (110) configured to receive one or more learning associated inputs from a learner in one or more formats.
  • the learning associated inputs may include a learning topic name, a learning concept detail, an article and the like.
  • the one or more formats may include at least one of a text format, an image format, a video format or a speech format.
  • the learning associated inputs may also include a learner’s photograph, photographs of the learner’s favourite environments, and the learner’s preferences and interests, received via an electronic device.
  • the learner’s favourite environment photographs are received from the learner for use in various scenes of the narratives. These photographs of the learner’s favourite environment activate spatial memory of the learner which enables better retention of the narratives.
  • the electronic device may include, but not limited to, a laptop, a desktop, a mobile phone, a tablet and the like.
  • the system (100) also includes a learning input identification subsystem (120) operatively coupled to the learning input receiving subsystem (110).
  • the learning input identification subsystem (120) is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique.
  • the natural language processing technique helps a machine to understand, interpret and manipulate human language.
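  • The language-identification step described above can be sketched as follows. The patent does not specify an algorithm, so this is a minimal illustrative sketch using stop-word profiles; the word lists and language codes here are assumptions, and a production system would use a trained language-identification model.

```python
# Minimal language-identification sketch using stop-word profiles.
# The stop-word lists below are illustrative assumptions, not part of
# the patented system.
STOPWORDS = {
    "en": {"the", "and", "is", "of", "to", "in", "a", "with"},
    "fr": {"le", "la", "et", "est", "de", "un", "une", "avec"},
}

def identify_language(text: str) -> str:
    """Return the language code whose stop words best match the input."""
    words = set(text.lower().split())
    scores = {lang: len(words & stops) for lang, stops in STOPWORDS.items()}
    return max(scores, key=scores.get)
```

In practice the matching-score heuristic would be replaced by a statistical or neural classifier, but the interface (text in, language code out) is the same.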
  • the learning input identification subsystem (120) also extracts at least one entity from the one or more learning associated inputs upon identification of the language using an entity extraction technique.
  • the at least one entity may include at least one of words, numbers, punctuations, characters, colours, places / environment and the like.
  • the entity extraction technique may include, but not limited to, an image processing technique, an optical character recognition (OCR) technique, a speech recognition technique and the like.
  • the image processing technique, the OCR technique and the speech recognition technique are utilized for extraction of the at least one entity in case of an image input, a text input and a speech input respectively.
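  • The format-dependent routing described above (image processing for image inputs, OCR for text inputs, speech recognition for speech inputs) can be sketched as a simple dispatch. The extractor callables here are hypothetical placeholders; a real system would invoke actual image-processing, OCR, and speech-recognition libraries.

```python
# Hypothetical dispatch of each input format to its extraction
# technique. The lambda bodies are placeholders standing in for real
# library calls.
def extract_entities(learning_input, input_format: str):
    extractors = {
        "image": lambda x: f"image-processing({x})",
        "text": lambda x: f"ocr({x})",
        "speech": lambda x: f"speech-recognition({x})",
    }
    try:
        return extractors[input_format](learning_input)
    except KeyError:
        raise ValueError(f"unsupported input format: {input_format}")
```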
  • the learning input identification subsystem (120) is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique.
  • the term ‘first set of one or more keywords’ is defined as the one or more keywords derived from the input information directly.
  • the keyword extraction technique may include automatic identification of key phrases, key terms or key segments which describe the subject of the input information.
  • the keyword extraction technique may include a supervised, semi-supervised or an unsupervised learning technique.
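  • As a minimal sketch of the unsupervised case, keywords can be ranked by frequency after dropping stop words. The stop-word list is an illustrative assumption; the patent leaves the extraction technique open, and richer approaches (e.g. TF-IDF or graph-based ranking) would be used in practice.

```python
from collections import Counter

# Frequency-based unsupervised keyword extraction: drop stop words,
# count the rest, return the top-ranked terms.
STOP = {"the", "a", "an", "of", "and", "is", "are", "in", "to", "on"}

def extract_keywords(text: str, top_n: int = 5) -> list[str]:
    words = [w.strip(".,;!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOP)
    return [w for w, _ in counts.most_common(top_n)]
```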
  • the system (100) also includes a learning input processing subsystem (130) operatively coupled to the learning input identification subsystem (120).
  • the learning input processing subsystem (130) is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance.
  • the term ‘second set of one or more keywords’ is defined as a new list of keywords which are created based on the first set of the one or more keywords for creation of a narrative.
  • the plurality of keyword generation parameters may include, but not limited to, a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of first set of keywords, or a combination thereof.
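  • The ‘predetermined number of matching letters’ parameter can be sketched as follows: for each first-set keyword, pick the most letter-similar word from a pool of familiar words. The pool here mirrors the cranial-bone example given later in the description; in the system described, it would instead come from the learning information database with geographical and cultural relevance.

```python
import difflib

# Pool of familiar words (illustrative; mirrors the cranial-bone
# example from the description).
FAMILIAR_WORDS = ["spencer", "tempo", "parrot", "ethan",
                  "occupied", "front", "crane"]

def generate_second_set(first_set: list[str]) -> dict[str, str]:
    """Map each first-set keyword to its most similar familiar word."""
    second = {}
    for keyword in first_set:
        match = difflib.get_close_matches(
            keyword.lower(), FAMILIAR_WORDS, n=1, cutoff=0.0)
        second[keyword] = match[0]
    return second
```

`difflib.SequenceMatcher` similarity stands in for the matching-letters heuristic; phonetic or rhyming parameters would need a pronunciation-aware comparison instead.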
  • the learning input processing subsystem is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
  • the term ‘learning information database’ is defined as a storage repository which stores the learning associated information and the learning preference and learning interest information collected from the learner.
  • the learning information database may store names of one or more popular places/environment, one or more popular things, one or more popular animals, one or more natural entities, one or more popular characters of movies, one or more popular plays, one or more popular comic books and the like.
  • the learning preference and interest information may include one or more preferences and interests associated with the learner fetched from one or more social media platforms.
  • the learning preference and interest information may include one or more preferences collected from the learner via a questionnaire.
  • a learner’s feedback obtained corresponding to a narrative is also stored in the learning information database, which further becomes historical learning preference and interest information for future scenarios.
  • the learning preference and learning interest information may include information about one or more characters of movies, plays, media, places/environment, comic books, culture, genre, animals, things, natural entities such as mountains or oceans and the like.
  • the learning preference information may also include behavioral and emotional preference of the learner.
  • the system (100) also includes a narrative creation subsystem (140) operatively coupled to the learning input processing subsystem (130).
  • the narrative creation subsystem (140) is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the plurality of keyword generation parameters.
  • the term ‘narrative’ is defined as a story which is generated to learn new or less abstract concepts in various topics by creating connections in the form of stories between the known subject and the subject to be learned based on a concept of associated memory.
  • the narratives are created by implementation of a self-sufficient interactive digital assistant that communicates across regions, amongst other interactive digital assistants, and with social media and other channels to interpolate the input from the learner and perform data mining, lexical analysis, cognitive computing and analytics.
  • the narratives produced are relevant to the learner by geographical location and culture, which emotionally helps the learner to relate to and remember the information in the field of study.
  • the interactive digital assistant may take the learner’s input and create the narrative on demand and on time based on utilization of Internet of Things (IoT) technology.
  • the interactive digital assistant receives the voice input from the learner, processes the received voice input by connecting with one or more cloud-based services hosted on a remote server.
  • the interactive digital assistant on the cloud server generates the narrative on demand and on time based on determination of the first set of the one or more keywords, generating a second set of one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance and matching the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
  • the interactive digital assistant provides the narrative in a form of voice/video generated output to the learner from the cloud server.
  • the learner may choose to stream the video across different IoT devices or listen to the audio on any playback device or the current device in contention.
  • This IoT technology considers the learner’s preference in decoding the generated information obtained from the interactive digital assistant and presents the same in an acceptable format based on the learner’s preference.
  • the IoT devices in contention are not limited to just voice input; they also receive inputs such as the learner’s moods, understood through different sensors deployed in a wearable device associated with the learner, for generation of the narrative.
  • the wearable device may include, but not limited to, an electronic wristwatch, a band, a ring and the like.
  • the learning technique generates the narrative or the story by matching the content with areas of interest or preference of the learner.
  • the one or more representational forms may include at least one of a text form, an animated video form, an audio form, an animated picture form, a hologram, a virtual reality format or a combination thereof.
  • the animated video form of the narrative may be created from the text form of the narrative using a deep learning based variational autoencoder (VAE) modelling technique.
  • the narrative creation subsystem is also configured to generate a virtual animated model representative of the learner based on a photograph of the learner.
  • the virtual animated model may include a two-dimensional avatar representative of a character in the narrative.
  • the narrative creation subsystem (140) is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
  • the sentiment associated with the narrative may include, but not limited to, a positive or a happy sentiment, a negative or a sad sentiment, a neutral sentiment and the like.
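  • A minimal lexicon-based sketch of the sentiment-identification step is shown below, classifying a narrative as positive, negative, or neutral. The word lists are illustrative assumptions; the patent leaves the sentiment-analysis technique open, and a real system would use a trained sentiment model.

```python
# Lexicon-based sentiment sketch. The POSITIVE/NEGATIVE word lists are
# illustrative assumptions, not part of the patented system.
POSITIVE = {"happy", "bought", "love", "enjoy", "fun"}
NEGATIVE = {"died", "damaged", "collided", "sad", "crash"}

def identify_sentiment(narrative: str) -> str:
    """Classify a narrative as positive, negative, or neutral."""
    words = {w.strip(".,;!?").lower() for w in narrative.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Applied to a narrative like the tempo-and-parrot story later in the description, words such as “collided”, “died”, and “damaged” would yield a negative (sad) sentiment.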
  • the sentiment associated with the narrative is incorporated into the narrative, which helps in complete interpretation of the context of the narrative and enables the learner to connect with the subject matter and remember it for a longer period of time.
  • the narrative creation subsystem also generates a relevant title for the narrative created based on the narrative content.
  • the narrative creation subsystem creates an index of the learning topic technical name versus the corresponding title of the narrative generated which further helps the learner to learn multiple learning topics easily. Also, the narrative creation subsystem utilizes a possibility of using a sound-alike term or a rhyming term to relate with the title of a concept of the narrative for active recall by the learner.
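  • The topic-versus-title index described above can be sketched as a small lookup structure mapping each learning topic’s technical name to the (possibly sound-alike or rhyming) title of its generated narrative. The example topic and title used here are illustrative assumptions.

```python
from typing import Optional

# Index of learning-topic technical names versus narrative titles,
# as described for the narrative creation subsystem.
class NarrativeIndex:
    def __init__(self) -> None:
        self._index: dict[str, str] = {}

    def register(self, topic: str, title: str) -> None:
        self._index[topic.lower()] = title

    def title_for(self, topic: str) -> Optional[str]:
        return self._index.get(topic.lower())
```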
  • FIG. 2 is a block diagram of an embodiment of a system (100) for creative learning of FIG. 1 in accordance with an embodiment of the present disclosure.
  • the system (100) includes a learning input receiving subsystem (110), a learning input identification subsystem (120), a learning input processing subsystem (130) and a narrative creation subsystem (140).
  • the system (100) further includes a narrative quiz generation subsystem (150) operatively coupled to the narrative creation subsystem (140).
  • the narrative quiz generation subsystem (150) is configured to generate one or more questions and answers corresponding to the narrative for enabling active recall of the learning content by the learner.
  • the one or more questions correspond to the narrative and the one or more answers correspond to a second set of the one or more keywords.
  • the second set of the one or more keywords further helps the learner in remembering and recalling a first set of one or more keywords.
  • the narrative quiz generation subsystem (150) also enables generation of visuals based on the contents or scenes of the narrative.
  • the narrative quiz generation subsystem (150) also generates a summary sheet for revision of the second set of the one or more keywords and corresponding first set of the one or more keywords which further indirectly helps in recalling the narrative along with the concept / topic details.
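  • The quiz-and-summary-sheet generation above can be sketched as follows: questions are framed around the narrative, answers are second-set keywords, and the summary sheet pairs each second-set keyword with its first-set keyword. The question template and keyword pairs are illustrative assumptions based on the cranial-bone example given later.

```python
# Quiz generation: answers are second-set keywords, which in turn cue
# the first-set keywords they were derived from.
def generate_quiz(keyword_pairs: dict[str, str]) -> list[tuple[str, str]]:
    """keyword_pairs maps first-set keyword -> second-set keyword."""
    return [
        (f"Which word in the story stands for '{first}'?", second)
        for first, second in keyword_pairs.items()
    ]

# Summary sheet: one line per pair, second-set keyword cueing the
# first-set keyword.
def summary_sheet(keyword_pairs: dict[str, str]) -> str:
    lines = [f"{second} -> {first}" for first, second in keyword_pairs.items()]
    return "\n".join(lines)
```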
  • FIG. 3 is a block diagram of an exemplary system (100) for creative learning in big data in accordance with an embodiment of the present disclosure.
  • the system (100) provides an innovative and creative learning environment to promote a computerised learning method with the application of technology.
  • the system (100) is applicable to any group or any domain of a learner. Consider an example wherein a learner (105) is a student in the medical domain and a professor at a medical institute has assigned a task to the learner. The task assigned is to memorize the types of cranial bones in human beings. As the learner (105) is new to the domain of medical science, remembering some jargon corresponding to the medical domain is a challenging task.
  • the system (100) in such a scenario provides a creative learning environment to the learner (105) to grasp complex concepts easily.
  • the system (100) includes a learning input receiving subsystem (110) to receive one or more learning associated inputs from the learner in one or more formats.
  • the learning associated input may include a learning topic such as types of cranial bones.
  • the learning associated inputs may also include a learner’s photograph, photographs of the learner’s favourite environments, and the learner’s preferences and interests, received via an electronic device.
  • the learner’s favourite environment photographs are received from the learner for use in various scenes of the narratives. These photographs of the learner’s favourite environment activate spatial memory of the learner which enables better retention of the narratives.
  • the one or more formats may include a text format of the learning concept details.
  • a learning input identification subsystem identifies a language of the one or more learning associated inputs received using a natural language processing technique.
  • the natural language processing technique helps a machine to understand the natural language of the concept details and converts the natural language into machine generated language.
  • the at least one entity may include at least one of words, numbers, punctuations, characters, colours, places / environment and the like.
  • the at least one entity is extracted upon performing an optical character recognition (OCR) technique to convert the text from printed format to machine-encoded text.
  • the learning input identification subsystem (120) determines a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique.
  • the types of cranial bones include sphenoid, temporal, parietal, ethmoid, occipital, and frontal.
  • the type of the cranial bones and title of the topic ‘cranial’ are the first set of the one or more keywords.
  • a learning input processing subsystem (130) generates a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters.
  • the plurality of keyword generation parameters may include, but not limited to, a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of a first set of keyword, or a combination thereof.
  • the second set of one or more keywords corresponding to the first set of one or more keywords includes ‘spencer’ for sphenoid, ‘tempo’ for temporal, ‘parrot’ for parietal, ‘ethan’ for ethmoid, ‘occupied’ for occipital, ‘front’ for frontal, ‘crane’ for cranial respectively.
  • the first set of the one or more keywords and the second set of the one or more keywords are depicted below in Table I:

        Table I
        First set of keywords    Second set of keywords
        sphenoid                 spencer
        temporal                 tempo
        parietal                 parrot
        ethmoid                  ethan
        occipital                occupied
        frontal                  front
        cranial                  crane
  • the learning input processing subsystem also matches the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database (115).
  • the learning preference and the learning interest information of the learner may include information about one or more characters of movies, plays, media, environment, comic books, culture, genre, animals, things, natural entities such as mountains, oceans and the like.
  • a narrative creation subsystem (140) creates a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the plurality of keyword generation parameters.
  • the narratives are created by implementation of a self-sufficient interactive digital assistant that communicates across regions, amongst other interactive digital assistants, and with social media and other channels to interpolate the input from the learner and perform data mining, lexical analysis, cognitive computing and analytics.
  • the learning technique generates the narrative or the story by matching the content with areas of interest or preference of the learner.
  • the narrative which is created using the second set of the one or more keywords is depicted as follows: “I travelled to the Spencer in my Tempo with my Parrot Ethan. Ethan Occupied the Front seat without wearing the seat belt. On the way, a Crane collided with our Tempo and Ethan died on the spot. The Tempo was completely damaged. My dad later bought me another Parrot and a Tempo.”
  • the narrative creation subsystem (140) also identifies a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
  • the sentiment associated with the above-mentioned narrative includes a negative or a sad sentiment. So, the sentiment associated with the narrative as an emotion helps in complete interpretation of the context of the narrative and enables the learner to connect with the subject matter and remember it for a longer period of time.
  • the narrative may be created in a text form and further from the text form, it may be converted into an animated video form.
  • the animated video form of the narrative may be created from the text form of the narrative using a deep learning based variational autoencoder (VAE) modelling technique.
  • a photograph of the learner and the favourite environment of the learner in the photographs may also be considered and converted into an avatar or a virtual model for making the video more attractive and livelier.
  • the photograph of the learner and the learner’s favourite environment may also be captured in real time using an image capturing unit of the electronic device associated with the learner.
  • a narrative quiz generation subsystem (150), generates one or more questions and answers corresponding to the narrative for enabling revision of the learning content by the learner.
  • the one or more questions correspond to the narrative and the one or more answers correspond to a second set of the one or more keywords.
  • the second set of the one or more keywords further helps the learner in remembering and recalling a first set of one or more keywords.
  • the narrative quiz generation subsystem (150) also enables generation of visuals based on the contents or scenes of the narrative.
  • the narrative quiz generation subsystem (150) enables quick revision and recalling of the second set of the one or more keywords and indirectly the narrative easily by the learner.
  • the questions for revision may include ‘What is the name of the parrot?’ The correct answer for the above-mentioned question is ‘the name of the parrot is Ethan’. So, here, the answer helps in recalling the second set of the one or more keywords such as ‘parrot’ and ‘Ethan’, which further helps in remembering the first set of the one or more keywords such as ‘parietal’ and ‘ethmoid’ respectively.
  • the narrative quiz generation subsystem (150) also generates a summary sheet for revision of the second set of the one or more keywords and corresponding first set of the one or more keywords which further indirectly helps in recalling the narrative along with the concept / topic details. Also, recalling the narrative helps the learner in grasping the new concept such as the types of the cranial bones easily without much effort and registers the information in the brain of the learner for a longer period of time.
  • FIG. 4 illustrates a block diagram of a computer or a server of FIG. 1 in accordance with an embodiment of the present disclosure.
  • the server (200) includes processor(s) (230), and memory (210) operatively coupled to the bus (220).
  • the processor(s) (230), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
  • the memory (210) includes several subsystems stored in the form of an executable program which instructs the processor (230) to perform the method steps illustrated in FIG. 5.
  • the memory (210) is substantially similar to a system (100) of FIG. 1.
  • the memory (210) has the following subsystems: a learning input receiving subsystem (110), a learning input identification subsystem (120), a learning input processing subsystem (130) and a narrative creation subsystem (140).
  • the learning input receiving subsystem (110) is configured to receive one or more learning associated inputs from a learner in one or more formats.
  • the learning input identification subsystem (120) is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique.
  • the learning input identification subsystem (120) is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique.
  • the learning input processing subsystem (130) is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance.
  • the learning input processing subsystem (130) is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
  • the narrative creation subsystem (140) is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms based on the second set of one or more keywords generated.
  • the narrative creation subsystem (140) is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
  • FIG. 5 is a flow chart representing the steps involved in a method (300) for creative learning in accordance with the embodiment of the present disclosure.
  • the method (300) includes receiving, by a learning input subsystem, one or more learning associated inputs from a learner in one or more formats in step 310.
  • receiving the one or more learning associated inputs from the learner may include receiving a learning topic name, a learning concept detail, an article and the like.
  • receiving the one or more learning associated inputs from the learner in the one or more formats may include receiving the one or more learning associated inputs in at least one of a text format, an image format, a video format or a speech format.
  • the method (300) also includes identifying, by a learning input identification subsystem, a language of the one or more learning associated inputs received using a natural language processing technique in step 320.
  • identifying the language of the one or more learning associated inputs may include identifying the language so that a machine can understand, interpret and manipulate human language, and extracting at least one entity from the one or more learning associated inputs upon identification of the language.
  • the method (300) also includes determining, by the learning input identification subsystem, a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique in step 330.
  • determining the first set of the one or more keywords from the one or more learning associated inputs may include determining the first set of the one or more keywords by automatic identification of key phrases, key terms or key segments describing the subject of the input information using the keyword extraction technique.
  • the keyword extraction technique may include a supervised, semi-supervised or an unsupervised learning technique.
  • the method (300) also includes generating, by a learning input processing subsystem, a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical and cultural relevance in step 340.
  • generating the second set of the one or more keywords upon matching the first set of the one or more keywords may include generating the second set of the one or more keywords based on at least one of a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of a first set of keyword or a combination thereof.
  • the method (300) also includes matching, by the learning input processing subsystem, the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database in step 350.
  • the method (300) also includes creating, by a narrative creation subsystem, a narrative corresponding to the one or more learning associated inputs in one or more representational forms based on the second set of the one or more keywords in step 360.
  • creating the narrative corresponding to the one or more learning associated inputs may include creating the narrative by implementation of a self-sufficient interactive digital assistant configured to communicate across regions and amongst other interactive digital assistants, communicate with social media and other channels to interpolate the input from the learner, and perform data mining, lexical analysis, cognitive computing and analytics techniques.
  • the one or more representational forms may include at least one of a text form, an animated video form, an audio form, an animated picture form, a hologram, virtual reality format or a combination thereof.
  • the method (300) also includes identifying, by the narrative creation subsystem, a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning in step 370.
  • identifying the sentiment associated with the narrative for facilitating the creative learning may include identifying at least a positive or a happy sentiment, a negative or a sad sentiment, a neutral sentiment and the like.
  • the sentiment associated with the narrative is incorporated into the narrative which further helps in complete interpretation of the context of the narrative and enables the learner to connect with the subject matter and remember it for a longer period of time.
  • Various embodiments of the present disclosure provide a creative learning environment in a form of the narrative for learning a new and an unfamiliar concept by the learner easily with less time and effort.
  • the present disclosed system utilizes machine learning technique for creation of the narrative, wherein the machine learning technique helps in automatic identification of learner’s preference and interests and gathers historical information from the social media platform of the learner in order to create the narrative.
  • the creation of the narrative based on the interest and the preferences of the learner motivates the learner in learning and also enables memorizing the learnt concept for a longer duration.
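By way of a non-limiting illustration, the method steps 310 through 370 above may be condensed into a short pipeline sketch. The function name, the stopword list, the two-leading-letter matching heuristic and the narrative template below are assumptions for illustration only and are not the claimed implementation.

```python
def creative_learning_pipeline(topic_text, cue_vocabulary, interests):
    # Steps 310-370 condensed: derive the first set of keywords, generate a
    # second set of familiar cue words by a two-leading-letter match (an
    # assumed heuristic), filter by learner interest, and assemble a narrative.
    stopwords = {"the", "of", "and", "a", "an", "in"}
    first_set = [w for w in topic_text.lower().split() if w not in stopwords]
    second_set = {kw: cue for kw in first_set
                  for cue in cue_vocabulary if cue[:2] == kw[:2]}
    preferred = {kw: cue for kw, cue in second_set.items() if cue in interests}
    return "Once upon a time, " + " and ".join(preferred.values()) + " met."
```

In this sketch the learner-interest filter stands in for the match against the learning information database of step 350.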

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Technology (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Educational Administration (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Data Mining & Analysis (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A system for creative learning is disclosed. The system includes a learning input receiving subsystem to receive one or more learning associated inputs from a learner. A learning input identification subsystem to identify a language of the one or more learning associated inputs, determine a first set of one or more keywords from the one or more learning associated inputs. A learning input processing subsystem to generate a second set of one or more keywords using the first set of the one or more keywords, match the second set of the one or more keywords with a learning preference and learning interest information of the learner. A narrative creation subsystem to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms, to identify a sentiment and incorporate the sentiment associated with the narrative as an emotion thereby facilitating the creative learning.

Description

SYSTEM AND METHOD FOR CREATIVE LEARNING
BACKGROUND
[0001] Embodiments of the present disclosure relate to a computerised learning approach and more particularly to a system and a method for creative learning.
[0002] In today’s competitive world, information overload is one of the constraints for learners to understand and remember complex information in various domains of learning. The complex information encountered in the learning environment hampers the rate of learning. In addition to this, not all information learnt by the learner penetrates into long term memory for retrieval when required as it is not present in the format which helps the brain to register in the long-term memory. As a result, to solve such issues creative learning environment is provided to the learner which acts as one of an effective learning strategy.
[0003] One of such conventional learning strategies includes use of narratives for learning purpose. Such narratives provide understanding of concepts and provide a scaffold for transferring of knowledge within specific contexts and environments among various types of learners. However, such conventional learning strategies use generic narratives for all kind of learners and do not provide a personalized learning for different learners having different area of interest. As a result, such conventional learning strategies involves lesser user interaction which further deviates concentration and interests of the learner from learning and creates difficulty in remembering the subject matter.
[0004] Hence, there is a need for an improved system and a method for creative learning in order to address the aforementioned issues.
BRIEF DESCRIPTION
[0005] In accordance with an embodiment of the present disclosure, a system for creative learning is disclosed. The system includes a learning input receiving subsystem configured to receive one or more learning associated inputs from a learner in one or more formats. The system also includes a learning input identification subsystem operatively coupled to the learning input receiving subsystem. The learning input identification subsystem is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique. The learning input identification subsystem is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique. The system also includes a learning input processing subsystem operatively coupled to the learning input identification subsystem. The learning input processing subsystem is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance. The learning input processing subsystem is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database. The system also includes a narrative creation subsystem operatively coupled to the learning input processing subsystem. 
The narrative creation subsystem is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the second set of one or more keywords. The narrative creation subsystem is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
[0006] In accordance with another embodiment of the present disclosure, a method for creative learning is disclosed. The method includes receiving, by a learning input subsystem, one or more learning associated inputs from a learner in one or more formats. The method also includes identifying, by a learning input identification subsystem, a language of the one or more learning associated inputs received using a natural language processing technique. The method also includes determining, by the learning input identification subsystem, a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique. The method also includes generating, by a learning input processing subsystem, a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance. The method also includes matching, by the learning input processing subsystem, the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database. The method also includes creating, by a narrative creation subsystem, a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the second set of one or more keywords. The method also includes identifying, by the narrative creation subsystem, a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
[0007] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0008] FIG. 1 is a block diagram of a system for creative learning in accordance with an embodiment of the present disclosure;
[0009] FIG. 2 is a block diagram of an embodiment of a system for creative learning of FIG. 1 in accordance with an embodiment of the present disclosure;
[0010] FIG. 3 is a block diagram of an exemplary system for creative learning in big data in accordance with an embodiment of the present disclosure;
[0011] FIG. 4 illustrates a block diagram of a computer or a server of FIG. 1 in accordance with an embodiment of the present disclosure; and
[0012] FIG. 5 is a flow chart representing the steps involved in a method for creative learning in accordance with the embodiment of the present disclosure.
[0013] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0014] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0015] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
[0016] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0017] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0018] Embodiments of the present disclosure relate to a system and a method for creative learning. The system includes a learning input receiving subsystem configured to receive one or more learning associated inputs from a learner in one or more formats. The system also includes a learning input identification subsystem operatively coupled to the learning input receiving subsystem. The learning input identification subsystem is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique. The learning input identification subsystem is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique. The system also includes a learning input processing subsystem operatively coupled to the learning input identification subsystem. The learning input processing subsystem is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance. The learning input processing subsystem is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database. The system also includes a narrative creation subsystem operatively coupled to the learning input processing subsystem. The narrative creation subsystem is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the second set of one or more keywords.
The narrative creation subsystem is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
[0019] FIG. 1 is a block diagram of a system (100) for creative learning in accordance with an embodiment of the present disclosure. The system (100) includes a learning input receiving subsystem (110) configured to receive one or more learning associated inputs from a learner in one or more formats. In one embodiment, the learning associated inputs may include a learning topic name, a learning concept detail, an article and the like. In such embodiment, the one or more formats may include at least one of a text format, an image format, a video format or a speech format. In some embodiment, the learning associated inputs may also include receiving a learner’s photograph, a learner’s favourite environment photographs and a learner’s preference and interest via an electronic device. The learner’s favourite environment photographs are received from the learner for use in various scenes of the narratives. These photographs of the learner’s favourite environment activate spatial memory of the learner which enables better retention of the narratives. In such embodiment, the electronic device may include, but not limited to, a laptop, a desktop, a mobile phone, a tablet and the like.
[0020] The system (100) also includes a learning input identification subsystem (120) operatively coupled to the learning input receiving subsystem (110). The learning input identification subsystem (120) is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique. The language processing technique helps a machine to understand, interpret and manipulate human language. The learning input identification subsystem (120) also extracts at least one entity from the one or more learning associated inputs upon identification of the language using an entity extraction technique. In one embodiment, the at least one entity may include at least one of words, numbers, punctuations, characters, colours, places / environment and the like. In a specific embodiment, the entity extraction technique may include, but not limited to, an image processing technique, an optical character recognition (OCR) technique, a speech recognition technique and the like. The image processing technique, the OCR technique and the speech recognition technique are utilized for extraction of the at least one entity in case of an image input, a text input and a speech input respectively.
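By way of a non-limiting illustration, language identification may be sketched as a simple stopword-overlap heuristic. Production systems would use a trained language-identification model; the three abbreviated stopword lists and the function name below are assumptions for illustration only.

```python
# A minimal stopword-overlap language identifier: the language whose common
# function words appear most often in the input is selected.
STOPWORD_SETS = {
    "english": {"the", "and", "of", "to", "is", "in"},
    "french": {"le", "la", "et", "de", "est", "dans"},
    "german": {"der", "die", "und", "ist", "das", "ein"},
}

def identify_language(text):
    words = set(text.lower().split())
    # Choose the language whose stopword set overlaps the input the most.
    return max(STOPWORD_SETS, key=lambda lang: len(words & STOPWORD_SETS[lang]))
```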
[0021] The learning input identification subsystem (120) is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique. As used herein, the term ‘first set of one or more keywords’ is defined as the one or more keywords derived from the input information directly. In one embodiment, the keyword extraction technique may include automatic identification of key phrases, key terms or key segments which describes the subject of the input information. In such embodiment, the keyword extraction technique may include a supervised, semi- supervised or an unsupervised learning technique.
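By way of a non-limiting illustration, the unsupervised variant of the keyword extraction technique may be sketched as frequency ranking of non-stopword terms. The stopword list and function name are assumptions for illustration only; supervised or semi-supervised extractors would replace this ranking with a trained model.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "which", "is", "are", "and", "in", "to"}

def extract_keywords(text, top_n=3):
    """Unsupervised keyword extraction: rank non-stopword terms by frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]
```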
[0022] The system (100) also includes a learning input processing subsystem (130) operatively coupled to the learning input identification subsystem (120). The learning input processing subsystem (130) is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance. As used herein, the term ‘second set of one or more keywords’ is defined as a new list of keywords which are created based on the first set of the one or more keywords for creation of a narrative. In one embodiment, the plurality of keyword generation parameters may include, but not limited to, a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of first set of keywords, or a combination thereof. The learning input processing subsystem is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database.
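By way of a non-limiting illustration, one way to realize the phonetic-word parameter is a Soundex-style code, under which two words that sound alike receive the same code. The simplified implementation below (the classic h/w merging rule is omitted) is an assumption for illustration only, not the claimed technique.

```python
def soundex(word):
    # Simplified Soundex: first letter plus up to three digits encoding the
    # remaining consonant groups; adjacent duplicates are collapsed.
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    result, last = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            result += code
        last = code
    return (result + "000")[:4]
```

Two candidate cue words with the same code as a keyword would be treated as phonetic matches for keyword generation.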
[0023] As used herein, the term ‘learning information database’ is defined as a storage repository which stores the learning associated information and the learning preference and learning interest information collected from the learner. In one embodiment, the learning information database may store names of one or more popular places/environment, one or more popular things, one or more popular animals, one or more natural entities, one or more popular characters of movies, one or more popular plays, one or more popular comic books and the like. In one embodiment, the learning preference and interest information may include one or more preferences and interest associated with the learner fetched from one or more social media platform. In another embodiment, the learning preference and interest information may include one or more preferences collected from the learner via a questionnaire. A learner’s feedback obtained corresponding to a narrative is also stored in the learning information database which further becomes historical learning preference and interest information for scenarios. In one embodiment, the learning preference and learning interest information may include information about one or more characters of movies, plays, media, places/environment, comic books, culture, genre, animals, things, natural entities such as mountains or oceans and the like. In another embodiment, the learning preference information may also include behavioral and emotional preference of the learner.
[0024] The system (100) also includes a narrative creation subsystem (140) operatively coupled to the learning input processing subsystem (130). The narrative creation subsystem (140) is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the plurality of keyword generation parameters. As used herein, the term ‘narrative’ is defined as a story which is generated to learn new or less abstract concepts in various topics by creating connections in the form of stories between the known subject and the subject to be learned based on a concept of associated memory. The narratives are created by implementation of a self-sufficient interactive digital assistant configured to communicate across regions and amongst other interactive digital assistants, communicate with social media and other channels to interpolate the input from the learner, and perform data mining, lexical analysis, cognitive computing and analytics. The narratives produced are relevant to the learner's geographical location and culture and emotionally help the learner to relate to and remember the information in the field of study. In a specific embodiment, the interactive digital assistant may take the learner’s input and create the narrative on demand and time based on utilization of the internet of things (IoT) technology. In such embodiment, the interactive digital assistant receives the voice input from the learner, processes the received voice input by connecting with one or more cloud-based services hosted on a remote server.
The interactive digital assistant on the cloud server generates the narrative on demand and on time based on determination of the first set of the one or more keywords, generating a second set of one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance and matching the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database. Once the narrative is created, the interactive digital assistant provides the narrative in a form of voice/video generated output to the learner from the cloud server. The learner may choose to stream the video across different IoT devices or listen to the audio on any playback devices or the current device in contention. This IoT technology considers the learner’s preference in decoding the generated information obtained from the interactive digital assistant and presents the same in an acceptable format based on the learner’s preference. The IoT devices in contention are not limited to just voice input but also receive inputs such as understanding the learner’s moods with different sensors deployed in a wearable device associated with the learner for generation of the narrative. In one embodiment, the wearable device may include, but not limited to, an electronic wristwatch, a band, a ring and the like.
[0025] The learning technique generates the narrative or the story by matching the content with areas of interest or preference of the learner. In one embodiment, the one or more representational forms may include at least one of a text form, an animated video form, an audio form, an animated picture form, a hologram, a virtual reality format or a combination thereof. In such embodiment, the animated video form of the narrative may be created from the text form of the narrative using a deep learning based variational autoencoder (VAE) modelling technique. In a particular embodiment, the narrative creation subsystem is also configured to generate a virtual animated model representative of the learner based on a photograph of the learner. In such embodiment, the virtual animated model may include a two-dimensional avatar representative of a character in the narrative.
[0026] The narrative creation subsystem (140) is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning. In one embodiment, the sentiment associated with the narrative may include, but not limited to, a positive or a happy sentiment, a negative or a sad sentiment, a neutral sentiment and the like. In such embodiment, the sentiment associated with the narrative is implemented into the narrative which helps in complete interpretation of the context of the narrative and enables the learner to connect with the subject matter and remember it for a longer period of time. In a specific embodiment, the narrative creation subsystem also generates a relevant title for the narrative created based on the narrative content. Also, the narrative creation subsystem creates an index of the learning topic technical name versus the corresponding title of the narrative generated which further helps the learner to learn multiple learning topics easily. Also, the narrative creation subsystem utilizes a possibility of using a sound-alike term or a rhyming term to relate with the title of a concept of the narrative for active recall by the learner.
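By way of a non-limiting illustration, the sentiment analysis technique may be sketched as a simple lexicon-based classifier that maps a narrative to the positive, negative or neutral sentiment classes named above. The word lists and function name are assumptions for illustration only; a trained sentiment model would replace the lexicon in practice.

```python
POSITIVE = {"happy", "joy", "wonderful", "brave", "love"}
NEGATIVE = {"sad", "fear", "lost", "angry", "dark"}

def narrative_sentiment(narrative):
    """Lexicon-based sentiment: compare positive vs. negative word counts."""
    words = narrative.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```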
[0027] FIG. 2 is a block diagram of an embodiment of a system (100) for creative learning of FIG. 1 in accordance with an embodiment of the present disclosure. As described in abovementioned FIG. 1, the system (100) includes a learning input receiving subsystem (110), a learning input identification subsystem (120), a learning input processing subsystem (130) and a narrative creation subsystem (140). In addition to that, the system (100) further includes a narrative quiz generation subsystem (150) operatively coupled to the narrative creation subsystem (140). The narrative quiz generation subsystem (150) is configured to generate one or more questions and answers corresponding to the narrative for enabling active recall of the learning content by the learner. In one embodiment, the one or more questions correspond to the narrative and the one or more answers correspond to a second set of the one or more keywords. The second set of the one or more keywords further helps the learner in remembering and recalling a first set of one or more keywords. The narrative quiz generation subsystem (150) also enables generation of visuals based on the contents or scenes of the narrative. The narrative quiz generation subsystem (150) also generates a summary sheet for revision of the second set of the one or more keywords and corresponding first set of the one or more keywords which further indirectly helps in recalling the narrative along with the concept / topic details.
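By way of a non-limiting illustration, the quiz generation of the narrative quiz generation subsystem (150) may be sketched as cloze deletion: each second-set keyword is blanked out of the narrative, and the blanked keyword is the expected answer. The function name and blank marker are assumptions for illustration only.

```python
def make_quiz(narrative, cue_words):
    """Cloze-style quiz: blank each cue word (second-set keyword) out of the
    narrative; the blanked word is the expected answer for active recall."""
    quiz = []
    for cue in cue_words:
        if cue in narrative:
            quiz.append((narrative.replace(cue, "_____"), cue))
    return quiz
```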
[0028] FIG. 3 is a block diagram of an exemplary system (100) for creative learning in big data in accordance with an embodiment of the present disclosure. The system (100) provides an innovative and creative learning environment to promote a computerised learning method with application of technology. The system (100) is applicable to any group or any domain of a learner. Consider an example wherein a learner (105) is a student of the medical domain and a professor at a medical institute has assigned a task to the learner. The task assigned is to memorize the types of cranial bones in human beings. As the learner (105) is new to the domain of medical science, remembering the jargon of the medical domain is a challenging task. The system (100) in such a scenario provides a creative learning environment for the learner (105) to grasp complex concepts easily. The system (100) includes a learning input receiving subsystem (110) to receive one or more learning associated inputs from the learner in one or more formats. For example, the learning associated input may include a learning topic such as the types of cranial bones. The learning associated inputs may also include a learner's photograph, photographs of the learner's favourite environment and the learner's preferences and interests received via an electronic device. Here, the photographs of the learner's favourite environment are received from the learner for use in various scenes of the narratives. These photographs activate the spatial memory of the learner, which enables better retention of the narratives. In the example used herein, the one or more formats include a text format of the learning concept details.
[0029] Once the learning associated input is received, a learning input identification subsystem (120) identifies a language of the one or more learning associated inputs received using a natural language processing technique. The natural language processing technique helps a machine to understand the natural language of the concept details and converts the natural language into a machine generated language. Further, at least one entity is extracted from the one or more learning associated inputs; for example, the at least one entity may include at least one of words, numbers, punctuation, characters, colours, places / environment and the like. Here, the at least one entity is extracted upon performing an optical character recognition (OCR) technique to convert the text from printed format to machine-encoded text. Also, the learning input identification subsystem (120) determines a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique. For example, the types of cranial bones include sphenoid, temporal, parietal, ethmoid, occipital, and frontal. Here, the types of the cranial bones and the title of the topic 'cranial' are the first set of the one or more keywords. Once the first set of the one or more keywords is determined, a learning input processing subsystem (130) generates a second set of one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance. In the example used herein, the plurality of keyword generation parameters may include, but is not limited to, a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of a first set keyword, or a combination thereof. For example, the second set of one or more keywords corresponding to the first set of one or more keywords includes 'spencer' for sphenoid, 'tempo' for temporal, 'parrot' for parietal, 'ethan' for ethmoid, 'occupied' for occipital, 'front' for frontal, and 'crane' for cranial respectively.
The first set of the one or more keywords and the second set of the one or more keywords are depicted below in Table I:

First set of keywords    Second set of keywords
Cranial                  Crane
Sphenoid                 Spencer
Temporal                 Tempo
Parietal                 Parrot
Ethmoid                  Ethan
Occipital                Occupied
Frontal                  Front

TABLE I
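One of the keyword generation parameters named above, the "predetermined number of matching letters", can be sketched as picking, from a vocabulary of familiar words, the candidate sharing the longest common prefix with each technical term. The candidate vocabulary and the minimum-match threshold below are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the "matching letters" keyword-generation parameter.
# CANDIDATES stands in for a learner-familiar vocabulary (assumed).

CANDIDATES = ["spencer", "tempo", "parrot", "ethan",
              "occupied", "front", "crane", "ocean", "temple"]

def prefix_len(a, b):
    """Length of the common prefix of two words, case-insensitive."""
    n = 0
    for x, y in zip(a.lower(), b.lower()):
        if x != y:
            break
        n += 1
    return n

def second_keyword(term, min_match=2):
    """Best familiar candidate for a technical term, or None."""
    best = max(CANDIDATES, key=lambda c: prefix_len(term, c))
    return best if prefix_len(term, best) >= min_match else None

for term in ["sphenoid", "temporal", "parietal", "ethmoid",
             "occipital", "frontal", "cranial"]:
    print(term, "->", second_keyword(term))
```

Running the loop reproduces the Table I pairings (sphenoid → spencer, temporal → tempo, and so on); a production system would combine this with phonetic and rhyming matching rather than prefixes alone.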
[0030] The learning input processing subsystem also matches the second set of the one or more keywords generated with the learning preference and learning interest information of the learner fetched from a learning information database (115). For example, the learning preference and the learning interest information of the learner may include information about one or more characters of movies, plays, media, environment, comic books, culture, genre, animals, things, natural entities such as mountains, oceans and the like.

[0031] Once the second set of the one or more keywords is generated and matched with the learning preference of the learner, a narrative creation subsystem (140) creates a narrative corresponding to the one or more learning associated inputs in one or more representational forms by utilizing the plurality of keyword generation parameters. The narratives are created by implementation of a self-sufficient interactive digital assistant configured to communicate across regions, amongst other interactive digital assistants, and with social media and other channels, to interpolate the input from the learner and perform data mining, lexical analysis, cognitive computing and analytics. The learning technique generates the narrative or the story by matching the content with areas of interest or preference of the learner.
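The preference-matching step can be sketched as scoring each candidate keyword against the learner's interest profile fetched from the database. The profile structure and scoring rule below are hypothetical assumptions for illustration.

```python
# Hypothetical learner profile, standing in for records fetched from
# the learning information database (115). Contents are assumptions.
learner_profile = {"animals": {"parrot", "crane", "dog"},
                   "vehicles": {"tempo", "car"},
                   "characters": {"spencer", "ethan"}}

def preference_score(keyword):
    """Count how many interest categories contain the keyword."""
    return sum(keyword.lower() in words for words in learner_profile.values())

# Candidates the learner already knows rank ahead of unfamiliar ones.
candidates = ["parrot", "pirate", "tempo", "tornado"]
ranked = sorted(candidates, key=preference_score, reverse=True)
print(ranked)
```

Here 'parrot' and 'tempo' outrank 'pirate' and 'tornado' because they appear in the learner's interests, mirroring how the subsystem favours second-set keywords the learner can readily visualise.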
[0032] For example, the narrative which is created using the second set of the one or more keywords is depicted as follows: "I travelled to the Spencer in my Tempo with my Parrot Ethan. Ethan Occupied the Front seat without wearing the seat belt. On the way, a Crane collided with our Tempo and Ethan died on the spot. The tempo was completely damaged. My dad later bought me another Parrot and a Tempo."
[0033] Further, the narrative creation subsystem (140) also identifies a sentiment associated with the narrative and incorporates the sentiment into the narrative as an emotion based on the narrative content and context using a sentiment analysis technique, thereby facilitating the creative learning. In the example used herein, the sentiment associated with the above-mentioned narrative is a negative or sad sentiment. So, the sentiment incorporated into the narrative as an emotion helps in complete interpretation of the context of the narrative and enables the learner to connect with the subject matter and remember it for a longer period of time.
[0034] Here, the narrative may be created in a text form and further converted from the text form into an animated video form. The animated video form of the narrative may be created from the text form of the narrative using a deep learning based variational autoencoder (VAE) modelling technique. Also, for the animated video creation, a photograph of the learner and the photographs of the learner's favourite environment may be considered and converted into an avatar or a virtual model for making the video more attractive and livelier. For example, the photograph of the learner and the learner's favourite environment may also be captured in real-time using an image capturing unit of the electronic device associated with the learner.

[0035] In addition, a narrative quiz generation subsystem (150) generates one or more questions and answers corresponding to the narrative for enabling revision of the learning content by the learner. For example, the one or more questions correspond to the narrative and the one or more answers correspond to the second set of the one or more keywords. The second set of the one or more keywords further helps the learner in remembering and recalling the first set of one or more keywords. The narrative quiz generation subsystem (150) also enables generation of visuals based on the contents or scenes of the narrative. Thus, the narrative quiz generation subsystem (150) enables quick revision and recall of the second set of the one or more keywords, and indirectly of the narrative, by the learner. For example, the questions for revision may include 'What is the name of the parrot?' The correct answer for the above-mentioned question is 'The name of the parrot is Ethan'. So, here, the answer helps in recalling the second set of the one or more keywords, such as 'parrot' and 'Ethan', which further helps in remembering the first set of the one or more keywords, such as 'Parietal' and 'Ethmoid' respectively.
Similarly, if a question generated by the narrative quiz generation subsystem (150) is 'Where is Ethan sitting in the Tempo?', the corresponding answer for the question is 'Ethan occupied the front seat in the tempo.' So, again, the second set of keywords, such as 'occupied', 'front', 'tempo' and 'Ethan', helps in active recall of the first set of the one or more keywords, such as 'Occipital', 'Frontal', 'Temporal' and 'Ethmoid' respectively. The narrative quiz generation subsystem (150) also generates a summary sheet for revision of the second set of the one or more keywords and the corresponding first set of the one or more keywords, which further indirectly helps in recalling the narrative along with the concept / topic details. Also, recalling the narrative helps the learner in grasping a new concept, such as the types of the cranial bones, easily without much effort and registers the information in the brain of the learner for a longer period of time.
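The quiz-and-summary-sheet behaviour described above can be sketched as a mapping from second-set cue words back to the technical first-set keywords they encode. The question templates below are illustrative assumptions.

```python
# Second-set cue -> first-set technical keyword, taken from Table I.
keyword_map = {"Spencer": "Sphenoid", "Tempo": "Temporal",
               "Parrot": "Parietal", "Ethan": "Ethmoid",
               "Occupied": "Occipital", "Front": "Frontal",
               "Crane": "Cranial"}

def summary_sheet(mapping):
    """Render the second-set -> first-set revision table as text."""
    return "\n".join(f"{cue:10} -> {term}" for cue, term in mapping.items())

# Illustrative quiz pairs; each answer cue points back at the
# technical keyword it encodes.
quiz = [("What is the name of the parrot?", "Ethan"),
        ("Where is Ethan sitting in the tempo?", "Front")]

for question, answer in quiz:
    print(f"{question} -> {answer} (recalls {keyword_map[answer]})")

print(summary_sheet(keyword_map))
```

Answering a question surfaces the cue ('Ethan'), and the map carries the learner from the cue to the technical term ('Ethmoid'), which is exactly the indirect recall path the summary sheet supports.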
[0036] FIG. 4 illustrates a block diagram of a computer or a server of FIG. 1 in accordance with an embodiment of the present disclosure. The server (200) includes processor(s) (230), and memory (210) operatively coupled to the bus (220). The processor(s) (230), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0037] The memory (210) includes several subsystems stored in the form of an executable program which instructs the processor (230) to perform the method steps illustrated in FIG. 5. The memory (210) is substantially similar to the system (100) of FIG. 1. The memory (210) has the following subsystems: a learning input receiving subsystem (110), a learning input identification subsystem (120), a learning input processing subsystem (130) and a narrative creation subsystem (140).
[0038] The learning input receiving subsystem (110) is configured to receive one or more learning associated inputs from a learner in one or more formats. The learning input identification subsystem (120) is configured to identify a language of the one or more learning associated inputs received using a natural language processing technique. The learning input identification subsystem (120) is also configured to determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique. The learning input processing subsystem (130) is configured to generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance. The learning input processing subsystem (130) is also configured to match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database. The narrative creation subsystem (140) is configured to create a narrative corresponding to the one or more learning associated inputs in one or more representational forms based on the second set of one or more keywords generated. The narrative creation subsystem (140) is also configured to identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
[0039] FIG. 5 is a flow chart representing the steps involved in a method (300) for creative learning in accordance with the embodiment of the present disclosure. The method (300) includes receiving, by a learning input subsystem, one or more learning associated inputs from a learner in one or more formats in step 310. In one embodiment, receiving the one or more learning associated inputs from the learner may include receiving a learning topic name, a learning concept detail, an article and the like. In such embodiment, receiving the one or more learning associated inputs from the learner in the one or more formats may include receiving the one or more learning associated inputs in at least one of a text format, an image format, a video format or a speech format.
[0040] The method (300) also includes identifying, by a learning input identification subsystem, a language of the one or more learning associated inputs received using a natural language processing technique in step 320. In one embodiment, identifying the language of the one or more learning associated inputs may include identifying the language for the one or more learning associated inputs for a machine to understand, interpret and manipulate human language and extracting at least one entity from the one or more learning associated inputs upon identification of the language.
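Step 320 can be sketched, purely by way of illustration, as language identification by stop-word overlap. The stop-word lists below are small assumed samples; practical systems use trained classifiers such as character n-gram models.

```python
# Naive language identification by stop-word overlap (sketch only).
# Word lists are illustrative assumptions, not exhaustive.

STOPWORDS = {
    "english": {"the", "is", "and", "of", "to", "in", "a"},
    "french": {"le", "la", "et", "de", "un", "une", "est"},
    "german": {"der", "die", "und", "ist", "ein", "zu", "das"},
}

def identify_language(text):
    """Return the language whose stop words best cover the text."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return max(STOPWORDS, key=lambda lang: len(tokens & STOPWORDS[lang]))

print(identify_language("The cranial bones of the skull are fused."))
```

Once the language is identified, downstream steps (entity extraction, keyword extraction) can load the matching linguistic resources.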
[0041] The method (300) also includes determining, by the learning input identification subsystem, a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique in step 330. In one embodiment, determining the first set of the one or more keywords from the one or more learning associated inputs may include determining the first set of the one or more keywords by automatic identification of key phrases, key terms or key segments describing the subject of the input information using the keyword extraction technique. In such embodiment, the keyword extraction technique may include a supervised, semi-supervised or an unsupervised learning technique.
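An unsupervised variant of the keyword extraction in step 330 can be sketched as frequency ranking after stop-word removal. Real systems would use TF-IDF, TextRank or a supervised model; the stop-word list here is an assumption.

```python
# Minimal unsupervised keyword extraction: rank terms by frequency
# after removing stop words (illustrative sketch only).
from collections import Counter

STOP = {"the", "of", "and", "a", "an", "in", "is", "are", "to", "include"}

def extract_keywords(text, top_n=5):
    tokens = [t.strip(".,:;").lower() for t in text.split()]
    counts = Counter(t for t in tokens if t and t not in STOP)
    return [word for word, _ in counts.most_common(top_n)]

doc = ("The types of cranial bones include sphenoid, temporal, parietal, "
       "ethmoid, occipital, and frontal. The cranial bones protect the brain.")
print(extract_keywords(doc))
```

On the example input, 'cranial' and 'bones' surface first, matching the first-set keywords the disclosure derives from the topic text.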
[0042] The method (300) also includes generating, by a learning input processing subsystem, a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical and cultural relevance in step 340. In one embodiment, generating the second set of the one or more keywords may include generating the second set of the one or more keywords based on at least one of a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of a first set keyword or a combination thereof. The method (300) also includes matching, by the learning input processing subsystem, the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database in step 350.
[0043] The method (300) also includes creating, by a narrative creation subsystem, a narrative corresponding to the one or more learning associated inputs in one or more representational forms based on the second set of the one or more keywords in step 360. In one embodiment, creating the narrative corresponding to the one or more learning associated inputs may include creating the narrative by implementation of a self-sufficient interactive digital assistant configured to communicate across regions, amongst other interactive digital assistants, and with social media and other channels, to interpolate the input from the learner and perform data mining, lexical analysis, cognitive computing and analytics techniques. In such embodiment, the one or more representational forms may include at least one of a text form, an animated video form, an audio form, an animated picture form, a hologram, a virtual reality format or a combination thereof.
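While the disclosure contemplates a richer interactive digital-assistant pipeline for step 360, the core idea of slotting learner-familiar keywords into a story can be sketched minimally as template filling. The template and slot names below are hypothetical.

```python
# Hypothetical slot-filling sketch of narrative creation (step 360).
# A real system would generate templates dynamically from the
# learner's interests rather than use a fixed one.

TEMPLATE = ("I travelled to the {place} in my {vehicle} with my "
            "{pet} {name}. {name} {action} the {seat} seat.")

def create_narrative(slots):
    """Fill the story template with second-set keywords."""
    return TEMPLATE.format(**slots)

story = create_narrative({"place": "Spencer", "vehicle": "Tempo",
                          "pet": "Parrot", "name": "Ethan",
                          "action": "Occupied", "seat": "Front"})
print(story)
```

Each filled slot is a second-set keyword, so the resulting story embeds every mnemonic cue the learner needs for recall of the technical terms.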
[0044] The method (300) also includes identifying, by the narrative creation subsystem, a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique, thereby facilitating the creative learning, in step 370. In one embodiment, identifying the sentiment associated with the narrative for facilitating the creative learning may include identifying at least a positive or happy sentiment, a negative or sad sentiment, a neutral sentiment and the like. In such embodiment, the sentiment associated with the narrative is incorporated into the narrative, which further helps in complete interpretation of the context of the narrative and enables the learner to connect with the subject matter and remember it for a longer period of time.
[0045] Various embodiments of the present disclosure provide a creative learning environment in a form of the narrative for learning a new and an unfamiliar concept by the learner easily with less time and effort.
[0046] Moreover, the present disclosed system utilizes machine learning technique for creation of the narrative, wherein the machine learning technique helps in automatic identification of learner’s preference and interests and gathers historical information from the social media platform of the learner in order to create the narrative. Thus, the creation of the narrative based on the interest and the preferences of the learner motivates the learner in learning and also enables memorizing the learnt concept for a longer duration.
[0047] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
[0048] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0049] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.

Claims

We Claim:
1. A system (100) for creative learning comprising:
a learning input receiving subsystem (110) configured to receive one or more learning associated inputs from a learner in one or more formats;
a learning input identification subsystem (120) operatively coupled to the learning input receiving subsystem (110), wherein the learning input identification subsystem (120) is configured to:
identify a language of the one or more learning associated inputs received using a natural language processing technique; and
determine a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique;
a learning input processing subsystem (130) operatively coupled to the learning input identification subsystem (120), wherein the learning input processing subsystem (130) is configured to:
generate a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical relevance and cultural relevance; and
match the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database; and
a narrative creation subsystem (140) operatively coupled to the learning input processing subsystem (130), wherein the narrative creation subsystem (140) is configured to:
create a narrative corresponding to the one or more learning associated inputs in one or more representational forms based on the second set of one or more keywords generated; and
identify a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning.
2. The system (100) as claimed in claim 1, wherein the one or more learning associated inputs comprises at least one of a learning topic name, a learning concept detail or an article.
3. The system (100) as claimed in claim 1, wherein the one or more formats comprises at least one of a text format, an image format, a video format or a speech format.
4. The system (100) as claimed in claim 1, wherein the learning input identification subsystem (120) is configured to extract at least one entity from the one or more learning associated inputs upon identification of the language using an entity extraction technique.
5. The system (100) as claimed in claim 1, wherein the plurality of keyword generation parameters comprises at least one of a phonetic word, a rhyming word, an associated word, a predetermined number of matching letters of a first set of keyword or a combination thereof.
6. The system (100) as claimed in claim 1, wherein the one or more representational forms of the narrative comprises at least one of a text form, an animated video form, an audio form, an animated picture form, a hologram, a virtual reality media format or a combination thereof.
7. The system (100) as claimed in claim 1, wherein the narrative creation subsystem (140) is configured to generate a virtual animated model representative of the learner based on a photograph of the learner and a learner’s favourite environment photographs.
8. The system (100) as claimed in claim 1, wherein the narrative creation subsystem (140) is configured to create a video form of the narrative from a text form of the narrative using a deep learning based variational autoencoder modelling technique.
9. The system (100) as claimed in claim 1, comprising a narrative quiz generation subsystem (150) operatively coupled to the narrative creation subsystem (140), wherein the narrative quiz generation subsystem (150) is configured to generate one or more questions and answers corresponding to the narrative created for enabling active recall of the learning content by the learner.
10. The system (100) as claimed in claim 1, comprising an interactive digital assistant configured to: receive a learner’s mood information by accessing a wearable device worn by a learner based on utilization of an internet of things technology; and generate a narrative on demand and on time based on the learner’s mood information received from the learner.
11. A method (300) comprising:
receiving, by a learning input subsystem, one or more learning associated inputs from a learner in one or more formats (310);
identifying, by a learning input identification subsystem, a language of the one or more learning associated inputs received using a natural language processing technique (320);
determining, by the learning input identification subsystem, a first set of one or more keywords from the one or more learning associated inputs by using a keyword extraction technique (330);
generating, by a learning input processing subsystem, a second set of one or more keywords upon determination of the first set of the one or more keywords based on a plurality of keyword generation parameters with geographical and cultural relevance (340);
matching, by the learning input processing subsystem, the second set of the one or more keywords generated with a learning preference and learning interest information of the learner fetched from a learning information database (350);
creating, by a narrative creation subsystem, a narrative corresponding to the one or more learning associated inputs in one or more representational forms based on the second set of one or more keywords generated (360); and
identifying, by the narrative creation subsystem, a sentiment associated with the narrative to incorporate the sentiment into the narrative as an emotion based on a narrative content and context using a sentiment analysis technique and thereby facilitating the creative learning (370).
PCT/IB2020/060896 2020-10-01 2020-11-19 System and method for creative learning WO2022069929A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041042797 2020-10-01
IN202041042797 2020-10-01

Publications (1)

Publication Number Publication Date
WO2022069929A1 2022-04-07

Family

ID=80949771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/060896 WO2022069929A1 (en) 2020-10-01 2020-11-19 System and method for creative learning

Country Status (1)

Country Link
WO (1) WO2022069929A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138923A1 (en) * 2017-11-03 2019-05-09 Andrea Gibb JACOBS System and method for obtaining and transforming interactive narrative information
US10325517B2 (en) * 2013-02-15 2019-06-18 Voxy, Inc. Systems and methods for extracting keywords in language learning


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022205987A1 (en) 2022-06-14 2023-12-14 Siemens Aktiengesellschaft Method for providing learning content to a person using an electronic computing device, computer program product, computer-readable storage medium and electronic computing device
WO2023241976A1 (en) * 2022-06-14 2023-12-21 Siemens Aktiengesellschaft Method for providing learning content for a person by means of an electronic computer unit, computer program product, computer-readable storage medium and electronic computer unit


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20956150

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20956150

Country of ref document: EP

Kind code of ref document: A1