CN111696010A - Scene-based training method, server, terminal device and storage medium - Google Patents
- Publication number: CN111696010A
- Application number: CN202010469630.9A
- Authority: CN (China)
- Prior art keywords: training, user, scene, duration, determining
- Prior art date: 2020-05-28
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
Abstract
The application belongs to the technical field of computers and provides a scene-based training method, a server, a terminal device and a storage medium. The scene-based training method comprises the following steps: acquiring training keywords and the scene where a user is located; determining training content according to the training keywords, and determining a training form and a training duration according to the scene where the user is located; and sending a training file corresponding to the training content, the training form and the training duration to a terminal device so as to train the user. Because the training file is determined by both the training keywords and the scene where the user is located, the obtained training file not only addresses the user's problem but also adapts to the scene where the user is located, so that the user can achieve a better training effect.
Description
Technical Field
The application belongs to the technical field of computers, and particularly relates to a scene-based training method, a server, a terminal device and a storage medium.
Background
In actual work and life, users may be unfamiliar with certain products or encounter problems with them, and need timely training to improve their understanding of these products.
Product training methods currently on the market are monotonous and the training content is relatively fixed: an electronic device typically retrieves modular training content associated with the question raised or the keywords entered by the user. Because the resulting training material is fixed, it cannot meet the user's actual needs, and a good training effect cannot be achieved.
Disclosure of Invention
In view of this, embodiments of the present application provide a scene-based training method and apparatus, a server, a terminal device, and a storage medium, so as to adapt to the training requirements of different scenes.
The first aspect of the embodiments of the present application provides a scene-based training method, which is applied to a server, and the scene-based training method includes:
acquiring training keywords sent by terminal equipment and a scene where a user is located;
determining training content according to the training keywords, and determining a training form and training duration according to the scene where the user is located;
and sending training files corresponding to the training content, the training form and the training duration to terminal equipment so as to train the user.
In a possible implementation manner of the first aspect, after determining a training form and a training duration according to a scene where the user is located, the scene-based training method further includes:
screening the training forms and the training duration according to user information to obtain screened training forms and screened training durations;
correspondingly, the sending the training files corresponding to the training content, the training form and the training duration to the terminal equipment comprises:
and sending training files corresponding to the training content, the screened training form and the screened training duration to the terminal equipment.
In a possible implementation manner of the first aspect, the acquiring training keywords sent by the terminal device and a scene where the user is located includes:
acquiring training requirements sent by terminal equipment, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
In a possible implementation manner of the first aspect, the sending, to a terminal device, a training file corresponding to the training content, the training form, and the training duration includes:
determining original training data corresponding to the training content;
and extracting training files corresponding to the training form and the training duration from the original training data, and sending the training files to terminal equipment.
In a possible implementation manner of the first aspect, the extracting training files corresponding to the training form and the training duration from the original training data includes:
determining a reference object according to the training form, and determining preset similarity according to the training duration;
extracting a target object from the original training data according to the training form, wherein the similarity between the target object and the reference object is greater than the preset similarity;
and generating a training file according to the target object.
A second aspect of an embodiment of the present application provides a scene-based training method, which is applied to a terminal device, and the scene-based training method includes:
acquiring training keywords input by a user and a scene where the user is located;
the training keywords and the scene where the user is located are sent to a server, wherein the server is used for determining training content according to the training keywords, determining a training form and training duration according to the scene where the user is located, and generating training files corresponding to the training content, the training form and the training duration;
and receiving the training file sent by the server to train the user.
In a possible implementation manner of the second aspect, the acquiring the training keyword input by the user and the scene where the user is located includes:
acquiring training requirements input by a user, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
A third aspect of an embodiment of the present application provides a scene-based training apparatus, including:
the first acquisition module is used for acquiring training keywords sent by the terminal equipment and a scene where a user is located;
the determining module is used for determining training content according to the training keywords and determining a training form and training duration according to the scene where the user is located;
and the output module is used for sending the training files corresponding to the training content, the training form and the training duration to the terminal equipment.
In a possible implementation manner of the third aspect, the scene-based training apparatus further includes:
the screening module is used for screening the training forms and the training duration according to user information to obtain screened training forms and screened training durations;
correspondingly, the output module is specifically configured to:
and sending training files corresponding to the training content, the screened training form and the screened training duration to terminal equipment.
In a possible implementation manner of the third aspect, the first obtaining module is specifically configured to:
acquiring training requirements sent by terminal equipment, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
In a possible implementation manner of the third aspect, the output module includes:
the determining unit is used for determining original training data corresponding to the training content;
and the extracting unit is used for extracting training files corresponding to the training form and the training duration from the original training data and sending the training files to terminal equipment.
In a possible implementation manner of the third aspect, the extracting unit is specifically configured to:
determining a reference object according to the training form, and determining preset similarity according to the training duration;
extracting a target object from the original training data according to the training form, wherein the similarity between the target object and the reference object is greater than the preset similarity;
and generating a training file according to the target object.
A fourth aspect of an embodiment of the present application provides a scene-based training apparatus, including:
the second acquisition module is used for acquiring training keywords input by the user and the scene where the user is located;
the sending module is used for sending the training keywords and the scene where the user is located to a server, wherein the server is used for determining training content according to the training keywords, determining a training form and training duration according to the scene where the user is located, and generating training files corresponding to the training content, the training form and the training duration;
and the receiving module is used for receiving the training file sent by the server so as to train the user.
In a possible implementation manner of the fourth aspect, the second obtaining module is specifically configured to:
acquiring training requirements input by a user, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
A fifth aspect of embodiments of the present application provides a server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
A sixth aspect of embodiments of the present application provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to the second aspect.
A seventh aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the method according to the first aspect or the second aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: the server acquires the training keywords and the scene where the user is located, determines training content according to the training keywords, determines a training form and a training duration according to the scene where the user is located, and sends a training file corresponding to the training content, the training form and the training duration to the terminal device so as to train the user. Because the training file is determined by both the training keywords and the scene where the user is located, the obtained training file can address the user's problem while remaining adapted to the scene where the user is located, so that the user can achieve a better training effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
FIG. 1 is a diagram illustrating an application scenario of a scenario-based training method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating an implementation of a scenario-based training method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating sub-steps of a scenario-based training method provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating an implementation of a scenario-based training method according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a scenario-based training apparatus provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a scenario-based training apparatus provided in another embodiment of the present application;
FIG. 7 is a schematic diagram of a server provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Referring to fig. 1, fig. 1 shows a scene-based training system provided in an embodiment of the present application. The training system includes a server 100 and a terminal device 200: the server 100 is configured to send a training file to the terminal device 200, and the terminal device 200 is configured to display the training file to train the user. The terminal device may be a mobile phone, a tablet, a desktop computer, a palmtop computer, or the like.
The existing training method generally sends a fixed training file to the user according to the user's question. However, in different scenes the user needs a different depth of understanding of the product and has a different amount of time available, and the influence of the surrounding environment on the training effect also differs; if the training file is fixed, it cannot meet the user's training requirements in different scenes.
In the embodiment of the application, the terminal device 200 acquires the training keywords input by the user and the scene where the user is located, and sends them to the server 100. The server 100 determines training content according to the training keywords, determines a training form and a training duration according to the scene where the user is located, and sends a training file corresponding to the training content, the training form and the training duration to the terminal device 200 so as to train the user. Because the training file is determined by both the training keywords and the scene where the user is located, the training file can address the user's problem while remaining adapted to the scene where the user is located, so that the user achieves a better training effect.
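By way of illustration only, the following Python sketch shows one possible shape of this terminal-to-server exchange. The application does not prescribe a transport, so the HTTP endpoint, the JSON field names and the response layout below are all assumptions, not part of the original disclosure.

```python
# Hypothetical terminal-side request: the application does not specify a
# protocol, so the endpoint and all field names here are assumptions.
import json
import urllib.request

def request_training_file(server_url, keywords, scene):
    """Send training keywords and the user's scene; receive a training file descriptor."""
    payload = json.dumps({"keywords": keywords, "scene": scene}).encode("utf-8")
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # e.g. {"form": "text+picture", "duration_min": [5, 10], "file_url": "..."}
        return json.load(resp)

# Example call (hypothetical endpoint):
# request_training_file("http://server.example/train",
#                       ["Audi Q7", "engine fault"], "self-driving tour")
```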
Referring to fig. 2, fig. 2 shows a scenario-based training method provided in an embodiment of the present application, where an execution subject of the method of the present embodiment is a server, and the method includes:
S101: acquiring training keywords sent by the terminal device and the scene where the user is located.
The training keywords are keywords of the problem the user needs to solve or of the content the user needs to learn. For example, if the user encounters an engine fault while driving and needs the engine-fault diagnosis procedure for the corresponding vehicle model, the training keywords sent by the terminal device may be "Audi Q7", "engine fault", "engine diagnosis" or "engine". The scene where the user is located may be information such as the user's position, state or surrounding environment. For vehicle diagnosis, for example, the scene may be in the middle of a self-driving tour, working in a maintenance shop, first use of the vehicle, and the like.
In a possible implementation manner, the server acquires a training requirement sent by the terminal device, where the training requirement comprises at least one of text information, image information and voice information, and performs feature extraction on that information to identify the product concerned, the user's position, the user's environment and so on, thereby determining the scene where the user is located. For example, the server extracts geographical position information from the text information to determine the scene where the user is located; as another example, the server extracts a product image from a picture or video, determines the product model or the product's running state from the image, and determines the scene where the user is located accordingly.
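As a minimal sketch of this step (not part of the original disclosure), the mapping from extracted features to a scene label might look as follows; the actual feature extractors (speech recognition, image classification, geocoding) are assumed to run upstream, and all rule values and labels are invented to mirror the vehicle-diagnosis examples above.

```python
# Illustrative scene determination from features already extracted from the
# text/image/voice in the training requirement. All rules are assumptions.
def determine_scene(features: dict) -> str:
    location = str(features.get("location", "")).lower()
    product_state = features.get("product_state")   # e.g. from a product image
    if "highway" in location or "expressway" in location:
        return "self-driving tour"    # little time available, visual training
    if "repair shop" in location or "maintenance" in location:
        return "maintenance shop"     # longer, detailed training is feasible
    if product_state == "first use":
        return "initial use"
    return "unknown"

# determine_scene({"location": "G2 highway", "product_state": "fault"})
# -> "self-driving tour"
```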
S102: and determining training content according to the training keywords, and determining a training form and training duration according to the scene where the user is located.
The server determines the user problem to be solved according to the training keywords, and determines the training content according to that problem. The training form refers to the presentation form of the training material, such as video, voice, text or pictures. Different training forms have different advantages: text can be absorbed quickly and lays out steps clearly; pictures are intuitive and are generally paired with text; video explains clearly and lets the user learn while following along, but takes longer and is inconvenient to replay and to seek within. In the embodiment of the application, the server determines the training form and the training duration according to a stored correspondence between the scene where the user is located and the training form and training duration. For example, if the training content determined from the training keywords is Audi Q7 engine-fault diagnosis material, and the scene is a self-driving tour on a highway, the time available to the user is short and the training must be intuitive and clear; the training duration is therefore set to 5 to 10 minutes, and the training form to image-text combination or pictures.
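The correspondence mentioned above can be pictured as a simple lookup table. The sketch below is one assumed encoding: only the self-driving-tour row follows the Audi Q7 example in the text, and the other rows are illustrative assumptions.

```python
# Assumed encoding of the scene -> (training forms, duration in minutes)
# correspondence. Only the "self-driving tour" row comes from the example
# above; the remaining rows are invented for illustration.
SCENE_RULES = {
    "self-driving tour": (["text+picture", "picture"], (5, 10)),
    "maintenance shop":  (["video"],                   (20, 40)),
    "initial use":       (["video", "text+picture"],   (10, 20)),
}

def form_and_duration(scene: str):
    # Fall back to a short text form when the scene is not recognized.
    return SCENE_RULES.get(scene, (["text"], (5, 10)))
```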
Because different users differ in their familiarity with the product, their interests and their attention span, applying the same training form and training duration to every user would impair the training effect. In a possible implementation manner, the server further obtains user information input by the user, including the user's age, identity, education background, interests and the like. After the training form and training duration have been determined from the scene where the user is located, the server screens them according to the user information to obtain the screened training form and the screened training duration. For example, if the duration determined from the scene is 5 to 10 minutes and the form is image-text combination or pictures, but the user is a novice driver, then to ensure that the user can complete the fault diagnosis successfully, the duration is further screened to more than 8 minutes and the form to image-text combination.
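A minimal sketch of this screening step follows, reproducing the novice-driver example; the user-information fields and the 8-minute threshold are taken from the text, while everything else is an assumption.

```python
# Screening the candidate forms and duration against user information.
# Only the "novice driver" rule follows the example above.
def screen(forms: list, duration: tuple, user: dict):
    low, high = duration
    if user.get("identity") == "novice driver":
        low = max(low, 8)                          # ensure at least 8 minutes
        forms = [f for f in forms if f == "text+picture"]
    return forms, (low, high)

# screen(["text+picture", "picture"], (5, 10), {"identity": "novice driver"})
# -> (["text+picture"], (8, 10))
```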
S103: and sending training files corresponding to the training content, the training form and the training duration to terminal equipment so as to train the user.
Continuing with the foregoing implementation, the server determines the training file corresponding to the training content, the screened training form and the screened training duration, and sends it to the terminal device; the terminal device plays the training file according to the training form and training duration so as to train the user.
In a possible implementation manner, the server determines the original training data corresponding to the training content (for example, a video file with subtitles and voice), extracts from it the training file corresponding to the training form and training duration, and sends the extracted training file to the terminal device. For example, if the training form is video, the server extracts a video segment matching the training duration from the original training data as the training file; if the training form is voice, the server extracts a voice segment matching the training duration instead. The training forms are thereby diversified, meeting the user's training requirements in different scenes.
As shown in fig. 3, in one possible implementation, extracting the training file corresponding to the training form and the training duration from the original training data includes S201 to S203.
S201: and determining a reference object according to the training form, and determining preset similarity according to the training duration.
The reference object comprises at least one of a reference picture, reference text and reference voice. Specifically, each item of original training data corresponds to several reference objects, which are pictures, text or voice extracted from the original training data in advance; the server takes the picture, text or voice corresponding to the training form as the reference object. The server stores in advance the correspondence among the reference object, the training duration and the preset similarity, and determines the preset similarity from the determined reference object and the training duration.
S202: and extracting a target object from the original training material according to the training form, wherein the similarity between the target object and the reference object is greater than the preset similarity.
Specifically, the server extracts from the original training data, according to the training form, the video, voice, text or pictures whose similarity to the reference object is greater than the preset similarity. For example, if the training form includes video, the reference object is a picture, and the server takes as target objects the video segments in the original training data whose similarity to the reference object exceeds the preset similarity, where each video segment has been split in advance from the video file of the original training data to ensure the integrity and accuracy of the training file. If the training form includes voice, the reference object is voice, and the server takes as target objects the voice segments whose similarity to the reference object exceeds the preset similarity, each voice segment having been split in advance from the voice file of the original training data. If the training form includes text, the reference object is a word or sentence, and the server takes as target objects the sentences in the subtitle file of the original training data whose similarity to the reference object exceeds the preset similarity. If the training form includes pictures, the reference object is a picture, and the server takes as target objects the video frames in the original training data whose similarity to the reference object exceeds the preset similarity, and extracts them.
S203: and generating a training file according to the target object.
Specifically, the server combines the target objects in the chronological order in which they appear in the original training data to generate the training file, so as to meet the user's training requirement in the current scene.
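Taken together, S201 to S203 can be sketched as the pipeline below, assuming the original training data has already been pre-segmented as described above. The Segment structure, the similarity() placeholder and the threshold values are assumptions, since the application does not fix a concrete similarity measure.

```python
# Sketch of S201-S203 over pre-segmented original training data.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float    # offset in the original training data, in seconds
    kind: str       # "video", "voice", "text" or "picture"
    data: bytes

def similarity(a: bytes, b: bytes) -> float:
    """Placeholder for a real feature-based similarity measure."""
    raise NotImplementedError

def preset_similarity(duration: tuple) -> float:
    # S201 (second half): a shorter training duration implies a stricter
    # threshold, so fewer segments survive. The values are assumed.
    _, high = duration
    return 0.9 if high <= 10 else 0.75

def build_training_file(segments, form, reference, duration):
    threshold = preset_similarity(duration)
    # S202: keep segments of the requested form whose similarity to the
    # reference object exceeds the preset similarity.
    targets = [s for s in segments
               if s.kind == form and similarity(s.data, reference) > threshold]
    # S203: recombine the target objects in the order in which they appear
    # in the original training data.
    return sorted(targets, key=lambda s: s.start)
```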
In this embodiment, the server acquires the training keywords sent by the terminal device and the scene where the user is located; determines the training content according to the training keywords, and determines a training form and a training duration according to the scene where the user is located; and sends the training file corresponding to the training content, the training form and the training duration to the terminal device so as to train the user. Because the training file is determined by both the training keywords and the scene where the user is located, it can address the user's problem while being adapted to the scene where the user is located, so that the user achieves a better training effect.
Referring to fig. 4, fig. 4 shows a scene-based training method according to another embodiment of the present application, where an execution subject of the method of the embodiment is a terminal device, and the method includes:
S301: acquiring training keywords input by the user and the scene where the user is located.
In one possible implementation, the user inputs the training keywords and the scene where the user is located on the terminal device by voice or by text.
In another possible implementation manner, the terminal device obtains a training requirement input by the user, where the training requirement includes at least one of text information, image information and voice information, and performs feature extraction on that information to identify the product concerned, the user's position, the user's environment and so on, thereby determining the scene where the user is located. For example, the terminal device extracts geographical position information from the text information to determine the scene where the user is located; as another example, the terminal device extracts a product image from a picture or video, determines the product model or the product's running state from the image, and determines the scene where the user is located accordingly.
S302: and sending the training keywords and the scene where the user is located to a server, wherein the server is used for determining training content according to the training keywords and determining a training form and training duration according to the scene where the user is located.
The method for determining the training content by the server according to the training keyword and determining the training form and the training duration according to the scene where the user is located is the same as that of S102 in the above embodiment, and details are not repeated here.
S303: and receiving a training file which is sent by a server and corresponds to the training content, the training form and the training duration so as to train the user.
Specifically, the terminal device receives the training file sent by the server and plays or displays it according to the training form and the training duration so as to train the user.
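As an illustrative sketch of this terminal-side step (the application names no media framework), playback could be dispatched on the training form as follows; the print statements are placeholders for real video, audio and page renderers.

```python
# Terminal-side dispatch on the training form; renderers are stubbed out.
def play_training_file(training_file: dict) -> None:
    form = training_file.get("form")
    if form == "video":
        print(f"play video for up to {training_file['duration']} minutes")
    elif form == "voice":
        print("play audio narration")
    elif form in ("text", "picture", "text+picture"):
        print("render pages of text and pictures")
    else:
        print("unsupported training form:", form)

# play_training_file({"form": "text+picture", "duration": 8})
```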
In this embodiment, the terminal device obtains the training keywords input by the user and the scene where the user is located, sends them to the server, and receives the training file returned by the server. The training file is generated by the server from the training content, the training form and the training duration, where the training content is determined by the training keywords and the training form and duration are determined by the scene where the user is located. The training file obtained by the terminal device therefore not only addresses the user's problem but also suits the scene where the user is located, so that the user achieves a better training effect.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the scene-based training method described in the above embodiment, fig. 5 and fig. 6 respectively show a structural block diagram of the scene-based training apparatus provided in the embodiment of the present application, and for convenience of explanation, only the parts related to the embodiment of the present application are shown.
As shown in fig. 5, a scene-based training apparatus provided in an embodiment of the present application includes:
the first acquisition module 10 is used for acquiring training keywords sent by the terminal equipment and a scene where a user is located;
the determining module 20 is configured to determine training content according to the training keywords, and determine a training form and training duration according to a scene where the user is located;
and the output module 30 is used for sending the training files corresponding to the training content, the training form and the training duration to the terminal equipment so as to train the user.
In one possible implementation, the scene-based training apparatus further includes:
the screening module is used for screening the training forms and the training duration according to user information to obtain screened training forms and screened training durations;
correspondingly, the output module 30 is specifically configured to:
and sending training files corresponding to the training content, the screened training form and the screened training duration to terminal equipment.
In a possible implementation manner, the first obtaining module 10 is specifically configured to:
acquiring training requirements sent by terminal equipment, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
In one possible implementation, the output module 30 includes:
the determining unit is used for determining original training data corresponding to the training content;
and the extracting unit is used for extracting training files corresponding to the training form and the training duration from the original training data and sending the training files to terminal equipment.
In a possible implementation manner, the extracting unit is specifically configured to:
determining a reference object according to the training form, and determining preset similarity according to the training duration;
extracting a target object from the original training data according to the training form, wherein the similarity between the target object and the reference object is greater than the preset similarity;
and generating a training file according to the target object.
As shown in fig. 6, a scene-based training apparatus provided in another embodiment of the present application includes:
the second obtaining module 40 is used for obtaining training keywords input by the user and scenes where the user is located;
a sending module 50, configured to send the training keywords and the scene where the user is located to a server, where the server is configured to determine training content according to the training keywords, determine a training form and training duration according to the scene where the user is located, and generate a training file corresponding to the training content, the training form, and the training duration;
and the receiving module 60 is used for receiving the training file sent by the server so as to train the user.
In a possible implementation manner, the second obtaining module 40 is specifically configured to:
acquiring training requirements input by a user, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 7 is a schematic diagram of a server provided in an embodiment of the present application. As shown in fig. 7, the server of this embodiment includes: a processor 11, a memory 12 and a computer program 13 stored in said memory 12 and executable on said processor 11. The processor 11, when executing the computer program 13, implements:
acquiring training keywords sent by terminal equipment and a scene where a user is located;
determining training content according to the training keywords, and determining a training form and training duration according to the scene where the user is located;
and sending training files corresponding to the training content, the training form and the training duration to terminal equipment so as to train the user.
In one possible implementation, the processor 11, when executing the computer program 13, further implements:
screening the training forms and the training duration according to user information to obtain screened training forms and screened training durations;
and sending training files corresponding to the training content, the screened training form and the screened training duration to terminal equipment.
In one possible implementation, the processor 11, when executing the computer program 13, further implements:
acquiring training requirements sent by terminal equipment, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
In one possible implementation, the processor 11, when executing the computer program 13, further implements:
determining original training data corresponding to the training content;
and extracting training files corresponding to the training form and the training duration from the original training data, and sending the training files to terminal equipment.
In one possible implementation, the processor 11, when executing the computer program 13, further implements:
determining a reference object according to the training form, and determining preset similarity according to the training duration;
extracting a target object from the original training data according to the training form, wherein the similarity between the target object and the reference object is greater than the preset similarity;
and generating a training file according to the target object.
Illustratively, the computer program 13 may be partitioned into one or more modules/units, which are stored in the memory 12 and executed by the processor 11 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 13 in the server.
Those skilled in the art will appreciate that fig. 7 is merely an example of a server and does not constitute a limitation; the server may include more or fewer components than those shown, combine certain components, or use different components. For example, the server may also include input and output devices, network access devices, buses, and the like.
The Processor 11 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 12 may be an internal storage unit of the server, such as a hard disk or a memory of the server. The memory 12 may also be an external storage device of the server, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory Card (Flash Card) provided on the server. Further, the memory 12 may include both an internal storage unit of the server and an external storage device. The memory 12 is used for storing the computer program and other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.
Fig. 8 is a schematic diagram of a terminal device provided in an embodiment of the present application. As shown in fig. 8, the terminal device of this embodiment includes: a processor 21, a memory 22 and a computer program 23 stored in said memory 22 and executable on said processor 21. The processor 21, when executing the computer program 23, implements:
acquiring training keywords input by a user and a scene where the user is located;
the training keywords and the scene where the user is located are sent to a server, wherein the server is used for determining training content according to the training keywords, determining a training form and training duration according to the scene where the user is located, and generating training files corresponding to the training content, the training form and the training duration;
and receiving the training file sent by the server to train the user.
In one possible implementation, the processor 21, when executing the computer program 23, further implements:
acquiring training requirements input by a user, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
Illustratively, the computer program 23 may be partitioned into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 23 in the terminal device.
Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device and does not constitute a limitation; the terminal device may include more or fewer components than those shown, combine certain components, or use different components. For example, the terminal device may also include input and output devices, network access devices, buses, and the like.
The Processor 21 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 22 may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory 22 may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device. Further, the memory 22 may also include both an internal storage unit and an external storage device of the terminal device. The memory 22 is used for storing the computer program and other programs and data required by the terminal device. The memory 22 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, etc.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (10)
1. A scene-based training method is applied to a server and comprises the following steps:
acquiring training keywords sent by terminal equipment and a scene where a user is located;
determining training content according to the training keywords, and determining a training form and training duration according to the scene where the user is located;
and sending training files corresponding to the training content, the training form and the training duration to the terminal equipment so as to train the user.
2. The scene-based training method according to claim 1, wherein after determining a training form and a training duration according to the scene where the user is located, the method further comprises:
screening the training forms and the training duration according to user information to obtain screened training forms and screened training durations;
correspondingly, the sending the training files corresponding to the training content, the training form and the training duration to the terminal equipment comprises:
and sending training files corresponding to the training content, the screened training form and the screened training duration to the terminal equipment.
3. The scene-based training method according to claim 1, wherein the acquiring of the training keywords sent by the terminal device and the scene where the user is located comprises:
acquiring training requirements sent by terminal equipment, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
4. The scene-based training method according to claim 1, wherein the transmitting a training file corresponding to the training content, the training form, and the training duration to the terminal device includes:
determining original training data corresponding to the training content;
and extracting training files corresponding to the training form and the training duration from the original training data, and sending the training files to the terminal equipment.
5. The scene-based training method according to claim 4, wherein the extracting training files corresponding to the training form and the training duration from the original training data comprises:
determining a reference object according to the training form, and determining preset similarity according to the training duration;
extracting a target object from the original training data according to the training form, wherein the similarity between the target object and the reference object is greater than the preset similarity;
and generating a training file according to the target object.
6. A scene-based training method is applied to terminal equipment and comprises the following steps:
acquiring training keywords input by a user and a scene where the user is located;
the training keywords and the scene where the user is located are sent to a server, wherein the server is used for determining training content according to the training keywords, determining a training form and training duration according to the scene where the user is located, and generating training files corresponding to the training content, the training form and the training duration;
and receiving the training file sent by the server to train the user.
7. The scene-based training method according to claim 6, wherein the acquiring of the training keywords input by the user and the scene where the user is located comprises:
acquiring training requirements input by a user, wherein the training requirements comprise text information, and/or image information, and/or voice information;
and determining a training keyword and a scene where the user is located according to the training requirement.
8. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 6 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5 or 6 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010469630.9A (CN111696010A) | 2020-05-28 | 2020-05-28 | Scene-based training method, server, terminal device and storage medium |

Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN111696010A | 2020-09-22 |
Family
ID=72478517
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010469630.9A | Scene-based training method, server, terminal device and storage medium | 2020-05-28 | 2020-05-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111696010A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BR9703210A (en) * | 1997-05-07 | 1998-12-22 | Viasoft Informatica Ltda Me | Computer interactive business training system |
CN107766482A (en) * | 2017-10-13 | 2018-03-06 | 北京猎户星空科技有限公司 | Information pushes and sending method, device, electronic equipment, storage medium |
CN107909338A (en) * | 2017-10-30 | 2018-04-13 | 平安科技(深圳)有限公司 | Training Management method, apparatus, computer equipment and storage medium |
CN108093044A (en) * | 2017-12-15 | 2018-05-29 | 中广热点云科技有限公司 | Training courseware playing method and system |
CN108305629A (en) * | 2017-12-25 | 2018-07-20 | 广东小天才科技有限公司 | Scene learning content acquisition method and device, learning equipment and storage medium |
CN108039092A (en) * | 2018-01-02 | 2018-05-15 | 北京易驾佳信息科技有限公司 | Displaying teaching method, device and electronic equipment |
CN108536672A (en) * | 2018-03-12 | 2018-09-14 | 平安科技(深圳)有限公司 | Intelligent robot Training Methodology, device, computer equipment and storage medium |
CN109347980A (en) * | 2018-11-23 | 2019-02-15 | 网易有道信息技术(北京)有限公司 | Method for presenting and pushing information, medium, device and computing device |
CN109871438A (en) * | 2019-01-28 | 2019-06-11 | 平安科技(深圳)有限公司 | Problem answers recommended method, device, storage medium and server |
CN109978359A (en) * | 2019-03-18 | 2019-07-05 | 重庆替比网络科技有限公司 | Vocational training system, method and business model |
CN110660286A (en) * | 2019-09-05 | 2020-01-07 | 北京安锐卓越信息技术股份有限公司 | Intelligent ecological marketing energizing training platform |
Non-Patent Citations (1)
Title |
---|
吴昊天 (WU Haotian): "《精品课堂》" (Quality Classroom) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116170870A (en) * | 2023-04-21 | 2023-05-26 | Tcl通讯科技(成都)有限公司 | Network registration method and device, storage medium and electronic equipment |
CN116170870B (en) * | 2023-04-21 | 2023-08-11 | Tcl通讯科技(成都)有限公司 | Network registration method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10621972B2 (en) | Method and device extracting acoustic feature based on convolution neural network and terminal device | |
CN110321958B (en) | Training method of neural network model and video similarity determination method | |
US20190026367A1 (en) | Navigating video scenes using cognitive insights | |
CN109474847B (en) | Search method, device and equipment based on video barrage content and storage medium | |
CN110381368A (en) | Video cover generation method, device and electronic equipment | |
US10110933B2 (en) | Video file processing | |
CN111368562A (en) | Method and device for translating characters in picture, electronic equipment and storage medium | |
CN110221747B (en) | Presentation method of e-book reading page, computing device and computer storage medium | |
CN107948730B (en) | Method, device and equipment for generating video based on picture and storage medium | |
CN109697245A (en) | Voice search method and device based on video web page | |
CN110349161B (en) | Image segmentation method, image segmentation device, electronic equipment and storage medium | |
CN111797345B (en) | Application page display method, device, computer equipment and storage medium | |
EP3885934A1 (en) | Video search method and apparatus, computer device, and storage medium | |
CN111696010A (en) | Scene-based training method, server, terminal device and storage medium | |
CN109740094A (en) | Page monitoring method, equipment and computer storage medium | |
CN110543449A (en) | chat record searching method based on AR equipment | |
CN110223718B (en) | Data processing method, device and storage medium | |
CN116434000A (en) | Model training and article classification method and device, storage medium and electronic equipment | |
CN109460511B (en) | Method and device for acquiring user portrait, electronic equipment and storage medium | |
CN112699687A (en) | Content cataloging method and device and electronic equipment | |
CN111352772A (en) | External memory card processing method and related product | |
CN114154003B (en) | Picture acquisition method and device and electronic equipment | |
CN117641004B (en) | Short video recommendation method and device, electronic equipment and storage medium | |
CN111641867B (en) | Video output method, device, electronic equipment and storage medium | |
CN110392313B (en) | Method, system, medium and electronic device for displaying specific voice comments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200922 |