
CN111563514A - Three-dimensional character display method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111563514A
Authority
CN
China
Prior art keywords: dimensional character, display, characters, augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010411695.8A
Other languages
Chinese (zh)
Other versions
CN111563514B (en)
Inventor
Cui Ying (崔颖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010411695.8A priority Critical patent/CN111563514B/en
Publication of CN111563514A publication Critical patent/CN111563514A/en
Application granted granted Critical
Publication of CN111563514B publication Critical patent/CN111563514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 — Character recognition
    • G06V 30/14 — Image acquisition
    • G06V 30/148 — Segmentation of character regions
    • G06V 30/153 — Segmentation of character regions using recognition of characters or words
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 — Teaching not covered by other main groups of this subclass
    • G09B 5/00 — Electrically-operated educational appliances
    • G09B 5/02 — Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of this application relate to the field of computer technology and disclose a three-dimensional character display method and device, an electronic device, and a storage medium. The method comprises: acquiring selected learning content; recognizing each two-dimensional character contained in the learning content; acquiring the three-dimensional character corresponding to each of those two-dimensional characters; and, after detecting that any target two-dimensional character among them has been written, displaying the three-dimensional character corresponding to that target character. Implementing these embodiments improves human-computer interaction while a student writes learning content (such as words), which helps raise the student's enthusiasm and interest in writing it.

Description

Three-dimensional character display method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a three-dimensional character display method and device, electronic equipment and a storage medium.
Background
At present, many students copy words (such as English words) on paper to help memorize them. However, copying words on paper is monotonous, and it is difficult to sustain students' enthusiasm and interest in writing them.
Disclosure of Invention
The embodiment of the application discloses a three-dimensional character display method and device, electronic equipment and a storage medium, which are beneficial to improving the enthusiasm and interest of students when writing learning contents (such as words).
A first aspect of an embodiment of the present application discloses a method for displaying a three-dimensional character, where the method includes:
acquiring the selected learning content;
recognizing each two-dimensional character contained in the learning content;
acquiring a three-dimensional character corresponding to each two-dimensional character in each two-dimensional character;
and displaying a three-dimensional character corresponding to any target two-dimensional character in the two-dimensional characters after detecting that the target two-dimensional character is written.
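The four claimed steps can be sketched in code. This is a minimal illustrative sketch, not the patent's implementation: the lookup table and all function names (`recognize_characters`, `get_3d_characters`, `on_character_written`, the `model_*.glb` asset names) are hypothetical.

```python
# Hypothetical sketch of the four claimed steps, assuming a lookup table
# maps each two-dimensional character to a pre-built 3D model asset.
THREE_D_MODELS = {c: f"model_{c}.glb" for c in "abcdefghijklmnopqrstuvwxyz"}

def recognize_characters(learning_content: str) -> list[str]:
    """Step 2: split the selected learning content into 2D characters."""
    return [c for c in learning_content if c.isalpha()]

def get_3d_characters(chars: list[str]) -> dict[str, str]:
    """Step 3: fetch the 3D character corresponding to each 2D character."""
    return {c: THREE_D_MODELS[c.lower()] for c in chars}

def on_character_written(target: str, models: dict[str, str]) -> str:
    """Step 4: once a target 2D character is detected as written,
    return the 3D character (model) to display."""
    return models[target]

chars = recognize_characters("ruler")      # ['r', 'u', 'l', 'e', 'r']
models = get_3d_characters(chars)
shown = on_character_written("r", models)  # 'model_r.glb'
```

The sketch deliberately separates recognition (step 2) from model lookup (step 3), mirroring the claim structure.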
With reference to the first aspect of the embodiments of the present application, in some optional embodiments, after it is detected that any target two-dimensional character in the respective two-dimensional characters is written, displaying a three-dimensional character corresponding to the target two-dimensional character includes:
after any target two-dimensional character in the two-dimensional characters is detected to be written, identifying the writing position of the target two-dimensional character;
and displaying the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality mode.
With reference to the first aspect of the embodiment of the present application, in some optional embodiments, after the displaying the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality manner, the method further includes:
judging whether the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in the augmented reality mode or not, and if so, detecting whether the learning content is associated with an object to be unlocked or not;
if the learning content is associated with the object to be unlocked, acquiring the appointed display posture of the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured on the object to be unlocked;
determining an instant display gesture of the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters when the three-dimensional character is displayed in the augmented reality mode;
and checking whether the instant display posture of the three-dimensional character corresponding to any of the two-dimensional characters, when displayed in the augmented reality manner, differs from the appointed display posture of that character configured on the object to be unlocked; if no such difference exists, unlocking the object to be unlocked.
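The unlock check above can be sketched as follows. This is an assumption-laden illustration: the patent does not specify a pose representation, so poses are simplified here to (yaw, pitch, roll) Euler angles with a hypothetical tolerance, and all names are illustrative.

```python
# Hypothetical sketch of the unlock check: every 3D character's instant
# (current) display pose must match the appointed pose configured on the
# object to be unlocked. Pose representation and tolerance are assumptions.
def poses_match(instant, appointed, tol=5.0):
    """Compare two (yaw, pitch, roll) poses within a tolerance in degrees."""
    return all(abs(a - b) <= tol for a, b in zip(instant, appointed))

def try_unlock(instant_poses: dict, appointed_poses: dict) -> bool:
    """Unlock only if no character's instant pose differs from its
    appointed pose (the 'check whether any differs' step)."""
    return all(
        poses_match(instant_poses[c], appointed_poses[c])
        for c in appointed_poses
    )

appointed = {"r": (0, 0, 0), "u": (0, 90, 0)}
assert try_unlock({"r": (1, 0, 0), "u": (0, 88, 0)}, appointed)       # within tolerance
assert not try_unlock({"r": (45, 0, 0), "u": (0, 90, 0)}, appointed)  # 'r' pose is off
```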
In combination with the first aspect of the embodiments of the present application, in some optional embodiments, the method further includes:
if it is verified that the instant display posture of the three-dimensional character corresponding to at least one two-dimensional character, when displayed in the augmented reality manner, differs from the appointed display posture of that character configured on the object to be unlocked, prompting the user to adjust that instant display posture through gesture control; and, after detecting that the adjustment of the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character is complete, executing again the step of checking whether the instant display posture of any three-dimensional character differs from its appointed display posture configured on the object to be unlocked.
With reference to the first aspect of the embodiment of the present application, in some optional embodiments, after verifying that the instant display posture of no three-dimensional character, when displayed in the augmented reality manner, differs from the appointed display posture of the corresponding two-dimensional character configured on the object to be unlocked, the method further includes:
outputting a spoken language evaluation task, wherein the spoken language evaluation task requires that the learning content be read aloud in a spoken language mode;
collecting reading pronunciation when the learning content is read aloud in a spoken language mode;
comparing the reading pronunciation with standard pronunciation of the learning content, thereby obtaining the spoken language assessment skill level;
and judging whether the spoken language evaluation skill level is higher than a specified level, and if so, executing the step of unlocking the object to be unlocked.
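The spoken-language evaluation steps above can be sketched as follows. The phoneme-by-phoneme similarity score is a stand-in for whatever speech-assessment engine an implementation would actually use; the function names, phoneme notation, and the 0.8 threshold are all assumptions for illustration.

```python
# Hypothetical sketch of the spoken-language evaluation: compare the
# collected reading pronunciation with the standard pronunciation of the
# learning content and unlock only when the resulting skill level is
# higher than a specified level.
def similarity(reading: list[str], standard: list[str]) -> float:
    """Fraction of phonemes in the standard pronunciation that were matched."""
    matches = sum(1 for r, s in zip(reading, standard) if r == s)
    return matches / max(len(standard), 1)

def assess(reading, standard, specified_level=0.8) -> bool:
    """Return True (proceed to unlock) when the skill level exceeds the specified level."""
    return similarity(reading, standard) > specified_level

standard = ["r", "uː", "l", "ə", "r"]          # assumed phonemes for "ruler"
assert assess(["r", "uː", "l", "ə", "r"], standard)      # perfect reading
assert not assess(["r", "uː", "t", "ə", "d"], standard)  # 3/5 is below 0.8
```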
A second aspect of the embodiments of the present application discloses a display device for three-dimensional characters, including:
a first acquisition unit configured to acquire the selected learning content;
a recognition unit configured to recognize each two-dimensional character included in the learning content;
the second acquisition unit is used for acquiring a three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters;
and the display unit is used for displaying the three-dimensional character corresponding to any target two-dimensional character in the two-dimensional characters after detecting that the target two-dimensional character is written.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the display unit includes:
the recognition subunit is used for recognizing the writing position of any target two-dimensional character in the two-dimensional characters after the writing of the target two-dimensional character is detected;
and the display subunit is used for displaying the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality mode.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the display device further includes:
the first judging unit is used for judging whether the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in the augmented reality mode or not after the display subunit displays the three-dimensional characters corresponding to the target two-dimensional characters at the writing position in the augmented reality mode;
the detection unit is used for detecting whether the learning content is associated with an object to be unlocked when the first judgment unit judges that the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in the augmented reality mode;
a third obtaining unit, configured to, when the detecting unit detects that the learning content is associated with an object to be unlocked, obtain a specified display posture of a three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured for the object to be unlocked;
the determining unit is used for determining the instant display posture when the three-dimensional character corresponding to each two-dimensional character in each two-dimensional character is displayed in the augmented reality mode;
the verification unit is used for verifying whether the instant display posture of any three-dimensional character corresponding to the two-dimensional character when displayed in the augmented reality mode is different from the appointed display posture of the three-dimensional character corresponding to the two-dimensional character configured on the object to be unlocked;
and the unlocking unit is used for unlocking the object to be unlocked when the verification unit verifies that no three-dimensional character's instant display posture, when displayed in the augmented reality manner, differs from the appointed display posture of the corresponding two-dimensional character configured on the object to be unlocked.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the display device further includes:
a control unit, configured to: when the verification unit verifies that the instant display posture of the three-dimensional character corresponding to at least one two-dimensional character, when displayed in the augmented reality manner, differs from the appointed display posture of that character configured on the object to be unlocked, prompt the user to adjust that instant display posture through gesture control; and, after detecting that the adjustment is complete, trigger the verification unit to check again whether the instant display posture of any three-dimensional character differs from its appointed display posture configured on the object to be unlocked.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the display device further includes:
the output unit is used for outputting a spoken language evaluation task when the verification unit verifies that no three-dimensional character's instant display posture, when displayed in the augmented reality manner, differs from the appointed display posture of the corresponding two-dimensional character configured on the object to be unlocked; the spoken language evaluation task requires the learning content to be read aloud;
a collecting unit for collecting reading voices when the learning content is read aloud in a spoken language manner;
a comparison unit, configured to compare the reading pronunciation with a standard pronunciation of the learning content, so as to obtain the spoken language assessment skill level;
and the second judgment unit is used for judging whether the spoken language evaluation skill level is higher than a specified level and, if so, for triggering the unlocking unit to unlock the object to be unlocked, in the case where the verification unit has verified that no three-dimensional character's instant display posture, when displayed in the augmented reality manner, differs from the appointed display posture of the corresponding two-dimensional character configured on the object to be unlocked.
A third aspect of the embodiments of the present application discloses an electronic device, including a display device for three-dimensional characters described in any optional embodiment of the second aspect or the second aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute all or part of the steps of the method for displaying the three-dimensional character described in the first aspect or any optional embodiment of the first aspect of the embodiments of the present application.
In a fifth aspect of the embodiments of the present application, a computer-readable storage medium has computer instructions stored thereon, where the computer instructions, when executed, cause a computer to perform all or part of the steps of the method for displaying a three-dimensional character described in the first aspect or any optional embodiment of the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
in the embodiments of this application, after the two-dimensional characters contained in the selected learning content (such as words) are identified, the three-dimensional character corresponding to each of those two-dimensional characters can be obtained; then, after any target two-dimensional character is detected as written, the three-dimensional character corresponding to it is displayed. This improves human-computer interaction while a student writes learning content (such as words), which in turn helps raise the student's enthusiasm and interest in writing it.
Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the embodiments are briefly described below. The following drawings cover only some embodiments of this application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of a method for displaying three-dimensional characters disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a method for displaying three-dimensional characters according to a second embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a display method of a three-dimensional character according to a third embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a first embodiment of a display device for three-dimensional characters disclosed in the embodiments of the present application;
fig. 5 is a schematic structural diagram of a second embodiment of a display device for three-dimensional characters disclosed in the embodiments of the present application;
fig. 6 is a schematic structural diagram of a third embodiment of a display device for three-dimensional characters disclosed in the embodiments of the present application;
fig. 7 is a schematic structural diagram of a first embodiment of an electronic device disclosed in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a second embodiment of the electronic device disclosed in the embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses a display method and device of three-dimensional characters, electronic equipment and a storage medium, which are beneficial to improving the enthusiasm and interest of students in writing learning contents including words. The following detailed description is made with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for displaying three-dimensional characters according to a first embodiment of the present disclosure. The three-dimensional character display method described in fig. 1 is suitable for various electronic devices such as education devices (e.g., family education devices, classroom electronic devices), computers (e.g., student tablets, personal PCs), mobile phones, smart home devices (e.g., smart televisions, smart speakers, and smart robots), and the like, and the embodiment of the present application is not limited thereto. In the display method of the three-dimensional character described in fig. 1, the display method of the three-dimensional character is described with an electronic apparatus as an execution subject. As shown in fig. 1, the method for displaying three-dimensional characters may include the steps of:
101. the electronic device obtains the selected learning content.
For example, the electronic device may capture the learning content selected by the user (e.g., clicked by the user) in the learning module through a camera device (e.g., a camera).
For example, the learning module may be a certain learning page (e.g., a paper learning page or an electronic learning page) corresponding to the user, or may be a certain learning section included in a certain learning page corresponding to the user.
In some examples, the electronic device may locate the learning module selected by the user's finger, stylus, or voice and treat it as the learning module corresponding to the user. For example, the electronic device may use a camera device (e.g., a camera) to capture the learning module selected by the user's finger or stylus; alternatively, it may use a sound pickup device (e.g., a microphone) to pick up the learning module selected by the user's voice. In some embodiments, the camera device may be disposed on a ring worn on the user's finger: when the ring detects that the finger is straightened, it starts the camera device to shoot the learning module selected by that finger and transmits the result to the electronic device, which then determines the learning module corresponding to the user. This implementation reduces the power the electronic device itself spends shooting the selected learning module, which can improve its battery endurance.
In other examples, the electronic device may obtain the learning module selected for the user by another external device and treat it as the learning module corresponding to the user. For example, the electronic device may establish a communication connection in advance with a wrist-worn device worn by the user's supervisor (such as a classroom teacher or a parent). The supervisor holds a finger of the hand wearing the device against the base of an ear so that the ear forms a closed sound cavity, then utters, at a volume below a certain threshold, a voice signal for selecting a learning module for the user; the signal travels as vibration through the bone of the palm into the wrist-worn device, which forwards it to the electronic device. This implementation lets a supervisor flexibly select a learning module for the user without creating sound interference for people nearby.
In some embodiments, when the external device is a wrist-worn device worn by a classroom teacher, it may simultaneously establish communication connections with the electronic devices used by multiple users (i.e., students) in a classroom. Accordingly, the low-volume voice signal sent by the supervisor for selecting a learning module may include an identifier of the selected learning module (e.g., a chapter number) and an identifier of the user (e.g., a name and/or seat number). The wrist-worn device then transmits the voice signal to the electronic device used by that user according to the user identifier, so that the device can determine the corresponding learning module from the module identifier in the signal. This implementation lets a classroom teacher select different learning modules for multiple users according to their different learning progress, improving the flexibility and convenience of per-student selection in a classroom (such as a training classroom).
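The per-student routing just described can be sketched as a simple registry keyed by the user identifier. The message fields (`student_id`, `module_id`), class name, and registry structure are hypothetical; the patent only specifies that the signal carries a module identifier and a user identifier.

```python
# Hypothetical sketch of how a wrist-worn device might route a module
# selection to the right student's electronic device, keyed by the
# identifier (name and/or seat number) carried in the teacher's signal.
class Classroom:
    def __init__(self):
        self.devices = {}  # student_id -> currently selected module id

    def register(self, student_id: str):
        """A student's electronic device joins the classroom connection."""
        self.devices[student_id] = None

    def route_selection(self, student_id: str, module_id: str) -> bool:
        """Deliver the module selection to exactly one student's device."""
        if student_id not in self.devices:
            return False  # unknown user identifier: nothing to route to
        self.devices[student_id] = module_id
        return True

room = Classroom()
room.register("seat-12")
assert room.route_selection("seat-12", "chapter-3")
assert room.devices["seat-12"] == "chapter-3"
assert not room.route_selection("seat-99", "chapter-3")  # unregistered student
```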
For example, the learning content selected by the user (e.g., clicked by the user) may include foreign language words (e.g., english words), musical notation strings (e.g., musical notation strings represented by numerals 1-7), chinese characters, and so on, and the embodiments of the present application are not limited thereto.
102. The electronic device recognizes each two-dimensional character included in the learning content.
Illustratively, if the learning content is the English word "ruler", the electronic device identifies five two-dimensional characters, r, u, l, e, and r, contained in the English word "ruler".
103. And the electronic equipment acquires the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters.
In some examples, the electronic device may convert each of the respective two-dimensional characters into a three-dimensional character based on a 3D synthesis technique of 3D structure reconstruction, so that a three-dimensional character corresponding to each of the respective two-dimensional characters may be obtained.
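The 2D-to-3D step can be illustrated, very loosely, by extruding a glyph outline. Real 3D structure reconstruction is far more involved than this; the sketch only shows the idea of lifting a 2D shape into 3D geometry, and assumes the glyph is already available as a polygon of (x, y) points.

```python
# A minimal, illustrative sketch of turning a 2D character outline into
# 3D geometry by extrusion. Not the patent's 3D synthesis technique.
def extrude(outline_2d, depth=1.0):
    """Lift a 2D outline into two parallel 3D faces
    (front face at z=0, back face at z=depth)."""
    front = [(x, y, 0.0) for x, y in outline_2d]
    back = [(x, y, depth) for x, y in outline_2d]
    return front + back

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
mesh = extrude(square, depth=0.2)
assert len(mesh) == 8            # 4 front + 4 back vertices
assert mesh[4] == (0, 0, 0.2)    # first back-face vertex
```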
104. And after detecting that any target two-dimensional character in the two-dimensional characters is written, the electronic equipment displays a three-dimensional character corresponding to the target two-dimensional character.
In some examples, after detecting that any target two-dimensional character is written, the electronic device may display the three-dimensional character corresponding to it on the device's display screen. For example, after detecting that the target two-dimensional character r in the English word "ruler" is written, the electronic device may display the corresponding three-dimensional character on its display screen. Once the device detects that all five target two-dimensional characters r, u, l, e, and r in the English word "ruler" have been written, it can display their three-dimensional counterparts on its display screen, so the user sees a three-dimensional English word "ruler" composed of those characters.
In other examples, after detecting that any target two-dimensional character is written, the electronic device may first recognize the writing position where it was written, then display the corresponding three-dimensional character at that position in an augmented reality manner. For example, after detecting that the target two-dimensional character r in the English word "ruler" is written, the electronic device may recognize its writing position; if that position lies on a paper page, the device may display the corresponding three-dimensional character at that position on the page in an augmented reality manner. When the three-dimensional characters corresponding to all five target two-dimensional characters r, u, l, e, and r are displayed at their writing positions on the page in this way, the user sees a three-dimensional English word "ruler" on the paper page.
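Anchoring the 3D character at the recognized writing position can be sketched like this: the detected bounding box of the handwritten glyph on the page becomes the AR placement for its 3D counterpart. The coordinate convention, placement fields, and asset naming are all assumptions made for illustration.

```python
# Hypothetical sketch: place the 3D character over the recognized writing
# position, using the glyph's 2D bounding box (x0, y0, x1, y1) on the page.
def place_at_writing_position(char: str, bbox: tuple) -> dict:
    """Return an AR placement centered on the written glyph."""
    x0, y0, x1, y1 = bbox
    return {
        "model": f"model_{char}.glb",               # illustrative asset name
        "anchor": ((x0 + x1) / 2, (y0 + y1) / 2),   # center of the glyph
        "scale": max(x1 - x0, y1 - y0),             # sized to the handwriting
    }

placement = place_at_writing_position("r", (10, 20, 18, 32))
assert placement["anchor"] == (14.0, 26.0)
assert placement["scale"] == 12
```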
Therefore, the implementation of the three-dimensional character display method described in fig. 1 can improve the man-machine interaction of the student when writing words, thereby being beneficial to improving the enthusiasm and interest of the student when writing learning contents including words.
In addition, by implementing the method for displaying three-dimensional characters described in fig. 1, power consumption caused by the fact that the electronic device shoots the learning module selected by the user's finger can be reduced, and thus battery endurance of the electronic device can be improved.
In addition, by implementing the method for displaying three-dimensional characters as described in fig. 1, a supervisor (such as a teacher or a parent) of a user can flexibly select a learning module for the user, and does not cause sound interference to surrounding people in the process of selecting the learning module for the user.
In addition, by implementing the three-dimensional character display method described in fig. 1, different learning modules can be respectively selected for a plurality of users in a classroom (such as a training classroom) according to respective different learning progresses of the plurality of users in the classroom, so that flexibility and convenience in selecting different learning modules for the plurality of users in the classroom can be improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for displaying three-dimensional characters according to a second embodiment of the present disclosure. In the display method of the three-dimensional character described in fig. 2, the display method of the three-dimensional character is described with an electronic apparatus as an execution subject. As shown in fig. 2, the method for displaying three-dimensional characters may include the steps of:
201. the electronic device obtains the selected learning content.
The implementation manner of step 201 may be the same as that of step 101, and is not described herein again in this embodiment of the application.
202. The electronic device recognizes each two-dimensional character included in the learning content.
203. And the electronic equipment acquires the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters.
204. The electronic device recognizes a writing position where any one of the target two-dimensional characters is written after detecting that the target two-dimensional character is written.
For example, the written position where the target two-dimensional character is written may be a written position on a display screen of the electronic device; alternatively, the writing position where the target two-dimensional character is written may be a written position on a paper page.
205. And the electronic equipment displays the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality mode.
206. The electronic device judges whether the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in an augmented reality mode, if so, step 207 is executed; if not, return to step 204.
207. The electronic equipment detects whether the learning content is associated with an object to be unlocked, and if the learning content is associated with the object to be unlocked, the steps 208-210 are executed; if the learning content is not associated with the object to be unlocked, the process is ended.
For example, the object to be unlocked may be an APP to be unlocked, an electronic screen to be unlocked, an intelligent door lock to be unlocked, and the like, which is not limited in the embodiment of the present application.
208. The electronic equipment acquires the appointed display gesture of the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured on the object to be unlocked.
For example, the designated display pose of the three-dimensional character may include a designated display direction of the three-dimensional character, and a designated display inclination in the designated display direction.
209. And the electronic equipment determines the instant display posture when the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters is displayed in an augmented reality mode.
For example, the instantaneous display posture of the three-dimensional character when displayed in the augmented reality manner may include an instantaneous display direction of the three-dimensional character when displayed in the augmented reality manner, and an instantaneous display inclination in the instantaneous display direction.
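A display pose, as described above, consists of a display direction and a display inclination in that direction. A minimal sketch of comparing an instant pose against a specified pose might look like the following; the angle representation and the one-degree tolerance are assumptions, not specified by this embodiment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayPose:
    direction: float    # display direction, e.g. a yaw angle in degrees (assumed)
    inclination: float  # display inclination in that direction, degrees (assumed)

def poses_match(instant: DisplayPose, specified: DisplayPose,
                tolerance: float = 1.0) -> bool:
    # Poses count as "the same" when both components agree within tolerance.
    return (abs(instant.direction - specified.direction) <= tolerance
            and abs(instant.inclination - specified.inclination) <= tolerance)

print(poses_match(DisplayPose(0.0, 15.0), DisplayPose(0.5, 14.8)))   # → True
print(poses_match(DisplayPose(90.0, 15.0), DisplayPose(0.0, 15.0)))  # → False
```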
210. The electronic equipment checks whether the instant display posture of the three-dimensional character corresponding to any two-dimensional character when the three-dimensional character is displayed in an augmented reality mode is different from the appointed display posture of the three-dimensional character corresponding to the two-dimensional character configured on the object to be unlocked, and if the instant display posture is different from the appointed display posture, the steps 211 to 212 are executed; if not, go to step 213.
For example, if it is verified that the instant display direction of the three-dimensional character corresponding to any one two-dimensional character displayed in the augmented reality manner is different from the specified display direction of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, the electronic device executes step 211; and/or if it is verified that the instant display gradient of the three-dimensional character corresponding to any one two-dimensional character displayed in the augmented reality manner is different from the specified display gradient of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, the electronic device executes step 211.
Illustratively, if the electronic device verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display direction when displayed in the augmented reality manner differs from the specified display direction configured for the object to be unlocked, and also verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display inclination when displayed in the augmented reality manner differs from the specified display inclination configured for the object to be unlocked, then step 213 is executed.
211. If the electronic device verifies that an instant display posture of the three-dimensional character corresponding to at least one two-dimensional character when displayed in the augmented reality manner is different from the specified display posture of the three-dimensional character corresponding to that two-dimensional character configured for the object to be unlocked, the electronic device prompts the user to adjust, by means of a gesture, the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character when displayed in the augmented reality manner.
Illustratively, the electronic device may issue this prompt by voice; accordingly, under the prompt, the user may control the adjustment of the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character by gesture (for example, adjusting the instant display direction and/or the instant display inclination).
212. The electronic device detects whether the gesture-controlled adjustment of the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character when displayed in the augmented reality manner has been completed; if so, step 210 is executed; if not, the flow is ended.
213. And the electronic equipment unlocks the object to be unlocked.
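Steps 206 to 213 above can be sketched as a single verification routine: every character's instant pose is compared against the specified pose configured on the object to be unlocked, and the object unlocks only when every pose matches. The data structures below are illustrative assumptions, not the embodiment's actual representation.

```python
def verify_and_unlock(instant_poses, specified_poses):
    # Each pose is a (direction, inclination) tuple; names are illustrative.
    mismatched = [c for c in specified_poses
                  if instant_poses.get(c) != specified_poses[c]]
    if mismatched:
        # Step 211: prompt the user to adjust these characters by gesture.
        return ("adjust", mismatched)
    # Step 213: every pose matches the configured pose, so unlock the object.
    return ("unlock", [])

specified = {"r": (0, 15), "u": (0, 15), "l": (0, 15), "e": (0, 15)}
instant_ok = dict(specified)
instant_bad = {**specified, "e": (90, 15)}
print(verify_and_unlock(instant_ok, specified))   # → ('unlock', [])
print(verify_and_unlock(instant_bad, specified))  # → ('adjust', ['e'])
```

The second call models step 212: after the user adjusts the flagged character "e" by gesture, the verification is run again.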
Therefore, the implementation of the three-dimensional character display method described in fig. 2 can improve the man-machine interaction of the student when writing words, thereby being beneficial to improving the enthusiasm and interest of the student when writing learning contents including words.
In addition, the implementation of the method for displaying three-dimensional characters described in fig. 2 can reduce power consumption caused by the fact that the electronic device shoots the learning module selected by the user's finger, so that the battery endurance of the electronic device can be improved.
In addition, by implementing the method for displaying three-dimensional characters as described in fig. 2, a supervisor (such as a teacher or a parent) of a user can flexibly select a learning module for the user, and does not cause sound interference to surrounding people in the process of selecting the learning module for the user.
In addition, by implementing the three-dimensional character display method described in fig. 2, different learning modules can be respectively selected for a plurality of users in a classroom (such as a training classroom) according to respective different learning progresses of the plurality of users in the classroom, so that flexibility and convenience in selecting different learning modules for the plurality of users in the classroom can be improved.
In addition, with the implementation of the three-dimensional character display method described in fig. 2, a user can display three-dimensional characters corresponding to two-dimensional characters by writing the two-dimensional characters of learning content (such as english words), and can unlock the object to be unlocked associated with the learning content only by accurately adjusting the instant display posture of each three-dimensional character displayed in the augmented reality manner until it is the same as the specified display posture of the corresponding three-dimensional character configured for the object to be unlocked. This improves the security of unlocking the object to be unlocked associated with the learning content while also improving the enthusiasm and interest of students when writing learning content including words.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for displaying three-dimensional characters according to a third embodiment of the present disclosure. In the display method of the three-dimensional character described in fig. 3, the display method of the three-dimensional character is described with an electronic apparatus as an execution subject. As shown in fig. 3, the method for displaying three-dimensional characters may include the steps of:
301. the electronic device obtains the selected learning content.
302. The electronic device recognizes each two-dimensional character included in the learning content.
303. And the electronic equipment acquires the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters.
304. The electronic device recognizes a writing position where any one of the target two-dimensional characters is written after detecting that the target two-dimensional character is written.
305. And the electronic equipment displays the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality mode.
306. The electronic device determines whether all three-dimensional characters corresponding to the two-dimensional characters are displayed in an augmented reality manner, if yes, step 307 is executed; if not, return to step 304.
307. The electronic equipment detects whether the learning content is associated with an object to be unlocked, and if the learning content is associated with the object to be unlocked, the steps 308-310 are executed; if the learning content is not associated with the object to be unlocked, the process is ended.
308. The electronic equipment acquires the appointed display gesture of the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured on the object to be unlocked.
309. And the electronic equipment determines the instant display posture when the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters is displayed in an augmented reality mode.
310. The electronic equipment checks whether an instant display gesture of a three-dimensional character corresponding to any two-dimensional character when the three-dimensional character is displayed in an augmented reality mode is different from an appointed display gesture of the three-dimensional character corresponding to the two-dimensional character configured on the object to be unlocked, and if the instant display gesture is different from the appointed display gesture, the steps 311 to 312 are executed; if not, go to step 313-316.
311. If the electronic device verifies that an instant display posture of the three-dimensional character corresponding to at least one two-dimensional character when displayed in the augmented reality manner is different from the specified display posture of the three-dimensional character corresponding to that two-dimensional character configured for the object to be unlocked, the electronic device prompts the user to adjust, by means of a gesture, the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character when displayed in the augmented reality manner.
312. The electronic device detects whether the gesture-controlled adjustment of the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character when displayed in the augmented reality manner has been completed; if so, step 310 is executed; if not, the flow is ended.
313. The electronic equipment outputs a spoken language evaluation task which requires reading the learning content in a spoken language manner.
For example, in step 310, if the electronic device verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display posture when displayed in the augmented reality manner differs from the specified display posture configured for the object to be unlocked, the electronic device may first obtain the learning target corresponding to the learning content, and find the target skill mapped to that learning target from a mapping relationship between learning targets and skills. If the learning target corresponding to the learning content is "reading aloud" and the target skill mapped to it is the spoken language assessment skill, the electronic device performs steps 313 to 316. For example, the electronic device may look up the learning target "reading aloud" in a preset mapping relationship between chapter numbers of learning contents and learning targets, according to the identifier (such as the chapter number) of the learning content.
314. The electronic device collects the reading pronunciation produced when the learning content is read aloud.
315. The electronic equipment compares the reading pronunciation with the standard pronunciation of the learning content, so as to obtain the skill level of the spoken language assessment.
For example, when the electronic device compares the reading pronunciation with the standard pronunciation of the learning content, if the user is found not to have read the learning content aloud, the electronic device may determine that the user's spoken language assessment skill is at a "poor" level; if the user has read the learning content but the reading is not fluent compared with the standard pronunciation of the learning content, the electronic device may determine that the user's spoken language assessment skill is at a "middle" level; and if the user has read the learning content as fluently as the standard pronunciation and with comparable emotional expression, the electronic device may determine that the user's spoken language assessment skill is at an "excellent" level.
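The three outcomes above can be sketched as a simple grading function. How pronunciation is actually compared is not specified by this embodiment, so the function takes the comparison results as boolean inputs; the handling of a fluent but unexpressive reading (here graded "middle") is an assumption.

```python
def grade_spoken_reading(read_aloud: bool, fluent: bool, expressive: bool) -> str:
    # Maps the comparison outcomes described in the text to a skill level.
    if not read_aloud:
        return "poor"        # the user did not read the content at all
    if not fluent:
        return "middle"      # read, but not as smooth as the standard pronunciation
    if expressive:
        return "excellent"   # fluent and as emotional as the standard pronunciation
    return "middle"          # fluent but unexpressive: graded "middle" (assumption)

print(grade_spoken_reading(True, True, True))  # → excellent
```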
In some embodiments, after obtaining the spoken language evaluation skill of the user, the electronic device may further perform the following steps:
the electronic equipment outputs the spoken language evaluation skill of the user to the user in a text and/or voice mode.
Further, the electronic device may detect whether the instant battery power of the electronic device is higher than a first specified power value, and if so, the electronic device may transmit the spoken language assessment skill of the user to a supervisor of the user (e.g., a classroom teacher or a parent). Wherein the first specified electric quantity value may be obtained by:
the electronic device determines an electric quantity value increment corresponding to the total number of times the user has executed the spoken language evaluation task in a specified historical period (such as the previous week), subtracts this increment from a preset electric quantity value of the electronic device, and takes the result as the first specified electric quantity value; the increment is in direct proportion to that total number of times. Therefore, the more spoken language evaluation tasks the user executes in the specified historical period (such as the previous week), the more likely the electronic device is to judge that the instant battery power is higher than the first specified electric quantity value, and thus the more likely it is to transmit the user's spoken language assessment skill to the user's supervisor (such as a classroom teacher or a parent), so that the supervisor can learn the user's spoken language assessment skill with a high probability.
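The threshold computation above reduces to "preset value minus an increment proportional to the task count". The preset value and the per-task increment below are illustrative assumptions; the embodiment does not give concrete numbers.

```python
PRESET_POWER_VALUE = 80.0   # percent; assumed preset electric quantity value
INCREMENT_PER_TASK = 2.0    # percent per completed task; assumed proportionality

def first_specified_power_value(total_tasks: int) -> float:
    # Increment is in direct proportion to the total task count.
    increment = INCREMENT_PER_TASK * total_tasks
    return max(PRESET_POWER_VALUE - increment, 0.0)

def should_send_report(instant_battery: float, total_tasks: int) -> bool:
    # Report is sent when instant battery power exceeds the first specified value.
    return instant_battery > first_specified_power_value(total_tasks)

print(first_specified_power_value(10))  # → 60.0 (80 - 2*10)
print(should_send_report(65.0, 10))     # → True  (65 > 60)
print(should_send_report(65.0, 2))      # → False (65 <= 76)
```

The more tasks completed, the lower the threshold, and hence the more likely the report reaches the supervisor, which is the incentive described in the text.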
316. The electronic device judges whether the spoken language assessment skill level is higher than a specified level; if so, step 317 is executed; otherwise, the flow is ended.
317. And the electronic equipment unlocks the object to be unlocked.
In some application scenarios, the electronic device may be located in a certain indoor environment, and the to-be-unlocked smart door lock set in the indoor environment may be used as the to-be-unlocked object. In this application scenario, the method for the electronic device to unlock the object to be unlocked in step 317 may be as follows:
the electronic equipment determines current spatial position information of a user using the electronic equipment based on an indoor image shot by an internal camera of the intelligent door lock to be unlocked;
the electronic device can check whether the current spatial position information of the user using the electronic device matches the three-dimensional position information, relative to the internal camera of the intelligent door lock to be unlocked, that a supervisor (such as a parent) of the user (who belongs to the monitored objects) has specially configured for that user, and if the two match, control the intelligent door lock to be unlocked to unlock. When the user is located at the configured three-dimensional position relative to the internal camera of the intelligent door lock to be unlocked, the user can be directly observed by the supervisor in the indoor environment. Therefore, the user of the electronic device is required to be at a certain spatial position, specially configured by the supervisor and visible to the supervisor, before the electronic device is allowed to control the intelligent door lock to be unlocked. This lets the supervisor intuitively know which monitored object unlocked the intelligent door lock, improves the visibility of the user of the electronic device when the intelligent door lock is unlocked, and prevents accidents (such as children being abducted) caused by the user of the electronic device secretly unlocking the intelligent door lock without the supervisor's awareness.
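The position check described above can be sketched as a distance comparison between the user's estimated position and the configured position, both expressed relative to the lock's internal camera. The coordinate representation and the 0.3 m tolerance are assumptions for illustration.

```python
def position_matches(current_xyz, configured_xyz, tolerance=0.3):
    # Euclidean distance between the user's current position (estimated from
    # the lock's internal camera image) and the supervisor-configured position.
    dist = sum((a - b) ** 2 for a, b in zip(current_xyz, configured_xyz)) ** 0.5
    return dist <= tolerance

def try_unlock(current_xyz, configured_xyz):
    # Unlock only when the user stands at the configured, supervisor-visible spot.
    return "unlock" if position_matches(current_xyz, configured_xyz) else "deny"

print(try_unlock((1.0, 0.0, 2.0), (1.1, 0.0, 2.0)))  # → unlock (within 0.3 m)
print(try_unlock((5.0, 0.0, 2.0), (1.1, 0.0, 2.0)))  # → deny
```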
In addition, the display method of the three-dimensional character depicted in fig. 3 has the following beneficial effects:
it can urge children indoors to carry out spoken language assessment, thereby improving their spoken language assessment level.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a display device for three-dimensional characters according to a first embodiment of the present disclosure. The display device of the three-dimensional character may include:
a first obtaining unit 401, configured to obtain the selected learning content;
a recognition unit 402 configured to recognize each two-dimensional character included in the learning content;
a second obtaining unit 403, configured to obtain a three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters;
a display unit 404, configured to display a three-dimensional character corresponding to any target two-dimensional character in the two-dimensional characters after detecting that the target two-dimensional character is written.
For example, the electronic device may capture the learning content selected by the user (e.g., clicked by the user) in the learning module through a camera device (e.g., a camera).
For example, the learning module may be a certain learning page (e.g., a paper learning page or an electronic learning page) corresponding to the user, or may be a certain learning section included in a certain learning page corresponding to the user.
In some examples, the display apparatus described in fig. 4 may be applied to an electronic device, and the electronic device may locate a learning module selected by a finger, a pen, or a voice of a user, and use the learning module selected by the finger, the pen, or the voice of the user as the learning module corresponding to the user. For example, the electronic device may use a camera device (e.g., a camera) to capture a learning module selected by a finger or a writing pen of a user as the learning module corresponding to the user; alternatively, the electronic device may use a sound pickup device (e.g., a microphone) to pick up a learning module selected by the voice uttered by the user as the learning module corresponding to the user. In some embodiments, the camera device (e.g., a camera) may be disposed on a ring worn by a finger of a user; when the ring detects that the finger wearing it is straightened, the ring may start the camera device (e.g., the camera) to shoot the learning module selected by the finger of the user, and the ring transmits the shot learning module selected by the finger of the user to the electronic device, so that the electronic device may determine the learning module corresponding to the user. By implementing this implementation mode, power consumption caused by the electronic device itself shooting the learning module selected by the user's finger can be reduced, and thus battery endurance of the electronic device can be improved.
In other examples, the electronic device may obtain the learning module selected by the other external device for the user, and use the learning module selected by the other external device for the user as the learning module corresponding to the user. For example, the electronic device may establish a communication connection with a wrist wearable device worn by a supervisor (such as a classroom teacher or a parent) of a user in advance, the supervisor holds a certain finger of a palm where a wrist wearing the wrist wearable device is located against a root of an ear to make the ear form a closed sound cavity, and the supervisor may send out a voice signal with a volume lower than a certain threshold value for selecting a learning module for the user; the voice signal is transmitted into the wrist type wearing equipment as a vibration signal through a bone medium of a palm, and the voice signal is transmitted to the electronic equipment through the wrist type wearing equipment. By implementing the implementation mode, a supervisor (such as a classroom teacher or a parent) of the user can flexibly select the learning module for the user, and sound interference to surrounding people is avoided in the process of selecting the learning module for the user.
In some embodiments, when the external device may be a wrist-worn device worn by a classroom teacher, the wrist-worn device may establish a communication connection with electronic devices used by multiple users (i.e., students) in a classroom simultaneously, and accordingly, the voice signal sent by the supervisor and used for selecting a learning module for the user, the voice signal having a volume below a certain threshold, may include an identifier (e.g., a chapter number) of the selected learning module and an identifier (e.g., a name and/or a seat number) of the user; further, the wrist-worn device may transmit the voice signal to the electronic device used by the user according to an identification (such as a name and/or a seat number) of the user, so that the electronic device used by the user may determine the learning module corresponding to the user according to an identification (such as a chapter number) of the selected learning module included in the voice signal. By implementing the implementation mode, a classroom teacher can respectively select different learning modules for a plurality of users in a classroom according to different respective learning progresses of the users in the classroom (such as a training classroom), so that the flexibility and convenience of respectively selecting different learning modules for the users in the classroom can be improved.
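The classroom routing described above (a low-volume voice command carrying a chapter number and a student identifier, forwarded only to that student's device) can be sketched as a simple lookup-and-forward step. All structures and names here are illustrative assumptions.

```python
def route_selection(command, devices):
    # command: {"chapter": <chapter number>, "student": <name or seat number>}
    # devices: maps a student identifier to that student's device inbox (a list)
    inbox = devices.get(command["student"])
    if inbox is None:
        return False  # no device registered for this student identifier
    inbox.append(command["chapter"])  # forward the selected learning module
    return True

devices = {"seat-12": [], "seat-07": []}
route_selection({"chapter": "3.2", "student": "seat-12"}, devices)
print(devices["seat-12"])  # → ['3.2'] (only the addressed student receives it)
print(devices["seat-07"])  # → []
```

Each student's device then determines its corresponding learning module from the chapter number it received, as described in the text.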
Therefore, the display device for the three-dimensional characters depicted in fig. 4 can improve the man-machine interaction of the students when writing words, thereby being beneficial to improving the enthusiasm and interest of the students when writing words.
In addition, the display device for displaying three-dimensional characters described in fig. 4 can reduce power consumption caused by the electronic device shooting the learning module selected by the user's finger, so that the battery endurance of the electronic device can be improved.
In addition, by implementing the display device of the three-dimensional character described in fig. 4, a supervisor (such as a teacher or a parent) of a user can flexibly select a learning module for the user, and does not cause sound interference to surrounding people in the process of selecting the learning module for the user.
In addition, the display device for displaying three-dimensional characters depicted in fig. 4 can select different learning modules for a plurality of users in a classroom (e.g., a training classroom) according to their respective different learning progresses, thereby improving flexibility and convenience when selecting different learning modules for a plurality of users in a classroom.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a display device for three-dimensional characters according to a second embodiment of the present disclosure. The display device for the three-dimensional character shown in fig. 5 is optimized from the display device for the three-dimensional character shown in fig. 4. In the display device of the three-dimensional character shown in fig. 5, the display unit 404 includes:
an identifying subunit 4041 configured to identify a writing position where any one of the target two-dimensional characters is written, after detecting that the target two-dimensional character is written;
the display sub-unit 4042 is configured to display, in an augmented reality manner, the three-dimensional character corresponding to the target two-dimensional character at the writing position.
Optionally, the display device further comprises:
a first determining unit 405, configured to determine, after the display subunit 4042 displays the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality manner, whether all three-dimensional characters corresponding to the two-dimensional characters have been displayed in the augmented reality manner;
a detecting unit 406, configured to detect whether the learning content is associated with an object to be unlocked when the first determining unit 405 determines that all three-dimensional characters corresponding to the two-dimensional characters are displayed in an augmented reality manner;
a third obtaining unit 407, configured to, when the detecting unit 406 detects that the learning content is associated with an object to be unlocked, obtain a specified display posture of a three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured for the object to be unlocked;
a determining unit 408, configured to determine an instant display posture of the three-dimensional character corresponding to each of the two-dimensional characters when displayed in an augmented reality manner;
a verification unit 409, configured to verify whether there is any two-dimensional character whose corresponding three-dimensional character, when displayed in the augmented reality manner, has an instant display posture different from the specified display posture of the three-dimensional character corresponding to that two-dimensional character configured for the object to be unlocked;
an unlocking unit 410, configured to unlock the object to be unlocked when the verification unit 409 verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display posture, when displayed in the augmented reality manner, differs from the specified display posture configured for the object to be unlocked.
Optionally, the display device further includes:
a control unit 411, configured to: when the verification unit 409 verifies that the instant display posture of the three-dimensional character corresponding to at least one two-dimensional character, when displayed in the augmented reality manner, differs from the specified display posture configured for the object to be unlocked, prompt the user to adjust, by gesture, the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character; and, after detecting that the gesture-controlled adjustment has been completed, trigger the verification unit 409 to verify again whether there is any two-dimensional character whose corresponding three-dimensional character has an instant display posture different from the specified display posture configured for the object to be unlocked.
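The verification-and-unlock flow performed by the units above — comparing each three-dimensional character's instant display posture against the specified posture configured for the object to be unlocked, prompting gesture adjustment on a mismatch, and unlocking only when every posture matches — can be sketched as follows (hypothetical function names and a simplified Euler-angle pose encoding, assumed purely for illustration):

```python
# Sketch of the pose-verification unlock check (hypothetical names and pose encoding).

def find_mismatched(instant_poses, specified_poses):
    """Return the characters whose instant AR display posture differs from the
    specified posture configured for the object to be unlocked."""
    return [c for c, pose in instant_poses.items() if specified_poses.get(c) != pose]

def try_unlock(instant_poses, specified_poses):
    mismatched = find_mismatched(instant_poses, specified_poses)
    if mismatched:
        # Prompt the user to adjust these characters' postures by gesture, then re-verify.
        return ("adjust", mismatched)
    return ("unlock", [])

specified = {"c": (0, 0, 0), "a": (0, 90, 0)}     # configured display postures
instant = {"c": (0, 0, 0), "a": (0, 45, 0)}       # postures as currently displayed
state, chars = try_unlock(instant, specified)     # "a" mismatches -> prompt adjustment
instant["a"] = (0, 90, 0)                         # user rotates the model by gesture
state2, _ = try_unlock(instant, specified)        # all postures match -> unlock
```

The re-verification after gesture adjustment is simply another call to `try_unlock`, mirroring how the control unit 411 re-triggers the verification unit 409.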
Referring to fig. 6, fig. 6 is a schematic structural diagram of a display device for three-dimensional characters according to a third embodiment of the present disclosure. The display device for the three-dimensional character shown in fig. 6 is optimized from the display device for the three-dimensional character shown in fig. 5. In the display device of the three-dimensional character shown in fig. 6, the display device further includes:
an output unit 412, configured to output a spoken language evaluation task when the verification unit 409 verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display posture, when displayed in the augmented reality manner, differs from the specified display posture configured for the object to be unlocked, where the spoken language evaluation task requires the learning content to be read aloud;
a collecting unit 413, configured to collect the reading pronunciation produced when the learning content is read aloud;
a comparing unit 414, configured to compare the reading pronunciation with the standard pronunciation of the learning content to obtain a spoken language evaluation skill level;
a second determining unit 415, configured to determine whether the spoken language evaluation skill level is higher than a specified level and, if so, trigger the unlocking unit 410 to unlock the object to be unlocked.
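The spoken-language gate implemented by units 412–415 — collecting the reading pronunciation, comparing it with the standard pronunciation to obtain an evaluation skill level, and unlocking only above a specified level — might be sketched as below. The word-match scoring is a stand-in assumption; the patent does not specify how pronunciation is compared:

```python
# Sketch of the spoken-language evaluation gate (hypothetical word-match scoring).

def evaluate_reading(read_words, standard_words):
    """Compare the read-aloud words with the standard pronunciation word by word
    and return a match ratio in [0, 1] as a stand-in for the skill level."""
    if not standard_words:
        return 0.0
    matches = sum(1 for r, s in zip(read_words, standard_words) if r == s)
    return matches / len(standard_words)

def unlock_if_qualified(read_words, standard_words, specified_level=0.8):
    # Unlock only when the evaluation skill level exceeds the specified level.
    return evaluate_reading(read_words, standard_words) > specified_level

ok = unlock_if_qualified(["the", "cat", "sat"], ["the", "cat", "sat"])
not_ok = unlock_if_qualified(["the", "bat"], ["the", "cat", "sat"])
```

A production system would score acoustic features of the collected speech rather than word strings, but the gating structure — score, threshold, unlock — is the same.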
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to a first embodiment of the disclosure. As shown in fig. 7, the electronic device may include any one of the three-dimensional character display devices in the above embodiments.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to a second embodiment of the disclosure. As shown in fig. 8, the electronic device may include:
a memory 801 storing executable program code;
A processor 802 coupled with the memory;
the processor 802 calls the executable program code stored in the memory 801 to execute all or part of the steps of the three-dimensional character display method.
It should be noted that, in this embodiment of the application, the electronic device shown in fig. 8 may further include components that are not shown, such as a speaker module, a display screen, a light projection module, a battery module, a wireless communication module (such as a mobile communication module, a Wi-Fi module, a Bluetooth module, and the like), a sensor module (such as a proximity sensor and the like), an input module (such as a microphone and buttons), and a user interface module (such as a charging interface, an external power supply interface, a card slot, a wired headset interface, and the like).
An embodiment of the invention further discloses a computer-readable storage medium storing computer instructions which, when executed, cause a computer to perform all or part of the steps of the three-dimensional character display method.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, a magnetic disk, a magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
The method and apparatus for displaying three-dimensional characters, the electronic device, and the storage medium disclosed in the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.

Claims (13)

1. A method of displaying three-dimensional characters, the method comprising:
acquiring the selected learning content;
recognizing each two-dimensional character contained in the learning content;
acquiring a three-dimensional character corresponding to each two-dimensional character in each two-dimensional character;
and displaying a three-dimensional character corresponding to any target two-dimensional character in the two-dimensional characters after detecting that the target two-dimensional character is written.
2. The method according to claim 1, wherein the displaying a three-dimensional character corresponding to any target two-dimensional character in the two-dimensional characters after detecting that the target two-dimensional character is written comprises:
after any target two-dimensional character in the two-dimensional characters is detected to be written, identifying the writing position of the target two-dimensional character;
and displaying the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality mode.
3. The display method according to claim 2, wherein after the three-dimensional character corresponding to the target two-dimensional character is displayed at the writing position in the augmented reality manner, the method further comprises:
judging whether the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in the augmented reality mode or not, and if so, detecting whether the learning content is associated with an object to be unlocked or not;
if the learning content is associated with the object to be unlocked, acquiring the appointed display posture of the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured on the object to be unlocked;
determining an instant display gesture of the three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters when the three-dimensional character is displayed in the augmented reality mode;
and verifying whether there is any two-dimensional character whose corresponding three-dimensional character, when displayed in the augmented reality manner, has an instant display posture different from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, and if not, unlocking the object to be unlocked.
4. The display method according to claim 3, characterized in that the method further comprises:
if it is verified that the instant display posture of the three-dimensional character corresponding to at least one two-dimensional character, when displayed in the augmented reality manner, differs from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, prompting the user to adjust, by gesture, the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character, and after detecting that the gesture-controlled adjustment is completed, performing the step of verifying whether there is any two-dimensional character whose corresponding three-dimensional character, when displayed in the augmented reality manner, has an instant display posture different from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked.
5. The display method according to claim 3 or 4, wherein after it is verified that no two-dimensional character has a corresponding three-dimensional character whose instant display posture, when displayed in the augmented reality manner, differs from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, the method further comprises:
outputting a spoken language evaluation task, wherein the spoken language evaluation task requires that the learning content be read aloud in a spoken language mode;
collecting reading pronunciation when the learning content is read aloud in a spoken language mode;
comparing the reading pronunciation with the standard pronunciation of the learning content to obtain a spoken language evaluation skill level;
and judging whether the spoken language evaluation skill level is higher than a specified level, and if so, executing the step of unlocking the object to be unlocked.
6. A display device for three-dimensional characters, comprising:
a first acquisition unit configured to acquire the selected learning content;
a recognition unit configured to recognize each two-dimensional character included in the learning content;
the second acquisition unit is used for acquiring a three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters;
and the display unit is used for displaying the three-dimensional character corresponding to any target two-dimensional character in the two-dimensional characters after detecting that the target two-dimensional character is written.
7. The display device according to claim 6, wherein the display unit comprises:
the recognition subunit is used for recognizing the writing position of any target two-dimensional character in the two-dimensional characters after the writing of the target two-dimensional character is detected;
and the display subunit is used for displaying the three-dimensional character corresponding to the target two-dimensional character at the writing position in an augmented reality mode.
8. The display device according to claim 7, further comprising:
the first judging unit is used for judging whether the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in the augmented reality mode or not after the display subunit displays the three-dimensional characters corresponding to the target two-dimensional characters at the writing position in the augmented reality mode;
the detection unit is used for detecting whether the learning content is associated with an object to be unlocked when the first judgment unit judges that the three-dimensional characters corresponding to the two-dimensional characters are completely displayed in the augmented reality mode;
a third obtaining unit, configured to, when the detecting unit detects that the learning content is associated with an object to be unlocked, obtain a specified display posture of a three-dimensional character corresponding to each two-dimensional character in the two-dimensional characters configured for the object to be unlocked;
the determining unit is used for determining the instant display posture when the three-dimensional character corresponding to each two-dimensional character in each two-dimensional character is displayed in the augmented reality mode;
a verification unit, configured to verify whether there is any two-dimensional character whose corresponding three-dimensional character, when displayed in the augmented reality manner, has an instant display posture different from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked;
and an unlocking unit, configured to unlock the object to be unlocked when the verification unit verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display posture, when displayed in the augmented reality manner, differs from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked.
9. The display device according to claim 8, further comprising:
a control unit, configured to: when the verification unit verifies that the instant display posture of the three-dimensional character corresponding to at least one two-dimensional character, when displayed in the augmented reality manner, differs from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, prompt the user to adjust, by gesture, the instant display posture of the three-dimensional character corresponding to the at least one two-dimensional character; and, after detecting that the gesture-controlled adjustment is completed, trigger the verification unit to verify whether there is any two-dimensional character whose corresponding three-dimensional character, when displayed in the augmented reality manner, has an instant display posture different from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked.
10. The display device according to claim 8 or 9, wherein the display device further comprises:
an output unit, configured to output a spoken language evaluation task when the verification unit verifies that no two-dimensional character has a corresponding three-dimensional character whose instant display posture, when displayed in the augmented reality manner, differs from the specified display posture of the three-dimensional character corresponding to the two-dimensional character configured for the object to be unlocked, wherein the spoken language evaluation task requires the learning content to be read aloud;
a collecting unit, configured to collect the reading pronunciation produced when the learning content is read aloud;
a comparison unit, configured to compare the reading pronunciation with the standard pronunciation of the learning content to obtain a spoken language evaluation skill level;
and a second determining unit, configured to determine whether the spoken language evaluation skill level is higher than a specified level and, if so, trigger the unlocking unit to perform the unlocking of the object to be unlocked.
11. An electronic device comprising the three-dimensional character display device according to any one of claims 6 to 10.
12. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute all or part of the steps of the three-dimensional character display method of any one of claims 1 to 5.
13. A computer-readable storage medium having stored thereon computer instructions which, when executed, cause a computer to perform all or part of the steps of the method for displaying three-dimensional characters according to any one of claims 1 to 5.
CN202010411695.8A 2020-05-14 2020-05-14 Three-dimensional character display method and device, electronic equipment and storage medium Active CN111563514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010411695.8A CN111563514B (en) 2020-05-14 2020-05-14 Three-dimensional character display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111563514A true CN111563514A (en) 2020-08-21
CN111563514B CN111563514B (en) 2023-12-22

Family

ID=72071030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010411695.8A Active CN111563514B (en) 2020-05-14 2020-05-14 Three-dimensional character display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111563514B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100058843A (en) * 2008-11-25 2010-06-04 세종대학교산학협력단 Method and system for teaching hangeul using augmented reality
CN102662465A (en) * 2012-03-26 2012-09-12 北京国铁华晨通信信息技术有限公司 Method and system for inputting visual character based on dynamic track
CN103455264A (en) * 2012-06-01 2013-12-18 鸿富锦精密工业(深圳)有限公司 Handwritten Chinese character input method and electronic device with same
CN104156185A (en) * 2013-05-13 2014-11-19 中国移动通信集团公司 Three-dimensional font display method and device
CN105022487A (en) * 2015-07-20 2015-11-04 北京易讯理想科技有限公司 Reading method and apparatus based on augmented reality
CN105225549A (en) * 2015-10-13 2016-01-06 安阳师范学院 A kind of Language for English learning system
KR101592534B1 (en) * 2015-03-12 2016-02-05 김선심 Mat applied augmented reality for children learning
CN105518712A (en) * 2015-05-28 2016-04-20 北京旷视科技有限公司 Keyword notification method, equipment and computer program product based on character recognition
CN105761559A (en) * 2016-04-29 2016-07-13 东北电力大学 Reversely resonant foreign language learning method based on strongest first impressions
CN107871410A (en) * 2017-11-24 2018-04-03 南宁市远才教育咨询有限公司 A kind of Fast Learning Chinese character servicing unit
CN108877344A (en) * 2018-07-20 2018-11-23 荆明明 A kind of Multifunctional English learning system based on augmented reality
CN109215416A (en) * 2018-10-24 2019-01-15 天津工业大学 A kind of Chinese character assistant learning system and method based on augmented reality
CN109358766A (en) * 2012-09-26 2019-02-19 谷歌有限责任公司 The progress of handwriting input is shown
CN110162164A (en) * 2018-09-10 2019-08-23 腾讯数码(天津)有限公司 A kind of learning interaction method, apparatus and storage medium based on augmented reality
KR20190120847A (en) * 2018-04-16 2019-10-25 인영조 Ar-based writing practice method and program
CN111028566A (en) * 2019-12-12 2020-04-17 广州三人行壹佰教育科技有限公司 Live broadcast teaching method, device, terminal and storage medium
CN111028590A (en) * 2019-03-29 2020-04-17 广东小天才科技有限公司 Method for guiding user to write in dictation process and learning device

Similar Documents

Publication Publication Date Title
US10777193B2 (en) System and device for selecting speech recognition model
KR102717792B1 (en) Method for executing function and Electronic device using the same
CN111919248B (en) System for processing user utterance and control method thereof
KR102193029B1 (en) Display apparatus and method for performing videotelephony using the same
KR20200076169A (en) Electronic device for recommending a play content and operating method thereof
CN108877334B (en) Voice question searching method and electronic equipment
KR20200077775A (en) Electronic device and method for providing information thereof
KR20200090355A (en) Multi-Channel-Network broadcasting System with translating speech on moving picture and Method thererof
CN107798322A (en) A kind of smart pen
CN111179923A (en) Audio playing method based on wearable device and wearable device
KR101793607B1 (en) System, method and program for educating sign language
CN111176435A (en) User behavior-based man-machine interaction method and sound box
KR101567154B1 (en) Method for processing dialogue based on multiple user and apparatus for performing the same
CN111563514B (en) Three-dimensional character display method and device, electronic equipment and storage medium
KR20190101100A (en) Voice input processing method and electronic device supportingthe same
CN111081102B (en) Dictation result detection method and learning equipment
CN113409770A (en) Pronunciation feature processing method, pronunciation feature processing device, pronunciation feature processing server and pronunciation feature processing medium
CN111639227B (en) Spoken language control method of virtual character, electronic equipment and storage medium
CN111639567B (en) Interactive display method of three-dimensional model, electronic equipment and storage medium
CN111638781B (en) AR-based pronunciation guide method and device, electronic equipment and storage medium
KR20200067421A (en) Generation method of user prediction model recognizing user by learning data, electronic device applying the model, and metohd for applying the model
CN111639220A (en) Spoken language evaluation method and device, electronic equipment and storage medium
CN111639635B (en) Processing method and device for shooting pictures, electronic equipment and storage medium
CN108831230B (en) Learning interaction method capable of automatically tracking learning content and intelligent desk lamp
CN111077989B (en) Screen control method based on electronic equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant