
CN103914128B - Head-mounted electronic device and input method - Google Patents

Head-mounted electronic device and input method Download PDF

Info

Publication number
CN103914128B
Authority
CN
China
Prior art keywords
image
input
head
character
mounted electronic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210593624.XA
Other languages
Chinese (zh)
Other versions
CN103914128A (en)
Inventor
刘俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210593624.XA priority Critical patent/CN103914128B/en
Publication of CN103914128A publication Critical patent/CN103914128A/en
Application granted granted Critical
Publication of CN103914128B publication Critical patent/CN103914128B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide a head-mounted electronic device and an input method. The head-mounted electronic device according to an embodiment of the invention includes: a fixing unit, by which the head-mounted electronic device can be worn on the head of a user; an image processing unit, configured to obtain a first image to be displayed; an image transmission unit, configured to transmit the first image to a display unit; the display unit, configured to display the first image obtained by the image processing unit; and an input unit, arranged on the fixing unit and/or the display unit and configured to detect a first position of an operating body relative to the head-mounted electronic device. The image processing unit is further configured to generate a second image according to the first position, and to generate a third image according to the second image and the first image; and the display unit is further configured to display the third image.

Description

Head-mounted electronic device and input method
Technical Field
Embodiments of the present invention relate to a head-mounted electronic device and an input method applied to the head-mounted electronic device.
Background
With the development of communication technology, portable electronic devices such as tablet computers, smart phones, game machines, and portable multimedia players are widely used. However, current portable electronic devices often require the user to hold the device in one hand and maintain a particular posture in order to operate it or view the content it displays. This makes it difficult for the user to perform other tasks while operating the device, and the user's hands, shoulders, and neck tire easily after a period of operation.
To change the user's operating posture and provide a better use experience, head-mounted electronic devices with communication, image display, audio playback, and similar functions have been proposed. However, since the user cannot see a head-mounted electronic device while wearing it, performing complicated input operations on the device is inconvenient, and most head-mounted electronic devices therefore provide only simple control keys. Although a head-mounted electronic device can be connected to an external input device such as a mouse or keyboard to enable complicated input, an external input device is inconvenient for the user to carry and is difficult to use while the user is moving.
Disclosure of Invention
An object of an embodiment of the present invention is to provide a head-mounted electronic device and an input method, so as to solve the above problems.
One embodiment of the present invention provides a head-mounted electronic device, including: a fixing unit, by which the head-mounted electronic device can be worn on the head of a user; an image processing unit, provided in the fixing unit and configured to obtain a first image to be displayed; an image transmission unit, configured to transmit the first image to a display unit; the display unit, configured to display the first image obtained by the image processing unit, wherein the display unit is connected with the fixing unit, and when the head-mounted electronic device is worn on the head of the user through the fixing unit, at least a first part of the display unit is positioned in a visual area of the user and faces the user; and an input unit, disposed on the fixing unit and/or the display unit and configured to detect a first position of an operating body relative to the head-mounted electronic device. The image processing unit is further configured to generate a second image according to the first position, and to generate a third image according to the second image and the first image; and the display unit is further configured to display the third image.
Another embodiment of the present invention provides an input method applied to a head-mounted electronic device. The input method includes: displaying a first image; detecting a first position of an operating body relative to the head-mounted electronic device; generating a second image according to the first position, and generating a third image according to the second image and the first image; and displaying the third image.
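The four steps of this input method can be sketched as follows (a minimal Python illustration; the function names, the dictionary representation of images, and the cursor-style "second image" are illustrative assumptions, not part of the disclosure):

```python
def generate_second_image(first_position):
    # The "second image" here is modeled as a simple marker overlay at the
    # detected first position of the operating body (an assumption).
    return {"marker": first_position}

def compose(first_image, second_image):
    # The "third image" combines the first image with the overlay.
    return {"base": first_image, "overlay": second_image}

def input_method(first_image, detected_position):
    """Sketch of the claimed steps: display, detect, generate, display."""
    frames = [first_image]                              # step 1: display the first image
    second = generate_second_image(detected_position)   # steps 2-3: detect position, build second image
    third = compose(first_image, second)                # step 3: compose the third image
    frames.append(third)                                # step 4: display the third image
    return frames
```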
According to the head-mounted electronic device and the input method of the embodiments, when a user wears the head-mounted electronic device, complex input can be realized with the device alone, without an external input device such as a mouse or keyboard. This frees the user from the constraints of external input devices and makes the device convenient to carry and use while the user is moving.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments of the present invention will be briefly described below.
FIG. 1 is an exemplary block diagram illustrating a head mounted electronic device according to one embodiment of the invention.
Fig. 2 is an explanatory diagram showing a schematic case of input by an input unit according to still another example of the present invention.
Fig. 3 is an exemplary block diagram illustrating a display unit according to one example of the present invention.
Fig. 4 is an explanatory diagram showing one schematic situation of the display unit shown in fig. 3.
Fig. 5 is an explanatory diagram showing one example situation of the head mounted electronic apparatus shown in fig. 1.
Fig. 6 is an explanatory diagram showing another example situation of the electronic apparatus shown in fig. 1.
FIG. 7 is a flow chart depicting an input method 700 according to an embodiment of the invention.
Figs. 8a to 8c are explanatory views showing one schematic case of generating a mapping flag indicating a mapping position and generating a second image of a second target character corresponding to the mapping flag, according to an example of the present invention.
Figs. 9a to 9e are explanatory views showing another schematic case of generating a mapping flag indicating a mapping position and generating a second image of a second target character corresponding to the mapping flag, according to an example of the present invention.
Figs. 10a to 10c are explanatory diagrams showing still another schematic case of generating a mapping flag indicating a mapping position and generating a second image of a second target character corresponding to the mapping flag, according to an example of the present invention.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that in the present specification and the drawings, steps and elements having substantially the same structure are denoted by the same reference numerals, and repeated explanation of the steps and elements will be omitted.
Next, a head-mounted electronic apparatus according to an embodiment of the present invention will be described with reference to fig. 1. FIG. 1 shows an exemplary block diagram of a head mounted electronic device 100 according to one embodiment of the invention. As shown in fig. 1, the head-mounted electronic apparatus 100 includes a fixing unit 110, an image processing unit 120, an image transmission unit 130, a display unit 140, and an input unit 150.
The head-mounted electronic apparatus 100 can be worn on the head of the user through the fixing unit 110. For example, the fixing unit 110 may include wearing parts such as a helmet, a headband, and the like. Alternatively, the fixing unit 110 may also include a support arm that can be supported on the ear of the user.
The image processing unit 120 is provided in the fixing unit. The image processing unit 120 may obtain a first image to be displayed. For example, an image file may be stored in advance in the head-mounted electronic device 100; the image processing unit 120 may obtain the stored image file and perform a play operation on the file to output the first image. For another example, the head-mounted electronic device 100 may further include a transmitting/receiving unit to receive an image file transmitted from another electronic device; the image processing unit 120 may obtain the received image file and perform a play operation on the file to output the first image.
The image transmission unit 130 may transmit the first image to the display unit 140, and the display unit 140 may display the first image obtained by the image processing unit. The display unit 140 is connected to the fixing unit 110, and at least a first portion of the display unit 140 is located within a visual area of the user and faces the user when the head-mounted electronic device is worn on the head of the user through the fixing unit.
The input unit 150 may be disposed on the fixing unit and/or the display unit, and detect a first position of the operating body with respect to the head-mounted electronic apparatus. The image processing unit 120 may generate a second image according to the first position detected by the input unit 150, and generate a third image according to the second image and the first image. Then, the display unit 140 may display the third image generated by the image processing unit 120.
According to an example of the present invention, the input unit 150 may include a sensing panel. The sensing panel can detect the operating body and obtain a second position of the operating body on the sensing panel as the first position of the operating body relative to the head-mounted electronic device 100.
For example, the second position may be the position of the operating body on the sensing panel when the operating body is in contact with the sensing panel. In this case, the image processing unit 120 may generate the second image according to the position of the operating body on the sensing panel when the operating body is in contact with the sensing panel.
Optionally, the head-mounted electronic device 100 may further include a first instruction generating unit. The first instruction generating unit may determine, according to the detection result of the sensing panel for the operating body, whether the contact of the operating body with the sensing panel satisfies a first generation condition, and generate a first control instruction when the first generation condition is satisfied. In response to the first control instruction, the image processing unit may generate the second image according to the position of the operating body on the sensing panel when the operating body is in contact with the sensing panel. For example, the first generation condition may be a first time threshold. When the detection result of the sensing panel shows that the operating body has been in contact with a specific position on the sensing panel for longer than the first time threshold, the first instruction generating unit can determine that the first generation condition is satisfied and generate the first control instruction. The image processing unit can then, in response to the first control instruction, generate the second image according to the position at which the operating body currently contacts the sensing panel.
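The first generation condition described above (a dwell-time threshold on the touch position) might be checked as in the following sketch. The threshold value, the event format, and the exact-position comparison are all illustrative assumptions:

```python
FIRST_TIME_THRESHOLD = 0.5  # seconds; illustrative value, not from the disclosure

def check_first_generation_condition(touch_events, threshold=FIRST_TIME_THRESHOLD):
    """touch_events: list of (timestamp, position) samples while the operating
    body contacts the sensing panel, in time order. Returns the contact position
    once the body has dwelt at one position past the threshold, else None."""
    if not touch_events:
        return None
    start_time, start_pos = touch_events[0]
    for t, pos in touch_events:
        if pos != start_pos:           # contact moved: restart dwell timing here
            start_time, start_pos = t, pos
        elif t - start_time >= threshold:
            return start_pos           # condition met: emit the first control instruction
    return None
```

A real sensing panel would tolerate small jitter around the contact position rather than require exact equality; the condition structure is what matters here.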
For another example, the second position may include the position of the projection of the operating body on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to a predetermined distance. Optionally, the second position may further include the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero. In this case, the image processing unit 120 may generate the second image according to the position of the operating body projected on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance, and/or the position of the operating body on the sensing panel when that distance is zero.
Optionally, the head-mounted electronic device 100 may further include a second instruction generating unit. The second instruction generating unit may determine, according to the detection result of the sensing panel for the operating body, whether an operating body whose distance from the sensing panel is less than or equal to the predetermined distance satisfies a second generation condition, and generate a second control instruction when the second generation condition is satisfied. In response to the second control instruction, the image processing unit may generate the second image according to the position of the operating body projected on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance, and/or the position of the operating body on the sensing panel when that distance is zero. For example, in the case where the second position includes the position of the operating body projected on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance, the second generation condition may be a first distance threshold between the operating body and the sensing panel, where the first distance threshold is less than or equal to the predetermined distance. When the detection result of the sensing panel shows that the distance between the operating body and the sensing panel gradually decreases to the first distance threshold, the second instruction generating unit may determine that the second generation condition is satisfied and generate the second control instruction. In response, the image processing unit can generate the second image according to the position at which the operating body is currently projected on the sensing panel.
For another example, in the case where the second position includes both the position of the operating body projected on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance and the position of the operating body on the sensing panel when that distance is zero, the second generation condition may be that the distance between the operating body and the sensing panel gradually decreases, and that the distance difference between the projected position (at a distance less than or equal to the predetermined distance) and the contact position (at a distance of zero) is less than or equal to a second distance threshold. When the detection result of the sensing panel shows that this condition holds, the second instruction generating unit may determine that the second generation condition is satisfied and generate the second control instruction. In response to the second control instruction, the image processing unit may generate the second image according to the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero.
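The approach-based variant of the second generation condition (the hover distance falling to a first distance threshold) might look like this sketch. The distance values, sample format, and how an interrupted approach is handled are illustrative assumptions:

```python
PREDETERMINED_DISTANCE = 20.0   # mm; hover-detection range of the panel (assumed)
FIRST_DISTANCE_THRESHOLD = 5.0  # mm; must be <= PREDETERMINED_DISTANCE (assumed)

def check_second_generation_condition(samples, threshold=FIRST_DISTANCE_THRESHOLD):
    """samples: list of (distance_mm, projected_position) in time order for an
    operating body within hover range. Returns the projected position once the
    distance has decreased to the threshold, else None."""
    previous = float("inf")
    for distance, position in samples:
        if distance > previous:    # distance increased: the approach was interrupted
            previous = distance
            continue
        previous = distance
        if distance <= threshold:
            return position        # condition met: emit the second control instruction
    return None
```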
According to another example of the present invention, the input unit 150 may include an image acquisition module and an image recognition module. Specifically, the image acquisition module may perform image acquisition on a spatial control region of the head-mounted electronic device 100 and obtain an acquisition result. The image recognition module determines a third position of the operating body in the spatial control region according to the acquisition result obtained by the image acquisition module, as the first position of the operating body relative to the head-mounted electronic device 100. When the head-mounted electronic device 100 is worn on the head of the user, the user views the first image displayed by the display unit 140 along a first direction, and the image acquisition module acquires the spatial control region along a second direction, wherein the included angle between the first direction and the second direction is within a predetermined angle range. For example, when the user performs a control operation in the spatial control region using an operating body such as a finger, the direction of the operating body relative to the user's head is not parallel to the direction of the displayed first image relative to the user, so the user does not need to lift the operating body into his or her visible region to operate the head-mounted electronic device; that is, the user may not see the operating body while it operates in the spatial control region.
Optionally, the head-mounted electronic device 100 may further include a third instruction generating unit. The third instruction generating unit may determine, according to the acquisition result, whether the operating body in the spatial control region satisfies a third generation condition, and generate a third control instruction when the third generation condition is satisfied, according to which the image processing unit generates the second image. For example, the third generation condition may be a second time threshold. When the image recognition module determines, according to the acquisition result obtained by the image acquisition module, that the operating body has been held at a specific position in the spatial control region for longer than the second time threshold, the third instruction generating unit may determine that the third generation condition is satisfied and generate the third control instruction. In response to the third control instruction, the image processing unit 120 may generate the second image according to the current position of the operating body in the spatial control region.
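Since camera-based recognition of a hovering finger is noisy, "held at a specific position" would in practice mean staying within a small radius, as in this sketch. The threshold, radius, and per-frame position format are illustrative assumptions:

```python
SECOND_TIME_THRESHOLD = 0.8  # seconds (assumed)
HOLD_RADIUS = 10.0           # in the units of the recognized coordinates (assumed)

def check_third_generation_condition(frames, threshold=SECOND_TIME_THRESHOLD,
                                     radius=HOLD_RADIUS):
    """frames: list of (timestamp, (x, y)) positions of the operating body
    recognized in the spatial control region, in time order."""
    if not frames:
        return None
    anchor_t, anchor_p = frames[0]
    for t, (x, y) in frames:
        ax, ay = anchor_p
        if (x - ax) ** 2 + (y - ay) ** 2 > radius ** 2:
            anchor_t, anchor_p = t, (x, y)   # moved outside the hold radius: re-anchor
        elif t - anchor_t >= threshold:
            return anchor_p                  # condition met: emit the third control instruction
    return None
```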
Further, according to still another example of the present invention, the input unit 150 may include a plurality of input regions. The input unit 150 may detect the operating body to determine, among the plurality of input regions, a target input region corresponding to the operating body, and detect the operation of the operating body in the target input region to obtain the first position according to that operation.
For example, the input unit 150 may be a touch input unit that includes a plurality of sequentially arranged touch input regions. When the operating body contacts the touch input unit, the input unit 150 may determine, among the plurality of touch input regions, the target touch input region to which the touch position of the operating body belongs, and further detect the operation of the operating body in the target touch input region, such as a long touch held at a specific position, a leftward movement, a rightward movement, a forward movement, or a backward movement, to obtain the first position according to that operation.
Fig. 2 is an explanatory diagram showing a schematic case of input by an input unit according to still another example of the present invention. In the example shown in fig. 2, the input unit 150 is a touch input unit and includes three sequentially arranged touch input regions 210, 220, and 230. As shown in fig. 2, the touch input regions 210, 220, and 230 correspond to character regions 240, 250, and 260, respectively. The characters contained in the character regions 240, 250, and 260 are different from each other. Each character region contains a first character located at its center and a plurality of second characters arranged around the first character. As shown in fig. 2, the first character of the character region 240 is S, the first character of the character region 250 is G, and the first character of the character region 260 is K. When a finger of the user, as the operating body, contacts the touch input region 220 and moves rightward in the direction indicated by the arrow, the input unit 150 determines the touch input region 220 as the target touch input region among the touch input regions 210, 220, and 230, and obtains that the operating body has moved rightward in the target touch input region 220 to the first position A. The image processing unit may then determine the character region 250 corresponding to the target touch input region 220 as the target character region among the character regions 240, 250, and 260, determine the second character H positioned to the right of the first character G as the first target character according to the movement locus and the first position A of the operating body, and generate a second image regarding the first target character H.
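The Fig. 2 scheme can be sketched as a two-level lookup: the touch region selects a character area, and the movement direction selects a character within it. Only the center characters S, G, K and the character H to the right of G are given in the text; the remaining neighbor characters below are hypothetical placeholders:

```python
# Character layouts per touch region id. Centers S/G/K and "H right of G" come
# from the Fig. 2 description; all other neighbors are invented for illustration.
CHARACTER_AREAS = {
    210: {"center": "S", "left": "A", "right": "D", "up": "W", "down": "X"},
    220: {"center": "G", "left": "F", "right": "H", "up": "T", "down": "B"},
    230: {"center": "K", "left": "J", "right": "L", "up": "I", "down": "M"},
}

def select_character(target_region, movement):
    """target_region: id of the touch input region the contact belongs to
    (210, 220, or 230). movement: 'center', 'left', 'right', 'up', or 'down'
    performed by the operating body within that region."""
    area = CHARACTER_AREAS[target_region]
    return area[movement]
```

The worked example in the text corresponds to `select_character(220, "right")`, which selects H.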
According to the present example, since the number of input regions is small and a tactile structure such as a protrusion can be provided in each input region, the user can easily recognize each touch region without seeing it and can input quickly.
It should be noted that in the example shown in fig. 2, the input unit 150 has been described as a touch input unit, but the present invention is not limited thereto; for example, as described above, the input unit 150 may also be a proximity sensing unit, an image capturing unit, or the like that does not require the operating body to contact it. The number of input regions is not limited to three: the input unit 150 may include two input regions, or four or more input regions. The input regions may also be respectively provided at portions of the fixing unit or the display unit near the left and right sides of the user's head when the head-mounted electronic device 100 is worn.
According to the head-mounted electronic device provided by the embodiments of the invention, when a user wears the device, complex input can be realized with the device alone, without an external input device such as a mouse or keyboard. This frees the user from the constraints of external input devices and makes the device convenient to carry and use while the user is moving.
Further, according to an example of the present invention, before generating the second image, the image processing unit 120 of the head-mounted electronic device 100 may obtain a mapping position of the first position with respect to the first image and, when the mapping position is determined to be located in the first image, generate a mapping flag indicating the mapping position in the first image. This helps the user determine the position of the operating body by viewing the first image including the mapping flag, avoiding erroneous input.
For example, the first image may include a keyboard region containing a plurality of third characters. The image processing unit 120 may determine the mapping position of the first position obtained from the input unit with respect to the first image, and generate a mapping flag indicating the mapping position in the first image when the mapping position is determined to be located in the first image. Through the displayed mapping flag, the user can see which third character in the keyboard region corresponds to the current first position, and can therefore adjust the first position of the operating body relative to the head-mounted electronic device according to the displayed mapping flag in order to select the desired character in the keyboard region.
Specifically, after the mapping flag indicating the mapping position is generated in the first image, the input unit 150 may further receive a character input operation of the operating body (e.g., an input operation generating the first, second, or third control instruction described above), and the image processing unit may determine, among the plurality of third characters, a second target character corresponding to the mapping flag according to the character input operation and generate a second image regarding the second target character.
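The mapping from the first position to a key under the mapping flag might be sketched as below. The QWERTY layout, key size, and linear panel-to-image scaling are all illustrative assumptions; the disclosure only states that the keyboard region contains a plurality of third characters:

```python
# Hypothetical keyboard region inside the first image.
KEYBOARD = [list("QWERTYUIOP"), list("ASDFGHJKL"), list("ZXCVBNM")]
KEY_W, KEY_H = 40, 40  # pixel size of one key in the first image (assumed)

def map_position(panel_pos, panel_size, keyboard_origin=(0, 0)):
    """Linearly scale a position on the sensing panel into keyboard-region
    coordinates of the first image; the result is where the mapping flag is drawn."""
    px, py = panel_pos
    pw, ph = panel_size
    kx = keyboard_origin[0] + px / pw * (len(KEYBOARD[0]) * KEY_W)
    ky = keyboard_origin[1] + py / ph * (len(KEYBOARD) * KEY_H)
    return kx, ky

def character_under_flag(mapped_pos):
    """Return the third character under the mapping flag, or None if the
    mapping position falls outside the keyboard region."""
    col = int(mapped_pos[0] // KEY_W)
    row = int(mapped_pos[1] // KEY_H)
    if 0 <= row < len(KEYBOARD) and 0 <= col < len(KEYBOARD[row]):
        return KEYBOARD[row][col]
    return None
```

A confirming character input operation (e.g., one satisfying the first generation condition) would then commit `character_under_flag(...)` as the second target character.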
According to another example of the present invention, the image transmission unit 130 of fig. 1 may include a first data transmission line provided in the fixing unit 110. The first data transmission line may transmit the first image to the display unit 140. Fig. 3 is an exemplary block diagram illustrating a display unit according to one example of the present invention. As shown in fig. 3, the display unit 300 may include a first display module 310, a first optical system 320, a first light guide member 330, a second light guide member 340, a frame member 350, and a lens member 360. Fig. 4 is an explanatory diagram showing one schematic situation of the display unit 300 shown in fig. 3.
The first display module 310 may be disposed in the frame member 350 and connected with the first data transmission line. The first display module 310 can display a first image according to the first video signal transmitted by the first data transmission line. According to an example of the present invention, the first display module 310 may be a display module of a micro display screen having a small size.
The first optical system 320 may also be provided in the frame member 350. The first optical system 320 may receive light emitted from the first display module and perform optical path conversion on that light to form a first enlarged virtual image; that is, the first optical system 320 has a positive refractive power. The user can therefore clearly view the first image, and the size of the image viewed by the user is not limited by the size of the display unit. For example, the optical system may include a convex lens. Alternatively, the optical system may form a lens assembly from a plurality of lenses including convex and concave lenses, in order to reduce aberrations and avoid chromatic dispersion interfering with imaging, resulting in a better visual experience for the user.
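The enlarging effect of such a positive-power system can be illustrated with the standard thin-lens relation (a textbook formula, not one given in the disclosure). Placing the micro display inside the focal length of a convex lens, i.e. at an object distance $d_o < f$:

```latex
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f},
\qquad
m = -\frac{d_i}{d_o}
```

For $d_o < f$ the image distance $d_i$ is negative, meaning the image is virtual and appears on the same side as the display, and the lateral magnification $m$ exceeds 1. For instance, $d_o = f/2$ gives $d_i = -f$ and $m = 2$: the user sees an upright virtual image twice the size of the micro display, which is why the viewed image size is not limited by the display unit's physical size.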
As shown in fig. 4, after the first optical system 320 receives the light emitted from the first display module 310 and performs optical path conversion on it, the first light guide member 330 may transmit the light passing through the first optical system to the second light guide member 340. The second light guide member 340 may be disposed in the lens member 360, and may receive the light transmitted through the first light guide member 330 and reflect it toward the eyes of the user wearing the head-mounted electronic device.
Fig. 5 is an explanatory diagram showing one example of the head-mounted electronic device shown in fig. 1. The head-mounted electronic device 500 is a glasses-type electronic device. The head-mounted electronic device 500 includes a fixing unit, an image processing unit, an image transmission unit, and an input unit similar to the fixing unit 110, the image processing unit 120, the image transmission unit 130, and the input unit 150 of the head-mounted electronic device 100, as well as a display unit similar to the one described in conjunction with fig. 3; a detailed description of these is therefore omitted.
As shown in fig. 5, the fixing unit of the head-mounted electronic device 500 includes a first support arm 510, a second support arm 520, and a third holding portion (not shown). Specifically, the first support arm 510 includes a first connecting portion and a first holding portion (shown in phantom in the first support arm 510), wherein the first connecting portion is configured to connect the frame member and the first holding portion. The second support arm 520 includes a second connecting portion and a second holding portion (shown in phantom in the second support arm 520), wherein the second connecting portion is configured to connect the frame member and the second holding portion. Similar to the display unit described above in connection with fig. 3, the display unit of the head-mounted electronic device 500 includes a frame member 530 and a lens member 540 connected to the frame member 530. The third holding portion of the fixing unit is provided on the frame member 530; in particular, it may be provided at a location on the frame member 530 between the two lens members. The head-mounted electronic device is held on the head of the user by the first, second, and third holding portions. Specifically, the first and second holding portions may be used to support the first and second support arms 510 and 520 at the user's ears, while the third holding portion may be used to support the frame member 530 at the user's nose bridge.
The input unit of the head-mounted electronic device 500 may be disposed on the first support arm 510, the second support arm 520, and/or the frame member 530. For example, when the input unit includes a sensing panel, the input unit may be disposed on the first support arm 510 and/or the second support arm 520 to facilitate an input operation such as a touch or proximity operation by the user. When the input unit includes an image acquisition module, the image acquisition module may be provided on the frame member 530 so as to capture the operating body in the spatial control region.
Further, as shown in fig. 5, according to one example of the present invention, the frame member 530 may include a first stub portion 531 connected to the first support arm and a second stub portion 532 connected to the second support arm (as shown in the circles of fig. 5). The first display module and the first optical system comprised by the display unit may be arranged in the first stub portion 531 and/or the second stub portion 532.
Furthermore, according to another example of the present invention, the head-mounted electronic device shown in fig. 5 may further include an audio processing unit and a bone conduction unit. The audio processing unit may perform audio processing and output a first audio signal. For example, an audio file may be stored in the head-mounted electronic device 500 in advance. The audio processing unit may obtain the stored audio file and perform a play operation on the file to output the first audio signal. For another example, the head-mounted electronic device 500 may further include a transmitting/receiving unit to receive an audio file transmitted from another electronic device. The audio processing unit may obtain the received audio file and perform a play operation on the file to output the first audio signal. Preferably, the audio processing unit is disposed in the first holding portion of the first support arm 510 and/or the second holding portion of the second support arm 520. Further, preferably, the bone conduction unit may be disposed on the inner side of the first connecting portion of the first support arm 510 and/or the inner side of the second connecting portion of the second support arm 520, and generate vibration according to the first audio signal. In an example of the present invention, the inner sides of the first and second connecting portions are the sides of the first and second connecting portions that are close to the head of the user when the head-mounted electronic device is worn on the head of the user. The bone conduction unit may generate vibration according to the audio signal from the audio processing unit so that the user can listen to audio through the generated vibration. Specifically, when the head-mounted electronic device 500 is worn on the head of the user, the bone conduction unit is in contact with the head of the user, so that the user can perceive the vibration generated by the bone conduction unit.
According to an example of the present invention, the bone conduction unit may directly receive the audio signal from the audio processing unit and generate vibration according to the audio signal. Alternatively, according to another example of the present invention, the head-mounted electronic device 500 may further include a power amplifying unit provided in the fixing unit. The power amplifying unit can receive the audio signal from the audio processing unit and amplify the audio signal, wherein the amplified audio signal is an alternating voltage signal. The power amplification unit may apply the amplified audio signal to the bone conduction unit. The bone conduction unit may be driven by the amplified audio signal to generate vibrations.
By providing the bone conduction unit in the head-mounted electronic device, a user can listen to audio through the bone conduction unit disposed on the inner side of the device, improving audio output quality. Further, since there is no need to provide a conventional audio playing unit such as a speaker or an earphone in the head-mounted electronic device, the content the user listens to is kept from being overheard by others while the space occupied by the head-mounted electronic device is reduced.
Further, preferably, in the first and/or second support arm, a protective layer of the bone conduction unit that is in contact with the head of the user, the body of the bone conduction unit, a data transmission unit layer (which may include a data transmission line, for example), and an input unit layer may be sequentially disposed in order from the inner side of the support arm (i.e., the side close to the head of the user when the head-mounted electronic device is worn on the head of the user) to the outer side of the support arm (i.e., the side away from the head of the user when the head-mounted electronic device is worn on the head of the user). In addition, in the case that the input unit includes the above-described sensing panel, the input unit layer may include a sensing panel layer and a protective layer of the sensing panel layer; furthermore, a spacer layer may be disposed between the data transmission unit layer and the sensing panel layer to prevent electric signals in the data transmission unit layer from interfering with the sensing panel layer. This structure arranges the positions of the bone conduction unit and the input unit on the head-mounted electronic device in a reasonable manner, improving audio output quality while optimizing the appearance design of the head-mounted electronic device, and thus making the device more convenient for the user to use and operate.

Fig. 6 is an explanatory diagram showing another example of the electronic apparatus shown in fig. 1. In the example shown in fig. 6, the head-mounted electronic device 600 includes a fixing unit, an image processing unit, an image transmission unit, and an input unit similar to the fixing unit 110, the image processing unit 120, the image transmission unit 130, and the input unit 150 in the head-mounted electronic device 100, and a display unit similar to the display unit 140 in the electronic device 100 described in conjunction with fig. 3, and thus a detailed description thereof is omitted.
As shown in fig. 6, the fixing unit of the head-mounted electronic device 600 includes a headband part 610 and connection parts 620 and 630. Similar to the display unit described above in connection with fig. 3, the display unit of the head-mounted electronic device 600 includes frame members 640 and 650 and lens members 660 and 670 connected to the frame members 640 and 650, respectively. The connection parts 620 and 630 are connected to the frame members 640 and 650, respectively. When the headband part 610 is worn on a user's head, it can flexibly deform such that its first and second ends press against the user's left and right ears, respectively. Preferably, as shown in fig. 6, the two ends of the headband part 610 may be provided with a first shell 611 and a second shell 612, respectively. Alternatively, speaker units may be provided in the first shell 611 and the second shell 612 so that the user can play audio using the head-mounted electronic device 600.
The input unit of the head-mounted electronic device 600 may be provided on the headband part 610, the connection parts 620 and 630, and/or the frame members 640 and 650. For example, when the input unit includes a sensing panel, the input unit may be provided on the connection parts 620 and 630 to facilitate an input operation such as a touch or proximity operation by the user. When the input unit includes an image acquisition module, the image acquisition module may be provided on the headband part 610 and/or the frame members 640 and 650 so as to capture the operating body in the spatial control region.
Next, an input method of an embodiment of the present invention is explained with reference to fig. 7. Fig. 7 is a flow chart depicting an input method 700 according to an embodiment of the invention. The input method 700 may be applied to the head-mounted electronic devices shown in figs. 1 to 6. The head-mounted electronic device according to the embodiment of the present invention has been described in detail above with reference to figs. 1 to 6, and for brevity it is not described again here.
As shown in fig. 7, in step S701, a first image is displayed. For example, an image file may be stored in advance in the head-mounted electronic device. In step S701, the stored image file may be obtained, and a play operation may be performed on the file to display the first image. For another example, the head-mounted electronic device may further include a transmitting/receiving unit to receive an image file transmitted from another electronic device. In step S701, the received image file may be obtained and a play operation may be performed on the file to display the first image.
In step S702, a first position of an operator with respect to the head-mounted electronic device may be detected. Then in step S703, a second image is generated from the first position detected in step S702, and a third image is generated from the second image and the first image. Finally, the generated third image may be displayed in step S704.
Optionally, before step S703, the method shown in fig. 7 may further include obtaining a mapping position of the first position with respect to the first image, determining whether the mapping position is located in the first image, and generating a mapping identifier indicating the mapping position in the first image when the mapping position is determined to be located in the first image. Thereby helping the user to determine the position of the operator by viewing the first image including the mapping identifier, avoiding erroneous input.
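As a rough illustration of this optional step, the sketch below scales a position detected on the input surface into the coordinate space of the first image and reports whether it lands inside the image, in which case a mapping identifier would be drawn there. The function names, the proportional-scaling rule, and the coordinate conventions are illustrative assumptions, not taken from the embodiment.

```python
def map_to_image(first_position, surface_size, image_size):
    """Scale a position on the input surface (sensing panel or spatial
    control region) into the coordinate space of the first image.
    Proportional scaling is an assumption for illustration."""
    x, y = first_position
    sw, sh = surface_size
    iw, ih = image_size
    return (x / sw * iw, y / sh * ih)

def mapping_identifier(first_position, surface_size, image_size):
    """Return the mapping position if it falls inside the first image
    (so a mapping identifier would be generated there), else None."""
    mx, my = map_to_image(first_position, surface_size, image_size)
    iw, ih = image_size
    if 0 <= mx < iw and 0 <= my < ih:
        return (mx, my)
    return None
```

A position outside the surface-to-image mapping simply yields no identifier, which matches the description: the identifier is generated only when the mapping position is determined to be located in the first image.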
Further, according to an example of the present invention, a plurality of third characters may be included in the first image. After generating the mapping identifier indicating the mapping position in the first image, the input method 700 shown in fig. 7 may further include receiving a character input operation from the operator. And in step S703, a second target character corresponding to the mapping identifier is determined among the plurality of third characters according to the character input operation, and then a second image regarding the second target character is generated.
Fig. 8a to 8c are explanatory views showing one schematic case, according to the present invention, of generating a mapping identifier indicating a mapping position and generating a second image of a second target character corresponding to the mapping identifier. In the example shown in figs. 8a to 8c, the head-mounted electronic device includes a sensing panel 820. According to step S701, a first image is displayed. As shown in fig. 8a, a keyboard region including a plurality of third characters may be included in the first image 810.
As shown in fig. 8b, according to step S702, the operating body is detected, and the position of the operating body on the sensing panel 820 when the operating body is in contact with the sensing panel is obtained as the first position of the operating body relative to the head-mounted electronic device. Then, as described above, a mapping position of the first position with respect to the first image is obtained, and a mapping identifier indicating the mapping position is generated in the first image (as shown by the finger mark 830 in fig. 8c).
In the example shown in figs. 8a to 8c, when a character input operation from the operating body is received, as described above, in step S703 a second target character corresponding to the mapping identifier is determined among the plurality of third characters according to the character input operation, and then a second image regarding the second target character is generated. Specifically, when contact of the operating body with the sensing panel 820 is detected, it may be determined in step S703 whether the contact of the operating body with the sensing panel 820 satisfies a first generation condition. For example, the first generation condition may be a first time threshold. When the detection result of the sensing panel 820 for the operating body indicates that the contact time of the operating body with a specific position on the sensing panel is greater than the first time threshold, it may be determined in step S703 that the first generation condition is satisfied. It may further be determined that the current input operation of the operating body is a character input operation; a first control instruction is generated in response to the character input operation, and a second image is then generated, according to the first control instruction, based on the current position of the operating body on the sensing panel.
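The dwell-time check just described can be sketched as follows. The sample format of timestamped contact positions, the helper name, and the 0.5-second threshold value are hypothetical choices for illustration, not specified by the embodiment.

```python
def check_first_generation_condition(contact_samples, first_time_threshold=0.5):
    """Sketch of the first generation condition: contact with the sensing
    panel held at one position for longer than the first time threshold is
    treated as a character input operation. `contact_samples` is a list of
    (timestamp, position) pairs taken while the operating body touches the
    panel; the 0.5 s default is an illustrative value."""
    if len(contact_samples) < 2:
        return None
    positions = {pos for _, pos in contact_samples}
    if len(positions) != 1:
        return None               # the contact moved between positions
    duration = contact_samples[-1][0] - contact_samples[0][0]
    if duration > first_time_threshold:
        # Here a first control instruction would be generated, and the
        # second image produced at this position on the sensing panel.
        return positions.pop()
    return None
```

Returning the held position stands in for generating the first control instruction; a real implementation would dispatch that instruction to the image processing unit instead.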
Fig. 9a to 9e are explanatory views showing another schematic case, according to the present invention, of generating a mapping identifier indicating a mapping position and generating a second image of a second target character corresponding to the mapping identifier. In the example shown in figs. 9a to 9e, the head-mounted electronic device includes a sensing panel 920. According to step S701, a first image is displayed. As shown in fig. 9a, a keyboard region including a plurality of third characters may be included in the first image 910.
Then, according to step S702, the operating body is detected, and the position of the projection of the operating body on the sensing panel when the distance between the operating body and the sensing panel 920 is less than or equal to a predetermined distance (as shown in fig. 9b) and/or the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero is obtained as the first position of the operating body relative to the head-mounted electronic device. Then, as described above, a mapping position of the first position with respect to the first image is obtained, and a mapping identifier indicating the mapping position is generated in the first image (as shown by the finger mark 930 in fig. 9c).
In the example shown in figs. 9a to 9e, when it is detected that the distance between the operating body and the sensing panel 920 is less than or equal to the predetermined distance H1, as described above, in step S703 a second target character corresponding to the mapping identifier is determined among the plurality of third characters according to the character input operation, and then a second image regarding the second target character is generated. For example, in the example shown in fig. 9d, when it is detected that the distance between the operating body and the sensing panel 920 is less than or equal to the predetermined distance H1, it may be determined in step S703 whether the operating body, whose distance from the sensing panel is less than or equal to the predetermined distance, satisfies a second generation condition. The second generation condition may be a first distance threshold H2 between the operating body and the sensing panel, where the first distance threshold may be less than or equal to the predetermined distance. When the detection result of the sensing panel 920 for the operating body indicates that the distance between the operating body and the sensing panel gradually decreases to the first distance threshold H2, it may be determined in step S703 that the second generation condition is satisfied; a second control instruction is generated, and in response to the second control instruction a second image is generated according to the position of the projection of the current operating body on the sensing panel.
For another example, the second generation condition may be that the distance between the operating body and the sensing panel gradually decreases, and that the distance difference between the position of the projection of the operating body on the sensing panel when that distance is less than or equal to the predetermined distance, and the position of the operating body on the sensing panel when that distance is zero, is less than or equal to a second distance threshold. As shown in fig. 9e, when the detection result of the sensing panel for the operating body indicates that both conditions hold, it may be determined in step S703 that the second generation condition is satisfied; a second control instruction is generated, and in response to the second control instruction a second image is generated according to the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero.
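The two variants of the second generation condition can be sketched as below. All names, the sampling format, and the specific threshold values are illustrative assumptions; the embodiment only fixes the relationships H2 <= H1 and the second-distance-threshold comparison.

```python
def second_condition_a(distances, h1, h2):
    """Variant A: once the operating body comes within the predetermined
    distance H1, its distance to the sensing panel must decrease
    monotonically down to the first distance threshold H2 (H2 <= H1).
    `distances` is a time-ordered list of hover distances."""
    near = [d for d in distances if d <= h1]
    if not near:
        return False
    decreasing = all(a >= b for a, b in zip(near, near[1:]))
    return decreasing and near[-1] <= h2

def second_condition_b(projected_pos, touch_pos, second_distance_threshold):
    """Variant B: the projected position while approaching and the contact
    position at touchdown must differ by no more than the second distance
    threshold (Euclidean distance on the panel, an assumed metric)."""
    dx = projected_pos[0] - touch_pos[0]
    dy = projected_pos[1] - touch_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= second_distance_threshold
```

In variant A the second image would be generated at the projected position; in variant B it would be generated at the contact position, matching the two cases described above.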
Fig. 10a to 10c are explanatory diagrams showing one schematic case, according to the present invention, of generating a mapping identifier indicating a mapping position and generating a second image of a second target character corresponding to the mapping identifier. In the example shown in figs. 10a to 10c, the head-mounted electronic device includes an image acquisition module. According to step S701, a first image is displayed. As shown in fig. 10a, a keyboard region including a plurality of third characters may be included in the first image 1010.
As shown in fig. 10b, according to step S702, an image of the spatial control region 1020 of the head-mounted electronic device is acquired by the image acquisition module, and an acquisition result is obtained; a first position of the operating body in the spatial control region 1020 relative to the head-mounted electronic device is then determined according to the acquisition result, wherein, when the head-mounted electronic device is worn on the head of the user, the user can view the displayed first image along a first direction, and the image acquisition module acquires the spatial control region along a second direction, an included angle between the first direction and the second direction being within a predetermined range. Then, as described above, a mapping position of the first position with respect to the first image is obtained, and a mapping identifier indicating the mapping position is generated in the first image (as shown by the finger mark 1030 in fig. 10c).
In the example shown in figs. 10a to 10c, when a character input operation from the operating body is received, as described above, in step S703 a second target character corresponding to the mapping identifier is determined among the plurality of third characters according to the character input operation, and then a second image regarding the second target character is generated. Specifically, when it is determined from the acquisition result that the operating body is located in the spatial control region 1020, it may be determined in step S703 whether the operating body in the spatial control region 1020 satisfies a third generation condition. For example, the third generation condition may be a second time threshold. When the acquisition result for the operating body indicates that the time the operating body is held at a specific position in the spatial control region 1020 is greater than the second time threshold, it may be determined in step S703 that the third generation condition is satisfied. It may further be determined that the current input operation of the operating body is a character input operation; a third control instruction is generated in response to the character input operation, and a second image is then generated, according to the third control instruction, based on the current position of the operating body in the spatial control region.
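Unlike the touch case, a spatial dwell check has to tolerate small hand jitter in the acquired positions. The sketch below holds a third-condition check under that assumption; the 3-D sample format, the threshold, and the jitter tolerance are all illustrative values, not taken from the embodiment.

```python
def check_third_generation_condition(samples, second_time_threshold=0.8,
                                     jitter_tolerance=5.0):
    """Sketch of the third generation condition: the operating body stays
    near one spatial position, within a small jitter tolerance, for longer
    than the second time threshold. `samples` are (timestamp, 3-D position)
    pairs derived from the image acquisition results."""
    if len(samples) < 2:
        return None
    t0, p0 = samples[0]
    for _, p in samples[1:]:
        # Euclidean drift from the initial spatial position.
        drift = sum((a - b) ** 2 for a, b in zip(p, p0)) ** 0.5
        if drift > jitter_tolerance:
            return None           # the operating body moved away
    if samples[-1][0] - t0 > second_time_threshold:
        return p0                 # third control instruction at this position
    return None
```

As before, returning the held position stands in for generating the third control instruction that drives generation of the second image.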
Further, according to another example of the present invention, the head-mounted electronic device may include a plurality of input regions. In this case, in step S702, an image of the spatial control region of the head-mounted electronic device may be acquired by the image acquisition module to obtain an acquisition result, and a third position of the operating body in the spatial control region may be determined according to the acquisition result as the first position of the operating body relative to the head-mounted electronic device.
Preferably, the plurality of input regions may correspond to a plurality of character regions, wherein the characters contained in the plurality of character regions are different from each other. Each character region contains a first character located at the center of the character region and a plurality of second characters arranged around the first character. In step S703, a target character region may be determined among the plurality of character regions according to the target input region; a first target character is then determined in the target character region according to the first position, and a second image regarding the first target character is generated. According to the present example, since the number of input regions is small and a structure such as a protrusion can be provided in each input region, the user can easily recognize each touch region even without seeing it and can input quickly.
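The two-stage lookup described above can be sketched as follows. The character layout, the region geometry, the center/quadrant selection rule, and every name below are hypothetical illustrations; the embodiment only specifies a center character surrounded by other characters per region.

```python
# Hypothetical layout: each input region maps to one character region,
# with a first character at the center and second characters around it.
CHARACTER_REGIONS = {
    0: {"center": "e", "around": ["a", "b", "c", "d"]},
    1: {"center": "o", "around": ["m", "n", "p", "q"]},
}

def select_character(target_input_region, first_position, region_size):
    """Determine the first target character: the center (first) character
    when the first position is near the middle of the region, otherwise
    one of the surrounding (second) characters chosen by quadrant."""
    region = CHARACTER_REGIONS[target_input_region]
    w, h = region_size
    x, y = first_position
    cx, cy = w / 2, h / 2
    # Near the center -> the first character of the target character region.
    if abs(x - cx) < w / 4 and abs(y - cy) < h / 4:
        return region["center"]
    # Otherwise pick a surrounding second character by quadrant.
    quadrant = (0 if y < cy else 2) + (0 if x < cx else 1)
    return region["around"][quadrant]
```

With few regions and a tactile protrusion per region, the user can find the target input region by touch alone, and only the within-region position needs fine detection.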
According to the input method of the above embodiment of the present invention, when wearing the head-mounted electronic device, the user can perform complex input using only the head-mounted electronic device itself, without an external input device such as a mouse or keyboard. This frees the user from the constraints of external input devices, and the head-mounted electronic device can be conveniently carried and used while the user is moving.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It should be understood by those skilled in the art that various modifications, combinations, partial combinations and substitutions may be made in the present invention depending on design requirements and other factors as long as they are within the scope of the appended claims and their equivalents.

Claims (26)

1. A head-mounted electronic device, comprising:
a fixing unit by which the head-mounted electronic apparatus can be worn on a head of a user;
an image processing unit provided in the fixing unit and configured to obtain a first image to be displayed;
an image transmission unit configured to transmit the first image to a display unit;
the display unit is configured to display the first image obtained by the image processing unit, wherein the display unit is connected with the fixing unit, and when the display unit is worn on the head of a user through the fixing unit, at least a first part of the display unit is positioned in a visual area of the user and faces the user;
an input unit disposed on the fixing unit and/or the display unit and configured to detect a first position of an operating body with respect to the head-mounted electronic device;
the image processing unit is further configured to generate a second image from the first position and a third image from the second image and the first image; and
the display unit is further configured to display the third image;
wherein,
the input unit includes a plurality of input regions,
the input unit detects the operation body to determine a target input region corresponding to the operation body among the plurality of input regions, and detects an operation of the operation body in the target input region to obtain the first position according to the operation of the operation body in the target input region;
the input unit is a touch input unit and comprises a plurality of touch input areas which are sequentially arranged, and when the operation body is in contact with the touch input unit, the touch input unit determines, among the plurality of touch input areas, a target touch input area to which the touch position of the operation body when in contact with the touch input unit belongs, and further detects the operation of the operation body in the target touch input area so as to obtain the first position according to the operation of the operation body in the target touch input area;
or,
the input unit comprises an image acquisition module and an image recognition module, wherein the image acquisition module is used for acquiring images of a space control area of the head-mounted electronic equipment and acquiring an acquisition result; the image recognition module determines a third position of the operating body in the space control area according to the acquisition result obtained by the image acquisition module, and the third position is used as the first position of the operating body relative to the head-mounted electronic equipment.
2. The head-mounted electronic device of claim 1, wherein
The input unit includes a sensing panel,
the sensing panel detects the operation body to obtain a second position of the operation body on the sensing panel as the first position.
3. The head-mounted electronic device of claim 2, wherein
The second position is a position of the operation body on the sensing panel when the operation body is in contact with the sensing panel.
4. The head-mounted electronic device of claim 3, further comprising:
a first instruction generating unit configured to determine whether contact of the operating body with the sensing panel satisfies a first generation condition according to a detection result of the sensing panel for the operating body, and generate a first control instruction when it is determined that the first generation condition is satisfied,
the image processing unit generates a second image according to the first control instruction.
5. The head-mounted electronic device of claim 2, wherein
The second position includes a position of a projection of the operating body on the sensing panel when a distance between the operating body and the sensing panel is less than or equal to a predetermined distance.
6. The head-mounted electronic device of claim 5, wherein
The second position further includes a position of the operating body on the sensing panel when a distance between the operating body and the sensing panel is zero.
7. The head-mounted electronic device of claim 5 or 6, further comprising:
a second instruction generating unit configured to determine whether the operation body having a distance from the sensing panel smaller than or equal to a predetermined distance satisfies a second generation condition according to a detection result of the sensing panel for the operation body, and generate a second control instruction when it is determined that the second generation condition is satisfied,
the image processing unit generates a second image according to the second control instruction.
8. The head-mounted electronic device of claim 1,
when the head-mounted electronic equipment is worn on the head of a user, the user can watch the first image displayed by the display unit and obtained by the image processing unit along a first direction, and the image acquisition module acquires the space control area along a second direction, wherein an included angle between the first direction and the second direction is within a preset included angle range.
9. The head-mounted electronic device of claim 8, further comprising:
a third instruction generation unit configured to determine whether the operation body in the spatial control region satisfies a third generation condition according to the acquisition result, and generate a third control instruction when it is determined that the third generation condition is satisfied,
the image processing unit generates a second image according to the third control instruction.
10. The head-mounted electronic device of claim 1, wherein
The plurality of input regions correspond to a plurality of character regions, wherein characters contained in the plurality of character regions are different from each other,
each of the character areas includes:
a first character located at the center of the character area; and
a plurality of second characters disposed around the first character,
the image processing unit determines a target character region among the plurality of character regions according to the target input region, determines a first target character in the target character region according to a first position, and generates the second image regarding the first target character.
11. The head-mounted electronic device of claim 2 or 8, wherein
The image processing unit is further configured to obtain a mapped position of the first position relative to the first image, and to generate a mapping identifier in the first image indicating the mapped position when it is determined that the mapped position is located in the first image.
12. The head-mounted electronic device of claim 11, wherein
The first image includes a plurality of third characters therein,
the input unit is further configured to receive a character input operation from an operation body, an
The image processing unit determines a second target character corresponding to the mapping identifier among the plurality of third characters according to the character input operation, and generates the second image regarding the second target character.
13. The head-mounted electronic device of claim 1, wherein
The image transmission unit includes: a data transmission line disposed in the fixing unit and configured to transmit the first image to a display unit,
the display unit includes:
a frame member;
a lens member connected to the frame member;
a first display module, disposed in the frame member, configured to display a first image according to a first video signal transmitted by the data transmission line;
a first optical system provided in the frame member, configured to receive light emitted from the first display module and perform optical path conversion on the light emitted from the first display module to form a first enlarged virtual image;
a first light guide member configured to transmit light passing through the first optical system to a second light guide member;
the second light guide member is disposed in the lens member and configured to reflect the light transmitted by the first light guide member toward the eyes of a user wearing the head-mounted electronic device.
14. The head mounted electronic device of claim 13, wherein the head mounted electronic device is a glasses type electronic device, wherein the fixing unit comprises:
a first support arm comprising a first connecting portion and a first holding portion, wherein the first connecting portion is configured to connect the frame member and the first holding portion;
a second support arm including a second connecting portion and a second holding portion, wherein the second connecting portion is configured to connect the frame member and the second holding portion; and
a third holding portion provided on the frame member, an
The first, second, and third retention portions are configured to retain the head-mounted electronic device on a head of a user.
15. An input method applied to a head-mounted electronic device, the input method comprising:
displaying a first image;
detecting a first position of an operating body relative to the head-mounted electronic device;
generating a second image from the first position and a third image from the second image and the first image; and
displaying the third image;
wherein the head-mounted electronic device comprises a plurality of input areas, and the detecting a first position of an operating body relative to the head-mounted electronic device comprises:
detecting the operating body to determine, among the plurality of input areas, a target input area corresponding to the operating body; and
detecting an operation of the operating body in the target input area to obtain the first position according to the operation of the operating body in the target input area;
wherein the detecting a first position of an operating body relative to the head-mounted electronic device comprises: detecting, by an input unit, the first position of the operating body with respect to the head-mounted electronic device;
wherein the input unit is a touch input unit comprising a plurality of sequentially arranged touch input areas, and the detecting the operating body to determine a target input area and the detecting the operation of the operating body in the target input area to obtain the first position comprise:
when the operating body is in contact with the touch input unit, determining, by the touch input unit among the plurality of touch input areas, a target touch input area to which the touch position of the operating body belongs, and further detecting the operation of the operating body in the target touch input area to obtain the first position according to the operation of the operating body in the target touch input area;
or,
wherein the input unit comprises an image acquisition module and an image recognition module, and the detecting the operating body to determine a target input area and the detecting the operation of the operating body in the target input area to obtain the first position comprise:
acquiring, by the image acquisition module, images of a spatial control area of the head-mounted electronic device to obtain an acquisition result; and determining, by the image recognition module according to the acquisition result obtained by the image acquisition module, a third position of the operating body in the spatial control area, the third position serving as the first position of the operating body relative to the head-mounted electronic device.
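As an editorial illustration of the touch alternative in claim 15, resolving a contact point into a target touch input area and a position within it can be sketched as follows; the panel length and area count are assumptions for the sketch, not values taken from the claims.

```python
# Hypothetical sketch: a strip-shaped touch input unit is divided into
# sequentially arranged touch input areas, and a contact coordinate is
# resolved into (target area index, first position within that area).

def resolve_touch(x_mm: float, panel_length_mm: float, num_areas: int):
    """Return (target_area_index, local_position) for a contact at x_mm."""
    if not 0.0 <= x_mm <= panel_length_mm:
        raise ValueError("contact outside the touch input unit")
    area_length = panel_length_mm / num_areas
    # Clamp so a touch exactly at the far edge falls into the last area.
    index = min(int(x_mm // area_length), num_areas - 1)
    local = x_mm - index * area_length  # first position within the target area
    return index, local
```

A 60 mm panel split into three areas, for example, maps a contact at 25 mm to the second area, 5 mm from that area's edge.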
16. The input method of claim 15, further comprising, prior to the generating a second image from the first position:
obtaining a mapping position of the first position relative to the first image;
determining whether the mapping position is located in the first image; and
when the mapping position is determined to be located in the first image, generating a mapping identifier for indicating the mapping position in the first image.
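The mapping step of claim 16 — projecting the first position into the first image and generating a mapping identifier only when the result lies inside the image — can be sketched as follows. Linear scaling between an input surface and the image is an assumption made for illustration.

```python
# Illustrative sketch: map an input-surface position into the displayed
# first image and place a mapping identifier (e.g. a cursor) only when
# the mapped point falls inside the image bounds.

def map_position(first_pos, input_size, image_size):
    """Scale an (x, y) input position to image pixel coordinates."""
    x = first_pos[0] / input_size[0] * image_size[0]
    y = first_pos[1] / input_size[1] * image_size[1]
    return x, y

def mapping_identifier(first_pos, input_size, image_size):
    """Return the identifier position, or None when outside the image."""
    x, y = map_position(first_pos, input_size, image_size)
    inside = 0 <= x < image_size[0] and 0 <= y < image_size[1]
    return (x, y) if inside else None
```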
17. The input method of claim 16, wherein
The first image includes a plurality of third characters therein,
the input method further comprises:
receiving a character input operation from the operating body,
wherein the generating a second image from the first position comprises:
determining, according to the character input operation, a second target character corresponding to the mapping identifier among the plurality of third characters; and
generating the second image for the second target character.
18. The input method of claim 17, wherein the head-mounted electronic device comprises a sensing panel, and the detecting a first position of an operating body relative to the head-mounted electronic device comprises:
detecting the operating body to obtain, as the first position, a second position of the operating body on the sensing panel.
19. The input method of claim 18, wherein
The second position is a position of the operating body on the sensing panel when the operating body is in contact with the sensing panel.
20. The input method of claim 19, wherein
the receiving a character input operation from the operating body comprises:
determining whether the contact of the operating body with the sensing panel satisfies a first generation condition; and
when it is determined that the first generation condition is satisfied, determining that the current input operation of the operating body is a character input operation,
and the determining, according to the character input operation, a second target character corresponding to the mapping identifier among the plurality of third characters comprises:
when the current input operation of the operating body is determined to be a character input operation, generating a first control instruction; and
generating the second image according to the first control instruction.
21. The input method of claim 18, wherein
The second position includes a position of a projection of the operating body on the sensing panel when a distance between the operating body and the sensing panel is less than or equal to a predetermined distance.
22. The input method of claim 21, wherein
The second position further includes a position of the operating body on the sensing panel when a distance between the operating body and the sensing panel is zero.
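A minimal sketch of the hover and contact cases of claims 21 and 22, assuming the sensing panel reports a projected position together with a distance reading; the predetermined distance below is an illustrative value, not one from the claims.

```python
# Hedged sketch: a hover within a predetermined distance already yields
# a second position (the projection on the panel); distance zero is the
# contact case of claim 22.

PREDETERMINED_DISTANCE_MM = 10.0  # illustrative assumption only

def second_position(projection_xy, distance_mm):
    """Return the second position, or None when the body is out of range."""
    if distance_mm > PREDETERMINED_DISTANCE_MM:
        return None                      # too far: no position reported
    is_contact = distance_mm == 0.0      # claim-22 contact case
    return {"position": projection_xy, "contact": is_contact}
```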
23. The input method of claim 21 or 22, wherein
the receiving a character input operation from the operating body comprises:
determining whether the operating body whose distance from the sensing panel is less than or equal to the predetermined distance satisfies a second generation condition; and
when it is determined that the second generation condition is satisfied, determining that the current input operation of the operating body is a character input operation,
and the determining, according to the character input operation, a second target character corresponding to the mapping identifier among the plurality of third characters comprises:
when the current input operation of the operating body is determined to be a character input operation, generating a second control instruction; and
generating the second image according to the second control instruction.
24. The input method of claim 17, wherein,
when the head-mounted electronic device is worn on the head of a user, the user views the displayed first image along a first direction, and the image acquisition module captures images of the spatial control area along a second direction, an included angle between the first direction and the second direction being within a preset angle range.
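The angular constraint of claim 24 amounts to checking that the angle between the viewing direction and the capture direction falls within a preset range; a sketch, with the direction vectors and range being illustrative assumptions:

```python
# Sketch of the claim-24 geometry check: compute the included angle
# between two 3-D direction vectors and compare it to a preset maximum.
import math

def included_angle_deg(v1, v2):
    """Angle in degrees between two 3-D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def within_preset_range(v1, v2, max_angle_deg=30.0):
    """True when the included angle is inside the preset angle range."""
    return included_angle_deg(v1, v2) <= max_angle_deg
```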
25. The input method of claim 24, wherein
the receiving a character input operation from the operating body comprises:
determining whether the operating body in the spatial control area satisfies a third generation condition; and
when it is determined that the third generation condition is satisfied, determining that the current input operation of the operating body is a character input operation,
and the determining, according to the character input operation, a second target character corresponding to the mapping identifier among the plurality of third characters comprises:
when the current input operation of the operating body is determined to be a character input operation, generating a third control instruction; and
generating the second image according to the third control instruction.
26. The input method of claim 15, wherein
the plurality of input areas correspond to a plurality of character areas, the characters contained in the plurality of character areas being different from each other,
each of the character areas includes:
a first character located at the center of the character area; and
a plurality of second characters disposed around the first character,
and the generating a second image from the first position comprises:
determining a target character area among the plurality of character areas according to the target input area; and
determining a first target character in the target character area according to the first position, and generating the second image for the first target character.
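The character-area layout of claim 26 (a first character at the center, second characters arranged around it) suggests direction-based selection; a hypothetical sketch, with the character sets and dead-zone radius assumed for illustration:

```python
# Illustrative sketch: a first position relative to the area center
# selects either the center (first) character or one of the surrounding
# (second) characters by angular sector.
import math

def pick_character(area, offset_xy, dead_zone=0.2):
    """Select the center character or one of the surrounding characters.

    area: {"center": str, "around": [str, ...]} with the surrounding
          characters ordered counter-clockwise starting at angle 0.
    offset_xy: first position relative to the area center, in [-1, 1].
    """
    dx, dy = offset_xy
    if math.hypot(dx, dy) <= dead_zone:
        return area["center"]            # first character at the center
    sector = 2 * math.pi / len(area["around"])
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return area["around"][int(angle // sector)]
```

With four surrounding characters, each occupies a 90-degree sector, so moving the operating body right, up, or left from the center picks a different second character.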
CN201210593624.XA 2012-12-31 2012-12-31 Head-mounted electronic device and input method Active CN103914128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210593624.XA CN103914128B (en) 2012-12-31 2012-12-31 Head-mounted electronic device and input method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210593624.XA CN103914128B (en) 2012-12-31 2012-12-31 Head-mounted electronic device and input method

Publications (2)

Publication Number Publication Date
CN103914128A CN103914128A (en) 2014-07-09
CN103914128B true CN103914128B (en) 2017-12-29

Family

ID=51039881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210593624.XA Active CN103914128B (en) 2012-12-31 2012-12-31 Head-mounted electronic device and input method

Country Status (1)

Country Link
CN (1) CN103914128B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6340301B2 (en) 2014-10-22 2018-06-06 株式会社ソニー・インタラクティブエンタテインメント Head mounted display, portable information terminal, image processing apparatus, display control program, display control method, and display system
CN104391575A (en) * 2014-11-21 2015-03-04 深圳市哲理网络科技有限公司 Head mounted display device
KR20160063812A (en) * 2014-11-27 2016-06-07 삼성전자주식회사 Method for configuring screen, electronic apparatus and storage medium
CN106155284B (en) * 2015-04-02 2019-03-08 联想(北京)有限公司 Electronic equipment and information processing method
CN107466396A (en) * 2016-03-22 2017-12-12 深圳市柔宇科技有限公司 Head-mounted display apparatus and its control method
US11022802B2 (en) * 2018-09-28 2021-06-01 Apple Inc. Dynamic ambient lighting control
CN111445393B (en) * 2019-10-22 2020-11-20 合肥耀世同辉科技有限公司 Electronic device content driving platform

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673161A (en) * 2009-10-15 2010-03-17 复旦大学 Visual, operable and non-solid touch screen system
CN101719014A (en) * 2008-10-09 2010-06-02 联想(北京)有限公司 Input display method
WO2011106798A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
CN102445988A (en) * 2010-10-01 2012-05-09 索尼公司 Input device
CN102779000A (en) * 2012-05-03 2012-11-14 乾行讯科(北京)科技有限公司 User interaction system and method
CN102812417A (en) * 2010-02-02 2012-12-05 寇平公司 Wireless hands-free computing headset with detachable accessories controllable by motion, body gesture and/or vocal commands


Also Published As

Publication number Publication date
CN103914128A (en) 2014-07-09

Similar Documents

Publication Publication Date Title
CN103914128B (en) Head-mounted electronic device and input method
US9542958B2 (en) Display device, head-mount type display device, method of controlling display device, and method of controlling head-mount type display device
EP3029550B1 (en) Virtual reality system
KR102349716B1 (en) Method for sharing images and electronic device performing thereof
JP6250041B2 (en) Reduction of external vibration in bone conduction speakers
KR101726676B1 (en) Head mounted display
EP3714318B1 (en) Position tracking system for head-mounted displays that includes sensor integrated circuits
US20160034032A1 (en) Wearable glasses and method of displaying image via the wearable glasses
EP3370102B1 (en) Hmd device and method for controlling same
US20130241927A1 (en) Computer device in form of wearable glasses and user interface thereof
US20150009309A1 (en) Optical Frame for Glasses and the Like with Built-In Camera and Special Actuator Feature
US20130265300A1 (en) Computer device in form of wearable glasses and user interface thereof
US11309947B2 (en) Systems and methods for maintaining directional wireless links of motile devices
US20170090557A1 (en) Systems and Devices for Implementing a Side-Mounted Optical Sensor
KR102110208B1 (en) Glasses type terminal and control method therefor
CN104298340A (en) Control method and electronic equipment
CN112835445A (en) Interaction method, device and system in virtual reality scene
JP2019114078A (en) Information processing device, information processing method and program
WO2019150880A1 (en) Information processing device, information processing method, and program
KR20160049687A (en) Terminal and operating method thereof
WO2019142621A1 (en) Information processing device, information processing method, and program
JP2020154569A (en) Display device, display control method, and display system
WO2022235250A1 (en) Handheld controller with thumb pressure sensing
CN117063142A (en) System and method for adaptive input thresholding
US20240225545A9 (en) Electrode placement calibration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant