
CN109901290B - Method and device for determining gazing area and wearable device - Google Patents


Info

Publication number
CN109901290B
CN109901290B
Authority
CN
China
Prior art keywords
target
eye
determining
area
display screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910333506.7A
Other languages
Chinese (zh)
Other versions
CN109901290A (en)
Inventor
李文宇
苗京花
孙玉坤
王雪丰
彭金豹
李治富
赵斌
李茜
范清文
索健文
刘亚丽
栗可
陈丽莉
张浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd, Beijing BOE Optoelectronics Technology Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN201910333506.7A priority Critical patent/CN109901290B/en
Publication of CN109901290A publication Critical patent/CN109901290A/en
Priority to PCT/CN2020/080961 priority patent/WO2020215960A1/en
Application granted granted Critical
Publication of CN109901290B publication Critical patent/CN109901290B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method and a device for determining a gazing area, and a wearable device, and belongs to the field of electronic technology application. In the method, a target virtual area is determined from the gaze point of a first target eye on a first display screen and the field angle of the first target eye, and this target virtual area is taken as the area within the visible range of a second target eye, so that the visible range of the second target eye is determined. A first virtual image currently seen by the first target eye and a second virtual image currently seen by the second target eye can then be determined, and from them a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen. The first gaze area and the second gaze area can therefore coincide accurately, which solves the problem of the poor display effect of images in wearable devices in the related art and effectively improves the display effect of images in the wearable device.

Description

Method and device for determining gazing area and wearable device
Technical Field
The invention relates to the field of electronic technology application, and in particular to a method and a device for determining a gazing area, and a wearable device.
Background
Virtual Reality (VR) technology has been favored by the market in recent years. VR technology constructs a three-dimensional environment (i.e., a virtual scene) through which a user is provided with a sense of immersion.
At present, users demand ever higher definition from the images used to present this three-dimensional environment. To avoid the transmission pressure caused by high-definition images, a wearable device using VR technology can deliberately present, as a high-definition image, only the part of the image displayed on its display screen that the user is watching, and present the other parts as non-high-definition images. The related art provides a method for determining a gazing area, which can be used to determine the partial image that the user is watching.
However, because the same object occupies different positions in the visual fields of the left and right eyes, the left-eye gaze area and the right-eye gaze area determined separately in the related art can hardly coincide completely, and the left-eye and right-eye high-definition images determined from those gaze areas therefore also fail to coincide completely, which affects the display effect of images in wearable devices.
Disclosure of Invention
The embodiments of the invention provide a method and a device for determining a gazing area, and a wearable device, which can solve the technical problem of the poor image display effect in wearable devices in the related art. The technical solutions are as follows:
in a first aspect, a method for determining a gazing area is provided, where the method is applied to a wearable device, the wearable device includes a first display component and a second display component, the first display component includes a first display screen and a first lens located at a light exit side of the first display screen, the second display component includes a second display screen and a second lens located at a light exit side of the second display screen, and the method includes:
acquiring a current fixation point of a first target eye on the first display screen, wherein the first target eye is a left eye or a right eye;
determining a target virtual area according to the fixation point and the field angle of the first target eye, wherein the target virtual area is an area which is currently located in the visual range of the first target eye in the three-dimensional environment presented by the wearable device;
determining a first target virtual image according to the gaze point and the field angle of the first target eye, wherein the first target virtual image is a virtual image which is positioned in the visible range of the first target eye and is formed by the current image displayed by the first display screen through the first lens;
determining the target virtual area as an area currently located within a visual range of the second target eye in the three-dimensional environment presented by the wearable device, wherein the second target eye is an eye other than the first target eye in the left eye and the right eye;
determining a second target virtual image according to the target virtual area and the position of the second target eye, wherein the second target virtual image is a virtual image which is positioned in a visible range of the second target eye and is formed by the image currently displayed by the second display screen through the second lens;
according to the first target virtual image and the second target virtual image, a first watching area of the first target eye in the image displayed by the first display screen and a second watching area of the second target eye in the image displayed by the second display screen are determined.
Optionally, the determining a target virtual area according to the gaze point and the field angle of the first target eye includes:
determining the current visual range of the first target eye according to the fixation point and the field angle of the first target eye;
and determining a region in the three-dimensional environment, which is located in the current visible range of the first target eye, as the target virtual region.
Optionally, the determining a second target virtual image according to the target virtual area and the position of the second target eye includes:
determining the current visual range of the second target eye according to the target virtual area and the position of the second target eye;
and determining a part of the second virtual image, which is positioned in the current visible range of the second target eye, as the second target virtual image.
Optionally, the determining a first target virtual image according to the gaze point and the field angle of the first target eye includes:
determining the current visual range of the first target eye according to the position of the first target eye, the fixation point and the field angle of the first target eye;
and determining a part of the first virtual image, which is positioned in the current visible range of the first target eye, as the first target virtual image.
Optionally, the determining, according to the first target virtual image and the second target virtual image, a first gaze region of the first target eye in the image displayed on the first display screen and a second gaze region of the second target eye in the image displayed on the second display screen includes:
acquiring a first corresponding area of the first target virtual image in an image displayed by the first display screen and a second corresponding area of the second target virtual image in an image displayed by the second display screen;
determining the first corresponding region as the first gaze region;
determining the second corresponding region as the second gaze region.
In a second aspect, a device for determining a gazing area is provided, where the device is applied to a wearable device, where the wearable device includes a first display component and a second display component, the first display component includes a first display screen and a first lens located at a light exit side of the first display screen, the second display component includes a second display screen and a second lens located at a light exit side of the second display screen, and the device includes:
the acquisition module is used for acquiring a fixation point of a first target eye on the first display screen currently, wherein the first target eye is a left eye or a right eye;
a first determining module, configured to determine a target virtual area according to the gaze point and a field angle of the first target eye, where the target virtual area is an area currently located within a visible range of the first target eye in a three-dimensional environment presented by the wearable device;
a second determining module, configured to determine a first target virtual image according to the gaze point and a field angle of the first target eye, where the first target virtual image is a virtual image that is located within a visible range of the first target eye and is formed by the first lens through an image displayed by the first display screen at present;
a third determining module, configured to determine the target virtual area as an area currently located within a visible range of the second target eye in the three-dimensional environment presented by the wearable device, where the second target eye is an eye other than the first target eye in the left eye and the right eye;
a fourth determining module, configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is a virtual image that is located within a visible range of the second target eye and is formed by the image currently displayed by the second display screen through the second lens;
a fifth determining module, configured to determine, according to the first target virtual image and the second target virtual image, a first gazing area of the first target eye in the image displayed on the first display screen, and a second gazing area of the second target eye in the image displayed on the second display screen.
Optionally, the first determining module is configured to:
determining the current visual range of the first target eye according to the fixation point and the field angle of the first target eye;
and determining a region in the three-dimensional environment, which is located in the current visible range of the first target eye, as the target virtual region.
Optionally, the fourth determining module is configured to:
determining the current visual range of the second target eye according to the target virtual area and the position of the second target eye;
and determining a part of the second virtual image, which is positioned in the current visible range of the second target eye, as the second target virtual image.
Optionally, the second determining module is configured to:
determining the current visual range of the first target eye according to the position of the first target eye, the fixation point and the field angle of the first target eye;
and determining a part of the first virtual image, which is positioned in the current visible range of the first target eye, as the first target virtual image.
In a third aspect, a wearable device is provided, the wearable device comprising: the device comprises a determination device of a watching area, an image acquisition component, a first display component and a second display component;
the first display assembly comprises a first display screen and a first lens positioned on the light emergent side of the first display screen, and the second display assembly comprises a second display screen and a second lens positioned on the light emergent side of the second display screen;
the means for determining the region of gaze is as described in the second aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the method comprises the steps of determining a target virtual area through a fixation point of a first target eye on a first display screen at present and a field angle of the first target eye, determining the target virtual area as an area in a visual range of a second target eye, determining the visual range of the second target eye, determining a first virtual image seen by the first target eye at present and a second virtual image seen by the second target eye at present, and determining a first fixation area of the first target eye in an image displayed on the first display screen and a second fixation area of the second target eye in an image displayed on the second display screen. Because the gaze area of two target eyes on the display screen is confirmed by same target virtual area, therefore first region of gazing and the region is watched to the second can accurate coincidence, has solved the problem that the region is watched to the left and right eyes among the correlation technique and is difficult to the complete coincidence and leads to the display effect of image relatively poor among the wearable equipment, has effectively improved the display effect of image among the wearable equipment, has promoted user's impression and has experienced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of high-definition images of left and right eyes determined by a method for determining a gazing area in the related art;
fig. 2 is a schematic structural diagram of a wearable device provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a human eye viewing an image on a display screen through a lens according to an embodiment of the invention;
fig. 4 is a flowchart of a method for determining a gazing area according to an embodiment of the present invention;
fig. 5 is a flowchart of another method for determining a gaze region according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for determining a target virtual area according to a gaze point and a field angle of a first target eye according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a current viewing range of a first target eye, in accordance with embodiments of the present invention;
fig. 8 is a flowchart of a method for determining a virtual first target image according to a gaze point and a field angle of a first target eye according to an embodiment of the present invention;
fig. 9 is a flowchart of a method for determining a second virtual target image according to a target virtual area and a position of a second target eye according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a method for determining a gaze region provided by an embodiment of the invention;
fig. 11 is a block diagram of an apparatus for determining gazing area according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a wearable device according to an embodiment of the present invention.
Detailed Description
As the objects, technical solutions and advantages of the present invention will become more apparent, embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
To assist the reader, before the embodiments of the present invention are described in detail, the terminology used in the embodiments of the present invention is first explained:
VR technology is a technology that uses a wearable device to close off human vision, and even hearing, from the outside world, guiding the user to feel present in a virtual three-dimensional environment. Its display principle is that the display screens corresponding to the left and right eyes respectively display images for the left and right eyes to watch; because the two eyes have parallax, the brain produces a stereoscopic impression close to reality after acquiring these slightly different images. VR technology is typically implemented by a VR system, which may include a wearable device and a VR host. The VR host may be integrated in the wearable device or be an external device connected to the wearable device by wire or wirelessly. The VR host renders images and sends the rendered images to the wearable device, and the wearable device receives and displays them.
Eye tracking, also known as eyeball tracking, is a technology that collects images of the human eye, analyzes the eyeball movement information in them, and determines the eye's current fixation point on a display screen based on that information. Further, with eye-tracking technology, the eye's current gazing area on the display screen can be determined from the determined fixation point.
SmartView is a technical scheme that combines VR technology and eye-tracking technology to realize high-definition VR. The scheme is as follows: first, the user's gazing area on the display screen is accurately tracked through eye tracking; then only the gazing area is rendered in high definition and the other areas are rendered in non-high definition; meanwhile, an Integrated Circuit (IC) can process the rendered non-high-definition images (also called low-definition or low-resolution images) into high-resolution images to be displayed on the display screen. The display screen may be a Liquid Crystal Display (LCD) screen or an Organic Light-Emitting Diode (OLED) display screen.
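As a rough illustration of this rendering scheme (not the actual IC processing pipeline mentioned above), the following Python sketch composites a high-definition crop of the gazing area over an upscaled non-high-definition background; the resolutions, crop size, and all function names are assumptions for illustration only.

```python
import numpy as np

def composite_foveated(low_res, high_res_crop, gaze_xy, full_shape):
    """Paste a high-definition gaze-region crop onto an upscaled
    non-high-definition background (illustrative only, nearest upscaling)."""
    fh, fw = full_shape
    lh, lw = low_res.shape[:2]
    # nearest-neighbour upscale of the low-definition frame to the full screen size
    ys = np.arange(fh) * lh // fh
    xs = np.arange(fw) * lw // fw
    frame = low_res[ys][:, xs].copy()
    # overwrite the gazing area with the high-definition crop, clamped to the screen
    ch, cw = high_res_crop.shape[:2]
    x0 = int(np.clip(gaze_xy[0] - cw // 2, 0, fw - cw))
    y0 = int(np.clip(gaze_xy[1] - ch // 2, 0, fh - ch))
    frame[y0:y0 + ch, x0:x0 + cw] = high_res_crop
    return frame

# example: 1080 x 1200 output, 270 x 300 low-definition background, 400 x 400 HD crop (assumed sizes)
out = composite_foveated(
    np.zeros((300, 270, 3), np.uint8),
    np.ones((400, 400, 3), np.uint8) * 255,
    gaze_xy=(540, 600),
    full_shape=(1200, 1080),
)
```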
Unity, also known as the Unity engine, is a multi-platform, comprehensive game development tool developed by Unity Technologies and a fully integrated professional game engine. Unity can be used to develop VR applications.
It should be noted that whether a user can view a high-definition image through a screen is mainly determined by two factors. One is the physical resolution of the screen itself, that is, the number of pixels on the screen; at present, the monocular resolution of mainstream wearable device screens on the market is 1080 × 1200. The other is the definition of the image to be displayed. The user can view a high-definition image through the screen only when both the resolution of the screen and the definition of the image to be displayed are high. Here, higher definition means that the VR host must perform more refined rendering of the image used to present the three-dimensional environment in the wearable device.
That is, if the user wants to observe a higher-definition image, both the resolution of the screen and the definition of the image must be increased, and increasing the definition of the image obviously increases the rendering pressure on the VR host and the bandwidth required for image transmission between the VR host and the wearable device. This has been a bottleneck for making a screen with a monocular resolution of 4320 × 4320, or an even higher resolution, display a higher-definition image. The introduction of the SmartView technology solves, to a certain extent, the hardware-transmission and software-rendering bottlenecks of the monocular high-definition image. Combined with eye-tracking technology, the SmartView technology can guarantee the high-definition requirement of the gazing area while reducing the rendering pressure and the image transmission bandwidth.
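As a rough, order-of-magnitude illustration of why transmission becomes a bottleneck (the 24-bit colour depth and 90 Hz refresh rate are assumed here, not taken from the source): 4320 × 4320 pixels × 24 bit/pixel × 90 Hz ≈ 40 Gbit/s per eye uncompressed, versus roughly 1080 × 1200 × 24 × 90 ≈ 2.8 Gbit/s for a 1080 × 1200 screen, so transmitting and rendering full-frame monocular high-definition images quickly becomes impractical without a scheme such as SmartView.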
In the related art, in order to ensure that the gaze point coordinates of both eyes can be determined accurately, the eye-tracking scheme requires two cameras in the wearable device. The two cameras respectively collect eye images of the left eye and the right eye (these eye images are also called gaze-point images and the like), and the VR host calculates the gaze point coordinates from the eye images.
However, the two cameras required by this scheme greatly increase the weight and cost of the wearable device in the VR system, which is unfavorable to the wide adoption of the VR system.
Moreover, this scheme does not take human visual characteristics into account: because the left eye and the right eye occupy different positions in space, the two eyes view objects from different visual angles, so the same object appears at different positions in the two eyes' visual fields, and the images seen by the two eyes do not actually coincide completely. Therefore, if the gaze point coordinates of the left and right eyes are calculated from the eye images of the left and right eyes, the positions of those coordinates on the display screens do not actually coincide, and gaze areas determined from them can hardly coincide completely.
If the SmartView technology is used to render these non-coincident left-eye and right-eye gaze areas in high definition, the resulting left-eye and right-eye high-definition images can hardly coincide completely. As shown in fig. 1, the figure shows a left-eye high-definition image 11 and a right-eye high-definition image 12 obtained by high-definition rendering of the gaze areas of the left and right eyes, respectively. As can be seen from fig. 1, the left-eye high-definition image 11 and the right-eye high-definition image 12 overlap only in a middle partial region. The visual perception presented to the user is that, within the field of view of the left and right eyes, the user can see a high-definition image area 13, a high-definition image area 14 and a high-definition image area 15. The high-definition image area 13 is visible to both eyes, the high-definition image area 14 is visible only to the left eye, and the high-definition image area 15 is visible only to the right eye. Because the high-definition image areas 14 and 15 can each be seen by only one eye, they affect the user's viewing experience when both eyes watch the display screens at the same time, and a relatively obvious boundary line appears between the image areas, further affecting the viewing experience.
The embodiment of the invention provides a method for determining a gazing area, which can ensure that the determined gazing areas of the left eye and the right eye are overlapped, so that a user can watch completely overlapped high-definition images, and the user experience is effectively improved. Before describing the method, a wearable device to which the method is applied will be described first.
The embodiment of the invention provides wearable equipment. As shown in fig. 2, the wearable device 20 may include a first display component 21 and a second display component 22, the first display component 21 includes a first display screen 211 and a first lens 212 located at a light emitting side of the first display screen 211, and the second display component 22 includes a second display screen 221 and a second lens 222 located at a light emitting side of the second display screen 221. Wherein the lenses (i.e., the first lens 212 and the second lens 222) are used to enlarge the images displayed on the corresponding display screens (i.e., the first display screen 211 and the second display screen 221) to provide a more realistic immersive sensation for the user.
Taking the first display module 21 as an example, as shown in fig. 3, the human eye observes a first virtual image 213 corresponding to the image currently displayed on the first display screen 211 through the first lens 212, where the first virtual image 213 is usually an enlarged image of the image currently displayed on the first display screen 211.
In addition, the wearable device may further include an image acquisition component, the image acquisition component may be an eye-tracking camera, the eye-tracking camera is integrated around at least one of the first display screen and the second display screen of the wearable device, and is used for acquiring an eye image corresponding to a certain display screen in real time and sending the eye image to the VR host, and the VR host processes the eye image to determine a gaze point coordinate of the eye on the display screen. The gazing point coordinates are acquired by the gazing area determining device.
The wearable device further comprises a device for determining the gaze area, which can be incorporated in the wearable device by means of software or hardware, or in the VR host, and which can be used to perform the method for determining the gaze area described below.
Fig. 4 is a flowchart illustrating a method for determining a gazing area according to an embodiment of the present invention, where the method may include the following steps:
step 201, a fixation point of a first target eye on a first display screen is obtained, wherein the first target eye is a left eye or a right eye.
Step 202, determining a target virtual area according to the fixation point and the field angle of the first target eye, where the target virtual area is an area currently located within the visible range of the first target eye in the three-dimensional environment presented by the wearable device.
Step 203, determining a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part, located within the visible range of the first target eye, of a first virtual image formed through the first lens by the image currently displayed on the first display screen.
And 204, determining the target virtual area as an area currently located in a visual range of a second target eye in the three-dimensional environment presented by the wearable device, wherein the second target eye is an eye other than the first target eye in the left eye and the right eye.
Step 205, determining a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part, located within the visible range of the second target eye, of a second virtual image formed through the second lens by the image currently displayed on the second display screen.
And step 206, determining a first gazing area of the first target eye in the image displayed on the first display screen and a second gazing area of the second target eye in the image displayed on the second display screen according to the first target virtual image and the second target virtual image.
In summary, in the method for determining a gazing area according to the embodiment of the present invention, a target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and this target virtual area is taken as the area within the visible range of the second target eye, so that the visible range of the second target eye is determined. The first virtual image currently seen by the first target eye and the second virtual image currently seen by the second target eye can then be determined, and from them a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen. Because the gaze areas of the two target eyes on their display screens are derived from the same target virtual area, the first gaze area and the second gaze area can coincide accurately. This solves the problem in the related art that the left-eye and right-eye gaze areas can hardly coincide completely, which degrades the display effect of images in the wearable device, effectively improves the display effect of images in the wearable device, and improves the user's viewing experience.
Fig. 5 is a flowchart illustrating another method for determining a gazing area according to an embodiment of the present invention, which may also be performed by the gazing area determining apparatus, and is applied to a wearable device, and the wearable device may refer to the wearable device illustrated in fig. 2. The method may comprise the steps of:
step 301, a gaze point of a first target eye currently on a first display screen is obtained, where the first target eye is a left eye or a right eye.
In the embodiment of the invention, eye-movement tracking cameras can be arranged around the first display screen of the wearable device, the eye-movement tracking cameras can acquire eye images of corresponding eyes in real time, and the VR host determines the fixation point coordinates of the eyes on the first display screen according to the eye images. The gazing point coordinates are acquired by the gazing area determining device.
Step 302, determining a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is an area currently located within the visible range of the first target eye in the three-dimensional environment presented by the wearable device.
Alternatively, as shown in fig. 6, the process of determining the target virtual area according to the gazing point and the field angle of the first target eye may include:
and step 3021, determining the current visual range of the first target eye according to the fixation point and the field angle of the first target eye.
The field angle of the first target eye may be composed of a horizontal field angle and a vertical field angle, and an area located within the horizontal field angle and the vertical field angle of the human eye is a visible range of the first target eye. The angle of view that can be actually achieved by the human eye is limited, and generally, the horizontal angle of view of the human eye is 188 degrees at the maximum and the vertical angle of view of the human eye is 150 degrees at the maximum. In general, the field angle of the human eye is kept constant regardless of the rotation of the human eye, and the current visual range of the human eye can be determined according to the current fixation point of the human eye and the horizontal field angle and the vertical field angle of the first target eye. Of course, the angles of view of the left and right eyes may be different, and in the case of different individuals, the angles of view of the human eyes may also be different, which is not limited in the embodiment of the present invention.
Fig. 7 schematically shows the current fixation point G of the first target eye O, the horizontal field angle a and the vertical field angle b of the first target eye, and the current visible range of the first target eye (i.e., the spatial region surrounded by the point O, the point P, the point Q, the point M, and the point N).
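A minimal geometric sketch of step 3021 follows: given the eye position O, the gaze point G, and the horizontal and vertical field angles, the boundary ray directions of the current visible range (the pyramid enclosed by O, P, Q, M and N in fig. 7) can be obtained by tilting the gaze direction by half of each field angle. The coordinate conventions, the numerical field angles in the example, and all names are assumptions, not values from the patent.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def frustum_corner_dirs(eye, gaze_point, h_fov_deg, v_fov_deg, world_up=(0.0, 1.0, 0.0)):
    """Boundary ray directions of the eye's current visible range (the pyramid
    O-PQMN of fig. 7), built from the gaze direction and half of the
    horizontal/vertical field angles."""
    forward = normalize(np.asarray(gaze_point, float) - np.asarray(eye, float))
    right = normalize(np.cross(forward, np.asarray(world_up, float)))
    up = np.cross(right, forward)
    th = np.tan(np.radians(h_fov_deg) / 2.0)   # half-angle offsets on a plane at unit depth
    tv = np.tan(np.radians(v_fov_deg) / 2.0)
    return [normalize(forward + sh * th * right + sv * tv * up)
            for sh in (-1.0, 1.0) for sv in (-1.0, 1.0)]

# example: eye at the origin gazing at a point 2 m ahead, 100° x 90° field angles (assumed values)
corners = frustum_corner_dirs(eye=(0, 0, 0), gaze_point=(0, 0, 2), h_fov_deg=100, v_fov_deg=90)
```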
And step 3022, determining a region in the three-dimensional environment within the current visual range of the first target eye as a target virtual region.
In order to provide a good sense of immersion for the user, in practical applications, the scene range of the three-dimensional environment presented in the wearable device should be larger than the visible range of the human eye. Therefore, in practical application, the region in the three-dimensional environment, which is located within the current visible range of the first target eye, is determined as the target virtual region.
Of course, if the scene range of the three-dimensional environment presented in the wearable device is smaller than the visual range of the human eye, the region located within the scene range of the three-dimensional environment in the current visual range of the first target eye is determined as the target virtual region.
Optionally, in step 3022, the process of determining the target virtual area may include the following steps:
step a1, emitting at least two rays from the point where the first target eye is located, where the at least two rays are emitted along the boundary of the field angle of the first target eye.
The Unity engine may emit at least two rays (virtual rays) from the point where the first target eye is located, that is, draw at least two rays starting from that point, and the at least two rays may respectively travel along the boundary of the field angle of the first target eye.
In the wearable device provided by the embodiment of the present invention, a first virtual camera and a second virtual camera are respectively disposed at a point where the position of the first target eye is located and a point where the position of the second target eye is located. Pictures seen by the left eye and the right eye of the user through the first display screen and the second display screen in the wearable device come from pictures shot by the first virtual camera and the second virtual camera respectively.
Since the point of the position of the first target eye is the point of the position of the first virtual camera in the wearable device, in practical applications, the position of the target eye can be characterized by the position of the virtual camera, and the Unity engine can emit at least two rays from the position of the first virtual camera.
Step A2, at least two calibration points where at least two rays contact the virtual area are obtained.
In the extending direction of the at least two rays, the at least two rays contact with a three-dimensional environment presented by the wearable device, namely a virtual area, to form at least two calibration points. In the Unity engine, when a ray having a physical property collides with a collider of the surface of the virtual object, the Unity engine can recognize the coordinates of the collision point, i.e., the coordinates of the surface of the virtual object.
Step a3, determining an area surrounded by the at least two calibration points in the virtual area as a target virtual area.
In the embodiment of the present invention, a geometric figure of the target virtual region may be predetermined, the at least two calibration points are connected according to this geometric figure, and the region enclosed by the connecting lines is determined as the target virtual region.
Of course, in the embodiment of the present invention, object identification may further be performed on the region enclosed by the connecting lines to extract the valid objects in the enclosed region, ignore the invalid objects in it (for example, background such as the sky), and determine the area where the valid objects are located as the target virtual area.
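In the patent, steps A1-A3 are carried out with Unity rays that collide with the colliders of the virtual scene; the Python sketch below stands in for that mechanism by intersecting the boundary rays with a single plane that represents the scene surface, purely as an assumption-laden illustration of how the calibration points and the target virtual area can be obtained.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Point where a ray hits a plane; None if the ray is parallel to the plane
    or the hit lies behind the origin."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    plane_point, plane_normal = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    denom = direction.dot(plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = (plane_point - origin).dot(plane_normal) / denom
    return origin + t * direction if t > 0 else None

def calibration_points(eye, boundary_dirs, scene_plane_point, scene_plane_normal):
    """Steps A1-A2 in sketch form: cast the field-angle boundary rays from the eye
    and keep the points where they meet the (stand-in) scene surface; the region
    enclosed by these calibration points is the target virtual area (step A3)."""
    pts = [ray_plane_intersection(eye, d, scene_plane_point, scene_plane_normal)
           for d in boundary_dirs]
    return [p for p in pts if p is not None]

# example: scene surface approximated by a plane 10 m in front of the eye (assumed)
eye = np.array([0.0, 0.0, 0.0])
dirs = [np.array([x, y, 1.0]) / np.linalg.norm([x, y, 1.0])
        for x in (-0.5, 0.5) for y in (-0.4, 0.4)]
marks = calibration_points(eye, dirs, scene_plane_point=(0, 0, 10), scene_plane_normal=(0, 0, 1))
```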
Step 303, determining a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part, located within the visible range of the first target eye, of a first virtual image formed through the first lens by the image currently displayed on the first display screen.
The left eye and the right eye see the first virtual image and the second virtual image through the first lens and the second lens, respectively. When the first virtual image and the second virtual image are presented in front of the left and right eyes at the same time, the two eyes acquire them simultaneously and a three-dimensional image with depth is formed in the brain. In the embodiment of the present invention, in order to determine the first target virtual image, the first virtual image seen by the first target eye and the second virtual image seen by the second target eye need to be re-identified.
Of course, the re-identified first and second virtual images may be transparent in order not to affect the display of the image in the wearable device.
Alternatively, as shown in fig. 8, the process of determining the first target virtual image according to the gazing point and the field angle of the first target eye may include:
step 3031, determining the current visual range of the first target eye according to the position, the fixation point and the field angle of the first target eye.
The process of step 3031 may refer to the related description of step 3021, which is not described herein again in the embodiments of the present invention.
Step 3032, determining the part of the first virtual image, which is located in the current visible range of the first target eye, as the first target virtual image.
Optionally, in step 3032, the process of determining the first target virtual image may include the following steps:
and step B1, emitting at least two rays from the position of the first target eye, wherein the at least two rays are respectively emitted along the boundary of the field angle of the first target eye.
The process of step B1 can refer to the description related to step a1, and the embodiment of the present invention is not described herein again.
The purpose of step B1 is to characterize the current visible range of the first target eye by means of rays, so as to accurately determine the first target virtual image located in the first virtual image.
And step B2, acquiring at least two first contact points of the at least two rays contacting the first virtual image respectively.
In the extending direction of the at least two rays, the at least two rays contact the first virtual image to form at least two first contact points.
And step B3, determining the area enclosed by the at least two first contact points as a first target virtual image.
Alternatively, similarly to step a3 described above, in the embodiment of the present invention, the first target virtual image may be determined according to a predetermined geometric figure, or object identification may be performed on the enclosed area, and the identified object may be determined as the first target virtual image.
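A small sketch of steps B1-B2, under the assumption that the first virtual image lies in a plane at a known depth in front of the eye (eye-space coordinates): each field-angle boundary ray is scaled until it reaches that plane, and the resulting first contact points enclose the first target virtual image. The plane depth and the example ray directions are assumptions.

```python
import numpy as np

def contacts_on_virtual_image(eye, boundary_dirs, z_img):
    """Steps B1-B2 in sketch form: scale each field-angle boundary ray so that it
    reaches the plane z = z_img where the first virtual image is assumed to lie;
    the region enclosed by the returned first contact points is the first target
    virtual image (step B3)."""
    eye = np.asarray(eye, float)
    pts = []
    for d in boundary_dirs:
        d = np.asarray(d, float)
        if d[2] <= 1e-9:           # ray never reaches the virtual-image plane
            continue
        t = (z_img - eye[2]) / d[2]
        pts.append(eye + t * d)
    return pts

# example: first virtual image assumed to be formed 1.5 m in front of the eye
pts = contacts_on_virtual_image((0, 0, 0),
                                [(-0.3, -0.2, 1.0), (-0.3, 0.2, 1.0),
                                 (0.3, -0.2, 1.0), (0.3, 0.2, 1.0)],
                                z_img=1.5)
```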
And step 304, determining the target virtual area as an area which is currently located in the range of the field angle of a second target eye in the three-dimensional environment presented by the wearable device, wherein the second target eye is an eye other than the first target eye in the left eye and the right eye.
By determining the target virtual area determined from the gaze point of the first target eye and the field angle of the first target eye as an area within the range of the field angle of the second target eye, it can be ensured that the determined gaze areas of both eyes overlap.
And 305, determining a second target virtual image according to the target virtual area and the position of the second target eye, wherein the second target virtual image is a virtual image which is positioned in the range of the field angle of the second target eye and is formed by the image currently displayed by the second display screen through the second lens.
Alternatively, as shown in fig. 9, the process of determining the second target virtual image according to the target virtual area and the position of the second target eye may include:
step 3051, determining a visible range of the second target eye currently in the three-dimensional environment according to the target virtual area and the position of the second target eye.
At least two rays are emitted from the point where the second target eye is located, and the at least two rays are respectively connected with the at least two calibration points which enclose the target virtual area. The spatial area enclosed by the point where the second target eye is located and the at least two calibration points is the current visible range of the second target eye in the three-dimensional environment. The current visual range of the second target eye in the three-dimensional environment is a partial spatial region within the current visual range of the second target eye.
Similar to step 3022, the Unity engine may control the point of the second target eye to emit at least two rays, i.e., draw at least two rays starting from the point of the second target eye and ending with at least two calibration points. The point where the second target eye is located is the point where the second virtual camera is located in the wearable device.
And step 3052, determining a part of the second virtual image, which is positioned in the visible range of the second target eye in the three-dimensional environment, as the second target virtual image.
Optionally, the process of determining the second target virtual image in step 3052 may include the following steps:
and step C1, acquiring at least two second contact points of the at least two rays and the second virtual image respectively. In the extending direction of the at least two rays, the at least two rays contact with the second imaginary line to form at least two second contact points.
And step C2, determining the area enclosed by the at least two second contact points as a second target virtual image.
Alternatively, similarly to step a3 described above, in the embodiment of the present invention, the second target virtual image may be determined according to a predetermined geometric figure, or object identification may be performed on the enclosed area, and the identified object may be determined as the second target virtual image.
In order to ensure the consistency of the objects observed by the left and right eyes, when object recognition is performed on the enclosed regions in step B3 and step C2, the same algorithm and the same algorithm parameters should be used so that the recognized objects are consistent.
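Under the same plane assumption as above, steps 3051-3052 and C1-C2 can be sketched as follows: rays from the second target eye towards the calibration points that bound the target virtual area are extended until they cross the plane assumed to contain the second virtual image, yielding the second contact points. The inter-pupillary offset and the plane depth in the example are assumptions.

```python
import numpy as np

def second_contacts(second_eye, calibration_pts, z_img2):
    """Steps 3051-3052/C1-C2 in sketch form: connect the second eye to the
    calibration points enclosing the target virtual area and keep the points
    where those rays cross the plane z = z_img2 of the second virtual image;
    the region they enclose is the second target virtual image (step C2)."""
    second_eye = np.asarray(second_eye, float)
    contacts = []
    for p in calibration_pts:
        d = np.asarray(p, float) - second_eye      # ray towards the calibration point
        if d[2] <= 1e-9:
            continue
        t = (z_img2 - second_eye[2]) / d[2]
        contacts.append(second_eye + t * d)
    return contacts

# example: inter-pupillary offset of 64 mm and virtual-image plane at 1.5 m (both assumed)
pts2 = second_contacts((0.064, 0, 0), [(-2.0, -1.5, 10.0), (2.0, 1.5, 10.0)], z_img2=1.5)
```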
Step 306, acquiring a first corresponding area of the first target virtual image in the image displayed by the first display screen, and acquiring a second corresponding area of the second target virtual image in the image displayed by the second display screen.
The at least two first contact points and the at least two second contact points are converted into at least two first image points in the image displayed on the first display screen and at least two second image points in the image displayed on the second display screen, respectively.
Due to the physical characteristics of the lens, when a user views a target image through the lens, that is, when the user views the target virtual image presented by the target image, the target virtual image is distorted compared with the target image. To prevent the user from viewing a distorted image, the target image needs to be subjected to anti-distortion processing in advance by means of an anti-distortion mesh. The correspondence between virtual image coordinates and image coordinates is recorded in the anti-distortion mesh.
In the embodiment of the present invention, the at least two first contact points and the at least two second contact points are virtual image coordinates located in virtual images, and the at least two first image points and the at least two second image points are image coordinates in an image displayed in a screen (the coordinates of the screen correspond to the image coordinates displayed in the screen), so that the at least two first contact points and the at least two second contact points can be converted into at least two first image points in the image displayed in the first display screen and at least two second image points in the image displayed in the second display screen based on the correspondence relationship between the virtual image coordinates and the image coordinates in the anti-distortion mesh.
A first corresponding region is determined from the at least two first image points: optionally, the region enclosed by the at least two first image points is determined as the first corresponding region, or object identification may be performed on the enclosed region and the identified object determined as the first corresponding region. The second corresponding region is determined from the at least two second image points in the same way: the region enclosed by the at least two second image points is determined as the second corresponding region, or object identification may be performed on the enclosed region and the identified object determined as the second corresponding region.
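A minimal sketch of the coordinate conversion described above, assuming the anti-distortion mesh is available as two matching lists of sampled virtual-image coordinates and screen-image coordinates; a nearest-vertex lookup stands in for the interpolation a real mesh would use, and all numbers in the example are made up.

```python
import numpy as np

def virtual_to_image_coords(contact_xy, mesh_virtual_xy, mesh_image_xy):
    """Convert the virtual-image coordinates of a contact point into screen-image
    coordinates using a sampled anti-distortion mesh (simple nearest-vertex lookup;
    a real mesh would interpolate between neighbouring vertices)."""
    mesh_virtual_xy = np.asarray(mesh_virtual_xy, float)   # (N, 2) virtual-image coords
    mesh_image_xy = np.asarray(mesh_image_xy, float)       # (N, 2) matching screen coords
    d2 = np.sum((mesh_virtual_xy - np.asarray(contact_xy, float)) ** 2, axis=1)
    return mesh_image_xy[np.argmin(d2)]

# example with a made-up 2 x 2 mesh mapping virtual-image corners to a 1080 x 1200 screen
virt = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
img = [(0, 0), (0, 1200), (1080, 0), (1080, 1200)]
pixel = virtual_to_image_coords((0.9, -0.8), virt, img)   # -> closest vertex, (1080, 0)
```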
Step 307, the first corresponding area is determined as a first gazing area.
And 308, determining the second corresponding area as a second gazing area.
To sum up, in the method for determining a gazing area provided in the embodiment of the present application, a target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and this target virtual area is taken as the area within the visible range of the second target eye, so that the visible range of the second target eye is determined. The first virtual image currently seen by the first target eye and the second virtual image currently seen by the second target eye can then be determined, and from them a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen. Because the gaze areas of the two target eyes on their display screens are derived from the same target virtual area, the first gaze area and the second gaze area can coincide accurately. This solves the problem in the related art that the left-eye and right-eye gaze areas can hardly coincide completely, which leads to a relatively poor display effect of images in the wearable device, effectively improves the display effect of images in the wearable device, and improves the user's viewing experience.
Further, in the method for determining a gazing area described in the embodiment of the present invention, the acquisition of the current gaze point of the first target eye on the display screen in step 301 can be completed by a single eye-tracking camera, so the wearable device to which the method provided in the embodiment of the present invention is applied may be provided with only one eye-tracking camera.
It should be noted that the sequence of the above steps may be adjusted according to actual needs, for example, step 307 and step 308 may be executed simultaneously or step 308 is executed first and then step 307 is executed, and for example, step 303 and step 304 may be executed simultaneously or step 304 is executed first and then step 303 is executed.
The above embodiment is further described below with reference to fig. 10. Taking the first target eye as the left eye as an example, the method for determining the gazing area comprises the following steps:
step S1, a gaze point S of the first target eye 213 currently on the first display screen 211 is acquired.
Step S2 determines the field angle α of the first target eye 213 from the gaze point S.
In the present embodiment, the description will be given taking an example in which the angle of view is a horizontal angle of view.
Step S3 is to emit two rays from the point where the first target eye 213 is located, the two rays being emitted along the boundary of the field angle α of the first target eye 213, respectively, to acquire the index point S1 and the index point S2 where the two rays contact the virtual area 23, and to determine the area surrounded by the two index points in the virtual area 23 as the target virtual area.
In the present embodiment, a description will be given taking an example in which the region between the connecting lines of the index points S1 and S2 represents a target virtual region.
Step S4 is to acquire a first contact point C' and a first contact point A' where the two rays emitted from the point where the first target eye 213 is located contact the first virtual image 214, and to determine the first target virtual image from the first contact point C' and the first contact point A'.
In this embodiment, the description takes as an example the case in which the region between the connecting lines of the first contact point C' and the first contact point A' represents the first target virtual image.
Step S5, the target virtual area is determined as an area currently located within the range of the field angle β of the second target eye 223 in the three-dimensional environment presented by the wearable device.
Step S6, two rays are emitted from the point where the second target eye 223 is located, the two rays being respectively connected to the calibration point S1 and the calibration point S2 that enclose the target virtual area; a second contact point D' and a second contact point B' where the two rays contact the second virtual image 224 are respectively acquired, and the second target virtual image is determined from the second contact point D' and the second contact point B'.
In this embodiment, the description takes as an example the case in which the region between the connecting lines of the second contact point D' and the second contact point B' represents the second target virtual image.
Step S7, the first contact point C' and the first contact point A', and the second contact point D' and the second contact point B', are converted into a first image point C and a first image point A in the image displayed on the first display screen, and a second image point D and a second image point B in the image displayed on the second display screen, respectively; a first gazing area is determined based on the first image point C and the first image point A, and a second gazing area is determined based on the second image point D and the second image point B.
It should be noted that, in an actual implementation, the first virtual image 214 and the second virtual image 224 coincide, but for convenience of explaining the method for determining the gazing area, the first virtual image 214 and the second virtual image 224 are not drawn in an overlapped state in fig. 10. In addition, the calibration points S1 and S2 representing the target virtual area, the gaze point S, and the like are illustrated schematically.
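The walkthrough above can be reproduced numerically with a two-dimensional (horizontal-only) sketch; every quantity below is an assumed, illustrative value rather than data from the patent.

```python
import numpy as np

ipd = 0.064                      # inter-pupillary distance in metres (assumed)
z_virtual = 1.5                  # depth of the first/second virtual images (assumed)
z_scene = 10.0                   # depth of the stand-in scene surface, i.e. virtual area 23 (assumed)
half_fov = np.radians(50.0)      # half of the horizontal field angle alpha (assumed)
gaze_angle = np.radians(5.0)     # direction of the gaze point S relative to straight ahead (assumed)

left_eye = np.array([0.0, 0.0])  # (x, z) of the first target eye
right_eye = np.array([ipd, 0.0]) # (x, z) of the second target eye

def point_at_depth(origin, angle, z):
    """Point on a ray, given its angle from the z-axis, at depth z."""
    return np.array([origin[0] + (z - origin[1]) * np.tan(angle), z])

def toward(origin, target, z):
    """Point where the ray from origin towards target reaches depth z."""
    t = (z - origin[1]) / (target[1] - origin[1])
    return origin + t * (target - origin)

# step S3: calibration points S1, S2 where the boundary rays of the left eye meet the scene surface
S1 = point_at_depth(left_eye, gaze_angle - half_fov, z_scene)
S2 = point_at_depth(left_eye, gaze_angle + half_fov, z_scene)

# step S4: first contact points C', A' where the same boundary rays cross the first virtual image
C_p = point_at_depth(left_eye, gaze_angle - half_fov, z_virtual)
A_p = point_at_depth(left_eye, gaze_angle + half_fov, z_virtual)

# steps S5-S6: second contact points D', B' where rays from the right eye to S1, S2
# cross the second virtual image
D_p = toward(right_eye, S1, z_virtual)
B_p = toward(right_eye, S2, z_virtual)

# step S7 would map C', A' and D', B' to the image points C, A and D, B through the
# anti-distortion mesh (see the lookup sketch above).
```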
Fig. 11 shows an apparatus 30 for determining a gazing area according to an embodiment of the present invention, where the apparatus 30 may be applied to a wearable device, and the wearable device may refer to the structure shown in fig. 2, where the apparatus 30 for determining a gazing area includes:
an obtaining module 301, configured to obtain a current gaze point of a first target eye on the first display screen, where the first target eye is a left eye or a right eye;
a first determining module 302, configured to determine a target virtual area according to the gaze point and a field angle of the first target eye, where the target virtual area is an area currently located within a visible range of the first target eye in a three-dimensional environment presented by the wearable device;
a second determining module 303, configured to determine the first target virtual image according to the gaze point and a field angle of the first target eye, where the first target virtual image is a virtual image that is located within a visible range of the first target eye and is formed by the first lens through an image displayed by the first display screen at present;
a third determining module 304, configured to determine the target virtual area as an area currently located within a visible range of the second target eye in the three-dimensional environment presented by the wearable device, where the second target eye is an eye other than the first target eye in the left eye and the right eye;
a fourth determining module 305, configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part, located within the visible range of the second target eye, of a second virtual image formed through the second lens by the image currently displayed on the second display screen;
a fifth determining module 306, configured to determine, according to the first target virtual image and the second target virtual image, a first gazing area of the first target eye in the image displayed on the first display screen, and a second gazing area of the second target eye in the image displayed on the second display screen.
To sum up, a target virtual area is determined from the current gaze point of the first target eye on the first display screen and the field angle of the first target eye, and this target virtual area is taken as the area within the visible range of the second target eye, so that the visible range of the second target eye is determined. The first virtual image currently seen by the first target eye and the second virtual image currently seen by the second target eye can then be determined, and from them a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen. Because the gaze areas of the two target eyes on their display screens are derived from the same target virtual area, the first gaze area and the second gaze area can coincide accurately. This solves the problem in the related art that the left-eye and right-eye gaze areas can hardly coincide completely, which leads to a relatively poor display effect of images in the wearable device, effectively improves the display effect of images in the wearable device, and improves the user's viewing experience.
Optionally, the first determining module 302 is configured to:
determining the current visible range of the first target eye according to the gaze point and the field angle of the first target eye; and
determining the region in the three-dimensional environment that is located within the current visible range of the first target eye as the target virtual area. A simplified geometric sketch of this step is given below.
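As a concrete illustration of this step, the membership test below treats the visible range as a circular cone whose apex is the eye position, whose axis runs from the eye through the gaze point, and whose half-angle is half the field angle; this cone model and the example numbers are assumptions chosen for illustration, not a geometry prescribed by the embodiment. Scene points that pass the test belong to the target virtual area.

```python
import numpy as np

def in_visible_range(eye_pos, gaze_point, field_angle_deg, point):
    # Cone axis: from the eye toward the current gaze point.
    axis = gaze_point - eye_pos
    axis = axis / np.linalg.norm(axis)
    to_point = point - eye_pos
    dist = np.linalg.norm(to_point)
    if dist == 0.0:
        return True
    # Inside the cone if the angle to the axis is at most half the field angle.
    cos_angle = np.dot(axis, to_point) / dist
    return cos_angle >= np.cos(np.radians(field_angle_deg) / 2.0)

# Example: eye at the origin gazing 2 m straight ahead with a 90-degree field angle.
eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, 2.0])
print(in_visible_range(eye, gaze, 90.0, np.array([0.5, 0.0, 2.0])))  # True
print(in_visible_range(eye, gaze, 90.0, np.array([3.0, 0.0, 1.0])))  # False
```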
Optionally, the fourth determining module 305 is configured to:
determining the current visible range of the second target eye according to the target virtual area and the position of the second target eye; and
determining the part of the second virtual image that is located within the current visible range of the second target eye as the second target virtual image. A short sketch of this clipping is given below.
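The clipping can be sketched as follows, under the assumption that the second virtual image and the second target eye's visible range have both already been expressed as axis-aligned rectangles (x_min, y_min, x_max, y_max) on the plane of the second virtual image; a real implementation may instead intersect arbitrary polygons or per-pixel masks. The overlap of the two rectangles is the second target virtual image.

```python
def clip_to_visible_range(virtual_image_rect, visible_rect):
    """Intersect two (x_min, y_min, x_max, y_max) rectangles on the virtual
    image plane; returns None when the eye cannot see the virtual image."""
    x_min = max(virtual_image_rect[0], visible_rect[0])
    y_min = max(virtual_image_rect[1], visible_rect[1])
    x_max = min(virtual_image_rect[2], visible_rect[2])
    y_max = min(virtual_image_rect[3], visible_rect[3])
    if x_min >= x_max or y_min >= y_max:
        return None
    return (x_min, y_min, x_max, y_max)

# Example: the second virtual image spans 1.2 m x 0.8 m; the visible range
# covers a patch shifted toward its right edge.
second_virtual_image = (-0.6, -0.4, 0.6, 0.4)
second_visible_range = (0.1, -0.2, 0.9, 0.3)
print(clip_to_visible_range(second_virtual_image, second_visible_range))
# -> (0.1, -0.2, 0.6, 0.3)
```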
Optionally, the second determining module 303 is configured to:
determining the current visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determining the part of the first virtual image that is located within the current visible range of the first target eye as the first target virtual image.
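The fifth determining module then maps each target virtual image back to a gazing area on the corresponding display screen. The sketch below assumes the lens simply magnifies the screen image by a constant factor about the optical axis (a thin-lens style approximation chosen for illustration; real headset optics may require a per-pixel distortion model), so the mapping reduces to dividing the rectangle coordinates by the magnification.

```python
def virtual_image_rect_to_screen_region(rect, magnification):
    # Scale an (x_min, y_min, x_max, y_max) rectangle on the virtual image
    # back to screen coordinates by dividing out the lens magnification.
    return tuple(v / magnification for v in rect)

# Example: with a 5x lens, a 100 mm x 60 mm patch of the first target virtual
# image corresponds to a roughly 20 mm x 12 mm gazing area on the screen.
print(virtual_image_rect_to_screen_region((-0.050, -0.030, 0.050, 0.030), 5.0))
# -> roughly (-0.01, -0.006, 0.01, 0.006)
```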
Fig. 12 shows a schematic structural diagram of another wearable device 20 provided in the embodiment of the present invention, where the wearable device 20 includes a gaze area determination apparatus 24, an image acquisition component 23, a first display component 21, and a second display component 22.
The gazing area determining device 24 may be the gazing area determining device 30 shown in fig. 11, and the image acquisition component 23, the first display component 21 and the second display component 22 may refer to the foregoing description, which is not repeated here in this embodiment of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the present invention, the terms "first", "second", "third" and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A method for determining a gazing area, applied to a wearable device, wherein the wearable device comprises a first display component and a second display component, the first display component comprises a first display screen and a first lens located on a light-emitting side of the first display screen, and the second display component comprises a second display screen and a second lens located on a light-emitting side of the second display screen, the method comprising the following steps:
acquiring a current gaze point of a first target eye on the first display screen, wherein the first target eye is the left eye or the right eye;
determining a target virtual area according to the gaze point and the field angle of the first target eye, wherein the target virtual area is the area currently located within the visible range of the first target eye in the three-dimensional environment presented by the wearable device;
determining a first target virtual image according to the gaze point and the field angle of the first target eye, wherein the first target virtual image is the portion, located within the visible range of the first target eye, of a first virtual image formed by the first lens from the image currently displayed on the first display screen;
determining the target virtual area as the area currently located within the visible range of a second target eye in the three-dimensional environment presented by the wearable device, wherein the second target eye is the one of the left eye and the right eye other than the first target eye;
determining a second target virtual image according to the target virtual area and the position of the second target eye, wherein the second target virtual image is the portion, located within the visible range of the second target eye, of a second virtual image formed by the second lens from the image currently displayed on the second display screen; and
determining, according to the first target virtual image and the second target virtual image, a first gazing area of the first target eye in the image displayed on the first display screen and a second gazing area of the second target eye in the image displayed on the second display screen.
2. The method of claim 1, wherein the determining a target virtual area according to the gaze point and the field angle of the first target eye comprises:
determining the current visible range of the first target eye according to the gaze point and the field angle of the first target eye; and
determining the region in the three-dimensional environment that is located within the current visible range of the first target eye as the target virtual area.
3. The method of claim 1 or 2, wherein the determining a second target virtual image according to the target virtual area and the position of the second target eye comprises:
determining the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye; and
determining the part of the second virtual image that is located within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
4. The method of claim 1, wherein the determining a first target virtual image according to the gaze point and the field angle of the first target eye comprises:
determining the current visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determining the part of the first virtual image that is located within the current visible range of the first target eye as the first target virtual image.
5. The method of claim 4, wherein the determining, according to the first target virtual image and the second target virtual image, a first gazing area of the first target eye in the image displayed on the first display screen and a second gazing area of the second target eye in the image displayed on the second display screen comprises:
acquiring a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen;
determining the first corresponding area as the first gazing area; and
determining the second corresponding area as the second gazing area.
6. An apparatus for determining a gazing area, applied to a wearable device, wherein the wearable device comprises a first display component and a second display component, the first display component comprises a first display screen and a first lens located on a light-emitting side of the first display screen, and the second display component comprises a second display screen and a second lens located on a light-emitting side of the second display screen, the apparatus comprising:
an acquisition module, configured to acquire a current gaze point of a first target eye on the first display screen, wherein the first target eye is the left eye or the right eye;
a first determining module, configured to determine a target virtual area according to the gaze point and a field angle of the first target eye, where the target virtual area is an area currently located within a visible range of the first target eye in a three-dimensional environment presented by the wearable device;
a second determining module, configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, wherein the first target virtual image is the portion, located within the visible range of the first target eye, of a first virtual image formed by the first lens from the image currently displayed on the first display screen;
a third determining module, configured to determine the target virtual area as an area currently located within a visible range of a second target eye in the three-dimensional environment presented by the wearable device, where the second target eye is an eye other than the first target eye in the left eye and the right eye;
a fourth determining module, configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, wherein the second target virtual image is the portion, located within the visible range of the second target eye, of a second virtual image formed by the second lens from the image currently displayed on the second display screen;
a fifth determining module, configured to determine, according to the first target virtual image and the second target virtual image, a first gazing area of the first target eye in the image displayed on the first display screen, and a second gazing area of the second target eye in the image displayed on the second display screen.
7. The apparatus of claim 6, wherein the first determining module is configured to:
determining the current visible range of the first target eye according to the gaze point and the field angle of the first target eye; and
determining the region in the three-dimensional environment that is located within the current visible range of the first target eye as the target virtual area.
8. The apparatus of claim 6 or 7, wherein the fourth determining module is configured to:
determining the current visible range of the second target eye according to the target virtual area and the position of the second target eye; and
determining the part of the second virtual image that is located within the current visible range of the second target eye as the second target virtual image.
9. The apparatus of claim 8, wherein the second determining module is configured to:
determining the current visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determining the part of the first virtual image that is located within the current visible range of the first target eye as the first target virtual image.
10. A wearable device, characterized in that the wearable device comprises: an apparatus for determining a gazing area, an image acquisition component, a first display component, and a second display component;
the first display assembly comprises a first display screen and a first lens positioned on the light emergent side of the first display screen, and the second display assembly comprises a second display screen and a second lens positioned on the light emergent side of the second display screen;
wherein the apparatus for determining a gazing area is the apparatus according to any one of claims 6 to 9.
CN201910333506.7A 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device Active CN109901290B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910333506.7A CN109901290B (en) 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device
PCT/CN2020/080961 WO2020215960A1 (en) 2019-04-24 2020-03-24 Method and device for determining area of gaze, and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910333506.7A CN109901290B (en) 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device

Publications (2)

Publication Number Publication Date
CN109901290A (en) 2019-06-18
CN109901290B (en) 2021-05-14

Family

ID=66956250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910333506.7A Active CN109901290B (en) 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device

Country Status (2)

Country Link
CN (1) CN109901290B (en)
WO (1) WO2020215960A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901290B (en) * 2019-04-24 2021-05-14 京东方科技集团股份有限公司 Method and device for determining gazing area and wearable device
CN110347265A (en) * 2019-07-22 2019-10-18 北京七鑫易维科技有限公司 Render the method and device of image
CN114581514B (en) * 2020-11-30 2025-02-07 华为技术有限公司 Method for determining binocular gaze points and electronic device
US11474598B2 (en) 2021-01-26 2022-10-18 Huawei Technologies Co., Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions
CN113467619B (en) * 2021-07-21 2023-07-14 腾讯科技(深圳)有限公司 Picture display method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105425399A (en) * 2016-01-15 2016-03-23 中意工业设计(湖南)有限责任公司 Method for rendering user interface of head-mounted equipment according to human eye vision feature
CN106233187A (en) * 2014-04-25 2016-12-14 微软技术许可有限责任公司 There is the display device of light modulation panel
CN107797280A (en) * 2016-08-31 2018-03-13 乐金显示有限公司 Personal immersion display device and its driving method
CN109031667A (en) * 2018-09-01 2018-12-18 哈尔滨工程大学 A kind of virtual reality glasses image display area horizontal boundary localization method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9398229B2 (en) * 2012-06-18 2016-07-19 Microsoft Technology Licensing, Llc Selective illumination of a region within a field of view
US10129538B2 (en) * 2013-02-19 2018-11-13 Reald Inc. Method and apparatus for displaying and varying binocular image content
JP6509101B2 (en) * 2015-12-09 2019-05-08 Kddi株式会社 Image display apparatus, program and method for displaying an object on a spectacle-like optical see-through type binocular display
US10429647B2 (en) * 2016-06-10 2019-10-01 Facebook Technologies, Llc Focus adjusting virtual reality headset
US20190018483A1 (en) * 2017-07-17 2019-01-17 Thalmic Labs Inc. Dynamic calibration systems and methods for wearable heads-up displays
CN108369744B (en) * 2018-02-12 2021-08-24 香港应用科技研究院有限公司 3D Gaze Detection via Binocular Homography Mapping
CN109087260A (en) * 2018-08-01 2018-12-25 北京七鑫易维信息技术有限公司 A kind of image processing method and device
CN109901290B (en) * 2019-04-24 2021-05-14 京东方科技集团股份有限公司 Method and device for determining gazing area and wearable device

Also Published As

Publication number Publication date
WO2020215960A1 (en) 2020-10-29
CN109901290A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109901290B (en) Method and device for determining gazing area and wearable device
US10855909B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
CN106796344B (en) System, arrangement and the method for the enlarged drawing being locked on object of interest
JP5873982B2 (en) 3D display device, 3D image processing device, and 3D display method
CN106484116B (en) The treating method and apparatus of media file
CN108259883B (en) Image processing method, head-mounted display, and readable storage medium
US10715791B2 (en) Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes
CN111556305B (en) Image processing method, VR device, terminal, display system and computer-readable storage medium
CN111710050A (en) Image processing method and device for virtual reality equipment
US10885651B2 (en) Information processing method, wearable electronic device, and processing apparatus and system
CN108710206A (en) A kind of method and apparatus of anti-dazzle and visual fatigue applied to VR displays
CN113467619A (en) Picture display method, picture display device, storage medium and electronic equipment
US20230244307A1 (en) Visual assistance
US20190281280A1 (en) Parallax Display using Head-Tracking and Light-Field Display
CN115359093A (en) A Gaze Tracking Method Based on Monocular Gaze Estimation
CN106851249A (en) Image processing method and display device
CN113438464A (en) Switching control method, medium and system for naked eye 3D display mode
JP2023515205A (en) Display method, device, terminal device and computer program
CN114371779B (en) Visual enhancement method for sight depth guidance
US10255676B2 (en) Methods and systems for simulating the effects of vision defects
EP3038061A1 (en) Apparatus and method to display augmented reality data
CN109917908B (en) Image acquisition method and system of AR glasses
CN111654688A (en) Method and equipment for acquiring target control parameters
CN114581514B (en) Method for determining binocular gaze points and electronic device
CN115883816A (en) Display method and device, head-mounted display equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant