WO2021110035A1 - Eye positioning device and method, and 3D display device, method and terminal - Google Patents
眼部定位装置、方法及3d显示设备、方法和终端 Download PDFInfo
- Publication number
- WO2021110035A1 WO2021110035A1 PCT/CN2020/133329 CN2020133329W WO2021110035A1 WO 2021110035 A1 WO2021110035 A1 WO 2021110035A1 CN 2020133329 W CN2020133329 W CN 2020133329W WO 2021110035 A1 WO2021110035 A1 WO 2021110035A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- black
- eye
- white
- eye positioning
- spatial position
- Prior art date
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
        - H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
          - H04N13/106—Processing image signals
            - H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
        - H04N13/20—Image signal generators
          - H04N13/204—Image signal generators using stereoscopic image cameras
            - H04N13/246—Calibration of cameras
          - H04N13/257—Colour aspects
        - H04N13/30—Image reproducers
          - H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
          - H04N13/366—Image reproducers using viewer tracking
            - H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Definitions
- This application relates to 3D display technology, such as eye positioning devices, methods, and 3D display equipment, methods, and terminals.
- the embodiments of the present application intend to provide eye positioning devices, methods, 3D display devices, methods and terminals, computer-readable storage media, and computer program products.
- an eye positioning device including: an eye locator, including a black-and-white camera configured to capture a black-and-white image of a user's face, and a depth acquisition device configured to acquire depth information of the face;
- an eye positioning image processor configured to determine the spatial position of the eyes based on the black-and-white image and the depth information.
- the spatial position of the user's eyes can be determined with high precision, making it possible to provide a 3D display image of the display object that matches that position, improving 3D display quality and enhancing the viewing experience.
- the viewpoint position of the user's eyes can be determined, so as to provide the user with a more accurate 3D display with a higher degree of freedom.
- the eye positioning image processor is further configured to recognize the presence of eyes based on the black and white image.
- the eye positioning device includes an eye positioning data interface configured to transmit eye spatial position information including the spatial position of the eye.
- the depth-of-field acquisition device is a structured light camera or a TOF camera.
- the eye positioning device further includes a viewing angle determining device configured to calculate the user's viewing angle with respect to the 3D display device.
- 3D display images of the display object as viewed from different angles can be generated in a follow-up manner, so that the user watches a 3D display image consistent with the viewing angle, enhancing the realism and immersion of the 3D display.
- the black and white camera is configured to capture a sequence of black and white images.
- the eye positioning image processor includes: a buffer configured to buffer multiple black-and-white images in a black-and-white image sequence; a comparator configured to compare earlier and later black-and-white images in the sequence; and a judger configured to, when the comparator finds that the presence of eyes is not recognized in the current black-and-white image of the sequence but is recognized in a preceding or subsequent black-and-white image, take the eye spatial position information determined based on that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
- a 3D display device including: a multi-viewpoint 3D display screen, including a plurality of sub-pixels corresponding to a plurality of viewpoints; the eye positioning device as described above, is configured to determine the spatial position of a user's eyes And a 3D processing device configured to determine the viewpoint according to the spatial position of the user's eyes, and render sub-pixels corresponding to the viewpoint based on the 3D signal.
- the multi-view 3D display screen includes a plurality of composite pixels, each of which includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of multiple sub-pixels corresponding to the multiple viewpoints.
- the 3D processing device and the eye positioning device are communicatively connected through an eye positioning data interface.
- the 3D display device further includes a 3D photographing device configured to capture 3D images, and the 3D photographing device includes a depth-of-field camera and at least two color cameras.
- the eye positioning device is integrated with the 3D camera.
- the 3D camera is placed in front of the 3D display device.
- an eye positioning method including: taking a black and white image of a user's face; acquiring depth information of the face; and determining the spatial position of the eye based on the black and white image and the depth information.
- the eye positioning method further includes: recognizing the presence of the eye based on the black and white image.
- the eye positioning method further includes: transmitting eye spatial position information including the spatial position of the eye.
- the eye positioning method further includes: photographing a black-and-white image sequence including black-and-white images.
- the eye positioning method further includes: buffering multiple black-and-white images in a black-and-white image sequence; comparing earlier and later black-and-white images in the sequence; and, when the comparison finds that the presence of eyes is not recognized in the current black-and-white image but is recognized in a preceding or subsequent one, taking the eye spatial position information determined based on that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
- a 3D display method including: determining the spatial position of a user's eyes; determining a viewpoint according to the spatial position of the user's eyes, and rendering sub-pixels corresponding to the viewpoint based on a 3D signal; wherein the 3D display device includes a multi-viewpoint 3D display screen, which includes multiple sub-pixels corresponding to multiple viewpoints.
- a 3D display terminal including a processor, a memory storing program instructions, and a multi-view 3D display screen.
- the processor is configured to execute the 3D display method described above when executing the program instructions.
- the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned eye positioning method and 3D display method.
- the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
- the above-mentioned computer program includes program instructions.
- when the program instructions are executed by a computer, they cause the computer to execute the above-mentioned eye positioning method and 3D display method.
- Fig. 1 is a schematic diagram of an eye positioning device according to an embodiment of the present disclosure
- FIGS. 2A and 2B are schematic diagrams of a 3D display device according to an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of using an eye positioning device according to an embodiment of the present disclosure to determine the spatial position of an eye
- FIG. 4 is a schematic diagram of the steps of an eye positioning method according to an embodiment of the present disclosure.
- Fig. 5 is a schematic diagram of steps of an eye positioning method according to an embodiment of the present disclosure.
- Fig. 6 is a schematic diagram of steps of an eye positioning method according to an embodiment of the present disclosure.
- FIG. 7 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
- Fig. 8 is a schematic structural diagram of a 3D display terminal according to an embodiment of the present disclosure.
- an eye positioning device configured to be used in a 3D display device.
- the eye positioning device includes: an eye locator, including a black-and-white camera configured to capture a black-and-white image and a depth-of-field acquisition device configured to acquire depth information;
- an eye positioning image processor configured to recognize the presence of eyes based on the black-and-white image and to determine the spatial position of the eyes based on the black-and-white image and the acquired depth information.
- Such an eye positioning device is exemplarily shown in FIG. 1.
- a 3D display device including: a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen) including a plurality of sub-pixels corresponding to a plurality of viewpoints; a 3D processing device configured to render the sub-pixels corresponding to the viewpoint based on the 3D signal, wherein the viewpoint is determined by the spatial position of the user's eyes; and the eye positioning device described above.
- the determination of the viewpoint by the spatial position of the eye may be implemented by a 3D processing device, or may be implemented by an eye positioning image processor of an eye positioning device.
- the 3D processing device is communicatively connected with the multi-view 3D display screen.
- the 3D processing device is communicatively connected with the driving device of the multi-view 3D display screen.
- an eye positioning method including: taking a black and white image; acquiring depth information; recognizing the presence of the eye based on the black and white image; and determining the spatial position of the eye based on the black and white image and the acquired depth information.
- a 3D display method is provided, which is suitable for a 3D display device.
- the 3D display device includes a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints; the 3D display method includes: transmitting a 3D signal; using the eye positioning method described above to determine the spatial position of the user's eyes; determining the viewpoint at which the eyes are located based on that spatial position; and rendering the sub-pixels corresponding to the viewpoint based on the 3D signal.
- FIG. 2A shows a schematic diagram of a 3D display device 100 according to an embodiment of the present disclosure.
- a 3D display device 100 is provided, including a multi-viewpoint 3D display screen 110, a signal interface 140 configured to receive video frames of a 3D signal, a 3D processing device 130 communicatively connected to the signal interface 140, and an eye positioning device 150.
- the eye positioning device 150 is communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
- the 3D processing device is configured to determine, from the spatial position of the eyes, the viewpoint at which the user's eyes are located. In other embodiments, this determination can also be made by the eye positioning device, with the 3D processing device receiving eye positioning data that includes the viewpoint.
- the eye positioning data may include the spatial position of the eyes, such as the distance between the user's eyes and the multi-viewpoint 3D display screen, the viewpoint at which the user's eyes are located, and the user's viewing angle.
- the multi-view 3D display screen 110 may include a display panel and a grating (not labeled) covering the display panel.
- the multi-view 3D display screen 110 may include m columns and n rows, that is, m ⁇ n composite pixels, thereby defining a display resolution of m ⁇ n.
- the resolution of m ⁇ n may be a resolution above Full High Definition (FHD), including but not limited to 1920 ⁇ 1080, 1920 ⁇ 1200, 2048 ⁇ 1280, 2560 ⁇ 1440, 3840 ⁇ 2160, etc.
- each composite pixel includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of i sub-pixels of the same color corresponding to i viewpoints, i ⁇ 3.
- in the illustrated embodiment, i = 6; the multi-viewpoint 3D display screen may correspondingly have six viewpoints (V1-V6), although more or fewer viewpoints are conceivable.
- the three composite sub-pixels respectively correspond to three colors, namely red (R), green (G) and blue (B).
- the three composite sub-pixels in each composite pixel are arranged in a single column, and the six sub-pixels of each composite sub-pixel are arranged in a single row.
- multiple composite sub-pixels in each composite pixel are arranged in different forms; it is also conceivable that multiple sub-pixels in each composite sub-pixel are arranged in different forms.
- the 3D display apparatus 100 may be provided with a single 3D processing device 130.
- the single 3D processing device 130 simultaneously processes the rendering of the sub-pixels of each composite sub-pixel of each composite pixel of the multi-view 3D display screen 110.
- the 3D display device 100 may also be provided with more than one 3D processing device 130, which handle the rendering of the sub-pixels of the composite sub-pixels of the composite pixels of the multi-view 3D display screen 110 in parallel, in series, or in a series-parallel combination.
- 3D processing device can be allocated in other ways and process multiple rows and multiple columns of composite pixels or composite sub-pixels of the multi-view 3D display screen 110 in parallel, which falls within the scope of the embodiments of the present disclosure.
- the 3D processing device 130 may also optionally include a buffer 131 to buffer the received video frames.
- the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
- the 3D display device 100 may further include a processor 101 communicatively connected to the 3D processing device 130 through the signal interface 140.
- the processor 101 is included in a computer or a smart terminal, such as a mobile terminal, or as a processor unit thereof.
- the processor 101 may be arranged outside the 3D display device.
- the 3D display device may be a multi-view 3D display with a 3D processing device, such as a non-intelligent 3D TV.
- the following exemplary embodiments of the 3D display device include a processor inside.
- the signal interface 140 is an internal interface connecting the processor 101 and the 3D processing device 130.
- the signal interface as the internal interface of the 3D display device may be a MIPI, mini-MIPI, LVDS, mini-LVDS or Display Port interface.
- the processor 101 of the 3D display device 100 may include a register 122.
- the register 122 can be configured to temporarily store instructions, data, and addresses.
- the register 122 may be configured to receive information about the display requirements of the multi-view 3D display screen 110.
- the 3D display device 100 may further include a codec configured to decompress and decode the compressed 3D signal and send the decompressed 3D signal to the 3D processing device 130 via the signal interface 140.
- the 3D display device 100 further includes a 3D photographing device 120 configured to capture 3D images.
- the eye positioning device 150 is integrated in the 3D photographing device 120, or it is conceivable to be integrated into a conventional photographing device of a processing terminal or a display device.
- the 3D camera 120 is a front camera.
- the 3D photographing device 120 includes a camera unit 121, a 3D image processor 126, and a 3D image output interface 125.
- the camera unit 121 includes a first color camera 121a, a second color camera 121b, and a depth camera 121c.
- the 3D image processor 126 may be integrated in the camera unit 121.
- the first color camera 121a is configured to obtain a first color image of the subject
- the second color camera 121b is configured to obtain a second color image of the subject
- a composite color image of an intermediate point is obtained by synthesizing the two color images;
- the depth-of-field camera 121c is configured to obtain depth information of the subject.
- the synthesized color image and the depth information together form a 3D image.
- the first color camera and the second color camera are the same color camera.
- the first color camera and the second color camera may also be different color cameras.
- the first color image and the second color image can be calibrated or corrected.
- the depth-of-field camera 121c may be a TOF (time of flight) camera or a structured light camera.
- the depth camera 121c may be arranged between the first color camera and the second color camera.
- the 3D image processor 126 is configured to synthesize the first color image and the second color image into a synthetic color image, and synthesize the obtained synthetic color image and depth information into a 3D image.
- the formed 3D image is transmitted to the processor 101 of the 3D display device 100 through the 3D image output interface 125.
- the first color image, the second color image, and the depth information are directly transmitted to the processor 101 of the 3D display device 100 via the 3D image output interface 125, and the processor 101 performs the aforementioned synthesis of the two color images, the forming of the 3D image, and other processing.
- the 3D image output interface 125 may also be communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can perform processing such as synthesizing color images and forming 3D images.
- At least one of the first color camera and the second color camera is a wide-angle color camera.
- the eye positioning device 150 is integrated in the 3D photographing device 120 and includes an eye locator 151, an eye positioning image processor 152 and an eye positioning data interface 153.
- the eye locator 151 includes a black-and-white camera 151a and a depth-of-field acquisition device 151b.
- the black-and-white camera 151a is configured to capture black-and-white images
- the depth-of-field acquisition device 151b is configured to acquire depth-of-field information.
- the eye positioning device 150 is also front-facing.
- the subject of the black-and-white camera 151a is the user's face, and the face or eye is recognized based on the black-and-white image captured, and the depth-of-field acquiring device acquires at least the depth information of the eye, and may also acquire the depth information of the face.
- the eye positioning data interface 153 of the eye positioning device 150 is communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can directly receive the eye positioning data.
- the eye positioning image processor 152 may be communicatively connected to the processor 101 of the 3D display device 100, so that the eye positioning data can be transmitted from the processor 101 to the 3D processing device 130 through the eye positioning data interface 153.
- the eye positioning device 150 is communicatively connected with the camera unit 121, so that the eye positioning data can be used when shooting 3D images.
- the eye locator 151 is also provided with an infrared emitting device 154.
- the infrared emitting device 154 is configured to selectively emit infrared light to supplement illumination when the ambient light is insufficient, for example when shooting at night, so that black-and-white images from which the face and eyes can be recognized can still be captured under weak ambient light.
- the eye positioning device 150, or the processing terminal or display device integrating it, may be configured to, while the black-and-white camera is working, switch on the infrared emitting device or adjust its intensity based on the received light sensing signal, for example when the light sensing signal is detected to be below a given threshold.
- the light sensing signal is received from an ambient light sensor integrated in the processing terminal or the display device.
- the infrared emitting device 154 is configured to emit infrared light with a wavelength greater than or equal to 1.5 microns, that is, long-wave infrared light. Compared with short-wave infrared light, long-wave infrared light has a weaker ability to penetrate the skin, so it is less harmful to the eyes.
- the captured black and white image is transmitted to the eye positioning image processor 152.
- the eye positioning image processor is configured to have a visual recognition function, such as a face recognition function, and is configured to recognize a face and eyes based on a black and white image. Based on the identified eyes, the viewing angle of the user relative to the display screen of the display device can be obtained, which will be described below.
- the depth information of the eyes or the face acquired by the depth acquisition device 151b is also transmitted to the eye positioning image processor 152.
- the eye positioning image processor 152 is configured to determine the spatial position of the eye based on the black and white image and the acquired depth information, which will be described below.
- the depth-of-field acquisition device 151b is a structured light camera or a TOF camera.
- the TOF camera includes a projector and a receiver.
- the projector emits light pulses toward the observed object, the receiver receives the light pulses reflected back from it, and the round-trip time of the light pulse is used to calculate the distance between the observed object and the camera.
- the structured light camera includes a projector and a collector.
- surface structured light, such as coded structured light, is projected onto the observed object through the projector, forming a distorted image of the structured light on the object's surface.
- the distorted image is collected and analyzed by the collector, so as to restore the three-dimensional outline and spatial information of the observed object.
- the black and white camera 151a is a wide-angle black and white camera.
- the depth-of-field acquisition device 151b and the depth-of-field camera 121c of the 3D photographing device 120 may be the same. In this case, the depth acquisition device 151b and the depth camera 121c may be the same TOF camera or the same structured light camera. In other embodiments, the depth-of-field acquisition device 151b and the depth-of-field camera 121c may be different.
- the eye positioning device 150 includes a viewing angle determining device 155, which is configured to calculate the user's viewing angle with respect to the 3D display device or its display screen or black and white camera.
- the viewing angle includes, but is not limited to, the inclination angle, relative to the black-and-white camera plane MCP / display plane DLP, of the line connecting the user's single eye with the black-and-white camera lens center O / display center DLC, and the inclination angle of the line connecting the midpoint between the two eyes (the binocular center) with the lens center O / display center DLC relative to the plane MCP / DLP.
- the viewing angle may also include the inclination angle of the binocular line relative to the black-and-white camera plane MCP / display plane DLP, and the inclination angle of the plane of the face HFP relative to the plane MCP / DLP, etc.
- the plane HFP of the face can be determined by extracting several facial features, such as the eyes and ears, the corners of the eyes and the mouth, the eyes and the chin, and so on.
- the black-and-white camera plane MCP can be regarded as the display screen plane DLP.
- the inclination angle of a line relative to a plane, as described above, includes but is not limited to: the angle between the line and its projection in the plane, the angle between that projection and the horizontal direction of the plane, and the angle between that projection and the vertical direction of the plane.
- the angle between the line and the projection of the line in the plane may have a horizontal component and a vertical component.
- the viewing angle determining device 155 may be integrated in the eye positioning image processor 152.
- the eye positioning image processor 152 is configured to determine the spatial position of the eye based on the black and white image and the depth information.
- the spatial position of the eyes includes, but is not limited to, the viewing angle described above, the distance of the eyes relative to the black-and-white camera plane MCP / display plane DLP, and the spatial coordinates of the eyes relative to the eye positioning device or its black-and-white camera, or to the 3D display device or its display screen, etc.
- the eye positioning device 150 may further include a viewing angle data output interface configured to output the viewing angle calculated by the viewing angle determining device.
- the viewing angle determination device may be integrated in the 3D processing device.
- from the black-and-white image of the user's left and right eyes captured by the black-and-white camera 151a, the X-axis (horizontal) and Y-axis (vertical) coordinates at which the eyes are imaged in the focal plane FP can be obtained.
- with the lens center O of the black-and-white camera 151a as origin, the X axis and the Y axis (not shown) perpendicular to it form the black-and-white camera plane MCP, which is parallel to the focal plane FP;
- the direction of the optical axis is the Z axis, which is also the depth direction. That is to say, in the XZ plane shown in FIG. 3,
- the X-axis coordinates XR and XL at which the left and right eyes are imaged in the focal plane FP are known; moreover, the focal length f of the black-and-white camera 151a is known;
- the inclination angle β, relative to the X axis, of the projection in the XZ plane of the line connecting each eye with the lens center O can therefore be calculated, as further described below.
- likewise, the Y-axis coordinates of the imaged left and right eyes in the focal plane FP, combined with the known focal length f, yield the inclination angles of the corresponding projections in the YZ plane.
- from the black-and-white images of the user's left and right eyes captured by the black-and-white camera 151a, together with the depth information of the eyes acquired by the depth-of-field acquisition device 151b, the spatial coordinates of the left and right eyes can be obtained.
- the angle ⁇ between the projection of the line connecting the left eye and the right eye in the XZ plane and the X axis can be calculated.
- the angle between the projection of the line connecting the left eye and the right eye in the YZ plane and the Y axis can be calculated.
- FIG. 3 schematically shows a top view of a geometric relationship model that uses a black and white camera 151a and a depth acquisition device 151b (not shown) to determine the spatial position of the eye.
- R and L represent the user's right eye and left eye, respectively
- XR and XL are respectively the X-axis coordinates of the user's right eye R and left eye L in the focal plane FP of the black and white camera 151a.
- a threshold may be set for the included angle α, and when the included angle α does not exceed the threshold, the user can be considered to be looking straight at the display screen plane DLP.
- the eye positioning device 150 includes an eye positioning data interface 153 configured to transmit eye spatial position information, including but not limited to the tilt angle, included angle, and spatial coordinates as described above.
- eye spatial position information can provide users with targeted or customized 3D display screens.
- the angle of view such as the angle between the center of the user’s eyes and the center of the display DLC relative to the horizontal direction (X axis) or the vertical direction (Y axis) is transmitted to the 3D processing device 130.
- based on the received viewing angle, the 3D processing device 130 generates, in a follow-up manner, a 3D display image consistent with that angle, so as to present to the user the display object as observed from different angles.
- based on the angle between the line connecting the center of the user's eyes and the display center DLC and the horizontal direction (X axis), a follow-up effect can be presented in the horizontal direction;
- based on the angle relative to the vertical direction (Y axis), a follow-up effect can be presented in the vertical direction.
- the spatial coordinates of the user’s left and right eyes are transmitted to the 3D processing device 130 through the eye positioning data interface 153.
- the 3D processing device 130 determines, based on the received spatial coordinates, the viewpoints at which the user's eyes are located among those provided by the multi-viewpoint 3D display screen 110, and renders the corresponding sub-pixels based on the video frame of the 3D signal.
- when each of the user's eyes corresponds to one viewpoint, the sub-pixels corresponding to these two viewpoints among the multiple composite sub-pixels of each composite pixel are rendered based on the video frame of the 3D signal.
- the sub-pixels corresponding to the viewpoints adjacent to these two viewpoints may additionally be rendered.
- when each of the user's eyes lies between two viewpoints, the sub-pixels corresponding to these four viewpoints among the multiple composite sub-pixels of each composite pixel are rendered based on the video frame of the 3D signal.
- when at least one of the user's eyes has moved, the sub-pixels corresponding to the new predetermined viewpoint among the multiple composite sub-pixels of each composite pixel may be rendered based on the next video frame of the 3D signal.
- when there is more than one user, the sub-pixels corresponding to the viewpoints at which each user's eyes are located, among the multiple composite sub-pixels of each composite pixel, may be rendered based on the video frame of the 3D signal.
- the user's viewing angle and viewpoint position are determined separately, and a 3D display screen that changes with the viewing angle and viewpoint position is provided accordingly to improve the viewing experience.
- the eye spatial position information can also be directly transmitted to the processor 101 of the 3D display device 100, and the 3D processing device 130 receives/reads the eye spatial position from the processor 101 through the eye positioning data interface 153 information.
- the black-and-white camera 151a is configured to capture a sequence of black-and-white images, which includes a plurality of black-and-white images arranged in time.
- the eye positioning image processor 152 includes a buffer 156 and a comparator 157.
- the buffer 156 is configured to buffer a plurality of black-and-white images arranged sequentially in time in the black-and-white image sequence.
- the comparator 157 is configured to compare black-and-white images captured at successive times in the black-and-white image sequence. By comparison it can be judged, for example, whether the spatial position of the eyes has changed or whether the eyes are still within the viewing range.
- the eye positioning image processor 152 further includes a judger (not shown) configured to, based on the comparison result of the comparator, when the presence of eyes is not recognized in the current black-and-white image of the sequence but is recognized in a preceding or subsequent one, take the eye spatial position information determined from that preceding or subsequent black-and-white image as the current eye spatial position information.
- such a situation arises, for example, when the user briefly turns his head; the user's face and eyes may then momentarily fail to be recognized.
- the eye spatial position information determined from a black-and-white image captured later than the current one may be used as the current eye spatial position information; alternatively, that determined from an earlier black-and-white image may be used.
- the black and white camera 151a is configured to capture a sequence of black and white images at a frequency of 24 frames per second or more.
- the shooting is performed at a frequency of 30 frames per second.
- shooting is performed at a frequency of 60 frames per second.
- the black and white camera 151a is configured to shoot at the same frequency as the refresh frequency of the display screen of the 3D display device.
- the embodiments of the present disclosure may also provide an eye positioning method, which is implemented by using the eye positioning device in the above-mentioned embodiment.
- the eye positioning method includes:
- S401: capture a black-and-white image of the user's face;
- S402: acquire depth information of the face;
- S403: determine the spatial position of the eyes based on the captured black-and-white image and the depth information.
- the eye positioning method includes:
- S501: capture a black-and-white image of the user's face;
- S502: acquire depth information of the face;
- S503: recognize the presence of eyes based on the captured black-and-white image;
- S504: determine the spatial position of the eyes based on the captured black-and-white image and the depth information;
- S505: transmit eye spatial position information containing the spatial position of the eyes.
- the eye positioning method includes:
- S601 Take a black and white image sequence including black and white images of the user's face
- S602 Cache multiple black and white images in the black and white image sequence
- S603: compare earlier and later black-and-white images in the sequence;
- S604: acquire depth information of the face;
- S605: when the comparison finds that the presence of eyes is not recognized in the current black-and-white image but is recognized in a preceding or subsequent one, take the eye spatial position information determined from that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
- the embodiments of the present disclosure may also provide a 3D display method, which is applicable to the 3D display device in the above embodiment.
- the 3D display device includes a multi-viewpoint 3D display screen, and the multi-viewpoint 3D display screen includes a plurality of sub-pixels corresponding to a plurality of viewpoints.
- the 3D display method includes:
- S701 Determine the spatial position of the user's eyes
- S702 Determine the viewpoint according to the spatial position of the user's eyes, and render sub-pixels corresponding to the viewpoint based on the 3D signal.
- the embodiment of the present disclosure provides a 3D display terminal 800.
- the 3D display terminal 800 includes a processor 814, a memory 811, a multi-view 3D display screen 810, and may also include a communication interface 812 and a bus 813.
- the multi-view 3D display screen 810, the processor 814, the communication interface 812, and the memory 811 communicate with each other through the bus 813.
- the communication interface 812 can be used for information transmission.
- the processor 814 may call logic instructions in the memory 811 to execute the 3D display method of the foregoing embodiment.
- the logic instructions in the memory 811 can be implemented in the form of a software functional unit and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
- the memory 811 can be configured to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
- the processor 814 executes functional applications and data processing by running the program instructions/modules stored in the memory 811, that is, realizes the eye positioning method and/or the 3D display method in the foregoing method embodiment.
- the memory 811 may include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal device, and the like.
- the memory 811 may include a high-speed random access memory, and may also include a non-volatile memory.
- the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned eye positioning method and 3D display method.
- the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
- the above-mentioned computer program includes program instructions.
- when the program instructions are executed by a computer, they cause the computer to execute the above-mentioned eye positioning method and 3D display method.
- the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes one or more instructions to enable a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the method of the embodiments of the present disclosure.
- the aforementioned storage medium can be a non-transitory storage medium, including a USB flash drive, a removable hard disk, read-only memory, random access memory, a magnetic disk or an optical disk, and other media that can store program code, or it can be a transitory storage medium.
- the disclosed methods and products can be implemented in other ways.
- the device or device embodiments described above are merely illustrative.
- the division of units may be merely a logical function division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to implement this embodiment.
- the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The present application discloses an eye positioning device, comprising: an eye locator, including a black-and-white camera configured to capture a black-and-white image of a user's face and a depth-of-field acquisition device configured to acquire depth information of the face; and an eye positioning image processor configured to determine the spatial position of the eyes based on the black-and-white image and the acquired depth information. The device can determine the spatial position of the user's eyes with high precision, thereby improving 3D display quality. The application also discloses an eye positioning method, a 3D display device, a 3D display method, a 3D display terminal, a computer-readable storage medium, and a computer program product.
Description
This application claims priority to Chinese patent application No. 201911231165.9, filed with the China National Intellectual Property Administration on December 5, 2019 and entitled "Human eye tracking device and method, and 3D display device, method and terminal", the entire contents of which are incorporated herein by reference.
This application relates to 3D display technology, for example to eye positioning devices and methods, and to 3D display devices, methods and terminals.
Some conventional face or eye positioning devices detect only the distance between the face and the screen and rely on a preset or default interpupillary distance to determine the viewpoint at which the eyes are located. Such recognition is not highly accurate, may cause viewpoint calculation errors, and cannot satisfy high-quality 3D display.
This background is provided merely to facilitate understanding of the related art and is not to be regarded as an admission of prior art.
Summary
To provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. It is neither an extensive overview nor intended to identify key/critical elements or delimit the scope of these embodiments, but serves as a prelude to the detailed description that follows.
Embodiments of the present application intend to provide eye positioning devices and methods, 3D display devices, methods and terminals, computer-readable storage media, and computer program products.
In one aspect, an eye positioning device is provided, comprising: an eye locator, including a black-and-white camera configured to capture a black-and-white image of a user's face and a depth-of-field acquisition device configured to acquire depth information of the face; and an eye positioning image processor configured to determine the spatial position of the eyes based on the black-and-white image and the depth information.
With such an eye positioning device, the spatial position of the user's eyes can be determined with high precision, so that a 3D display image of the display object matching that position can be provided, improving 3D display quality and the viewing experience. Based on the actual spatial position of the user's eyes, the viewpoint at which the eyes are located can be determined, providing the user with a more accurate 3D display with a higher degree of freedom.
In some embodiments, the eye positioning image processor is further configured to recognize the presence of eyes based on the black-and-white image.
In some embodiments, the eye positioning device includes an eye positioning data interface configured to transmit eye spatial position information containing the spatial position of the eyes.
In some embodiments, the depth-of-field acquisition device is a structured light camera or a TOF camera.
In some embodiments, the eye positioning device further includes a viewing angle determining device configured to calculate the user's viewing angle relative to the 3D display device.
According to the viewing angle, 3D display images of the display object as observed from different angles can be generated in a follow-up manner, so that the user sees a 3D display image consistent with the viewing angle, enhancing the realism and immersion of the 3D display.
In some embodiments, the black-and-white camera is configured to capture a black-and-white image sequence.
In some embodiments, the eye positioning image processor includes: a buffer configured to buffer multiple black-and-white images in the sequence; a comparator configured to compare earlier and later black-and-white images in the sequence; and a judger configured to, when the comparator finds that the presence of eyes is not recognized in the current black-and-white image of the sequence but is recognized in a preceding or subsequent one, take the eye spatial position information determined based on that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
On this basis, for example when the black-and-white camera stutters or drops frames, a more continuous display image can be provided to the user, ensuring the viewing experience.
In one aspect, a 3D display device is provided, comprising: a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints; the eye positioning device described above, configured to determine the spatial position of a user's eyes; and a 3D processing device configured to determine the viewpoint according to the spatial position of the user's eyes and render the sub-pixels corresponding to the viewpoint based on a 3D signal.
In some embodiments, the multi-viewpoint 3D display screen includes multiple composite pixels, each composite pixel includes multiple composite sub-pixels, and each composite sub-pixel is composed of multiple sub-pixels corresponding to the multiple viewpoints.
In some embodiments, the 3D processing device and the eye positioning device are communicatively connected through an eye positioning data interface.
In some embodiments, the 3D display device further includes a 3D photographing device configured to capture 3D images, the 3D photographing device including a depth-of-field camera and at least two color cameras.
In some embodiments, the eye positioning device is integrated with the 3D photographing device.
In some embodiments, the 3D photographing device is front-facing on the 3D display device.
In one aspect, an eye positioning method is provided, comprising: capturing a black-and-white image of a user's face; acquiring depth information of the face; and determining the spatial position of the eyes based on the black-and-white image and the depth information.
In some embodiments, the eye positioning method further comprises: recognizing the presence of eyes based on the black-and-white image.
In some embodiments, the eye positioning method further comprises: transmitting eye spatial position information containing the spatial position of the eyes.
In some embodiments, the eye positioning method further comprises: capturing a black-and-white image sequence including the black-and-white image.
In some embodiments, the eye positioning method further comprises: buffering multiple black-and-white images in the sequence; comparing earlier and later black-and-white images in the sequence; and, when the comparison finds that the presence of eyes is not recognized in the current black-and-white image but is recognized in a preceding or subsequent one, taking the eye spatial position information determined from that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
In one aspect, a 3D display method is provided, comprising: determining the spatial position of a user's eyes; determining the viewpoint according to that spatial position, and rendering the sub-pixels corresponding to the viewpoint based on a 3D signal; wherein the 3D display device includes a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints.
In one aspect, a 3D display terminal is provided, including a processor, a memory storing program instructions, and a multi-viewpoint 3D display screen, the processor being configured to execute the 3D display method described above when executing the program instructions.
The computer-readable storage medium provided by embodiments of the present disclosure stores computer-executable instructions configured to execute the above eye positioning method and 3D display method.
The computer program product provided by embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the above eye positioning method and 3D display method.
The above general description and the following description are exemplary and explanatory only and are not intended to limit the present application.
One or more embodiments are exemplarily illustrated by the corresponding drawings. These exemplary illustrations and the drawings do not limit the embodiments, and the drawings are not drawn to scale. In the drawings:
Fig. 1 is a schematic diagram of an eye positioning device according to an embodiment of the present disclosure;
Figs. 2A and 2B are schematic diagrams of a 3D display device according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of determining the spatial position of the eyes using an eye positioning device according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of the steps of an eye positioning method according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of the steps of an eye positioning method according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of the steps of an eye positioning method according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of a 3D display terminal according to an embodiment of the present disclosure.
Reference numerals:
100: 3D display device; 101: processor; 122: register; 110: multi-viewpoint 3D display screen; 120: 3D photographing device; 121: camera unit; 121a: first color camera; 121b: second color camera; 121c: depth-of-field camera; 125: 3D image output interface; 126: 3D image processor; 130: 3D processing device; 131: buffer; 140: signal interface; 150: eye positioning device; 151: eye locator; 151a: black-and-white camera; 151b: depth-of-field acquisition device; 152: eye positioning image processor; 155: viewing angle determining device; 156: buffer; 157: comparator; 153: eye positioning data interface; FP: focal plane; O: lens center; f: focal length; MCP: black-and-white camera plane; R: user's right eye; L: user's left eye; P: interpupillary distance; XR: X coordinate of the right eye imaged in the focal plane; XL: X coordinate of the left eye imaged in the focal plane; βR: inclination angle; βL: inclination angle; α: included angle; DR: distance of the right eye R from the black-and-white camera plane MCP; DL: distance of the left eye L from the black-and-white camera plane MCP; DLP: display screen plane; DLC: display screen center; HFP: plane of the face; 800: 3D display terminal; 810: multi-viewpoint 3D display screen; 811: memory; 812: communication interface; 813: bus; 814: processor.
To understand the features and technical content of the embodiments of the present disclosure in more detail, their implementation is set forth below with reference to the accompanying drawings, which are for reference and illustration only and are not intended to limit the embodiments.
In an embodiment of the present disclosure, an eye positioning device configured for use in a 3D display device is provided. The eye positioning device includes: an eye locator, including a black-and-white camera configured to capture a black-and-white image and a depth-of-field acquisition device configured to acquire depth information; and an eye positioning image processor configured to recognize the presence of eyes based on the black-and-white image and to determine the spatial position of the eyes based on the black-and-white image and the acquired depth information. Such an eye positioning device is exemplarily shown in FIG. 1.
In an embodiment of the present disclosure, a 3D display device is provided, including: a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen) including multiple sub-pixels corresponding to multiple viewpoints; a 3D processing device configured to render the sub-pixels corresponding to the viewpoint based on a 3D signal, wherein the viewpoint is determined by the spatial position of the user's eyes; and the eye positioning device described above.
By way of explanation and not limitation, determining the viewpoint from the spatial position of the eyes may be implemented by the 3D processing device or by the eye positioning image processor of the eye positioning device.
In some embodiments, the 3D processing device is communicatively connected with the multi-viewpoint 3D display screen.
In some embodiments, the 3D processing device is communicatively connected with the driving device of the multi-viewpoint 3D display screen.
In an embodiment of the present disclosure, an eye positioning method is provided, including: capturing a black-and-white image; acquiring depth information; recognizing the presence of eyes based on the black-and-white image; and determining the spatial position of the eyes based on the black-and-white image and the acquired depth information.
In an embodiment of the present disclosure, a 3D display method suitable for a 3D display device is provided. The 3D display device includes a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints. The 3D display method includes: transmitting a 3D signal; determining the spatial position of the user's eyes using the eye positioning method described above; determining the viewpoint at which the eyes are located based on that spatial position; and rendering the sub-pixels corresponding to the viewpoint based on the 3D signal.
FIG. 2A shows a schematic diagram of a 3D display device 100 according to an embodiment of the present disclosure. Referring to FIG. 2A, a 3D display device 100 is provided, including a multi-viewpoint 3D display screen 110, a signal interface 140 configured to receive video frames of a 3D signal, a 3D processing device 130 communicatively connected to the signal interface 140, and an eye positioning device 150. The eye positioning device 150 is communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
In some embodiments, the 3D processing device is configured to determine, from the spatial position of the eyes, the viewpoint at which the user's eyes are located. In other embodiments, this determination may instead be made by the eye positioning device, with the 3D processing device receiving eye positioning data that includes the viewpoint.
By way of explanation and not limitation, the eye positioning data may include the spatial position of the eyes, for example the distance between the user's eyes and the multi-viewpoint 3D display screen, the viewpoint at which the user's eyes are located, the user's viewing angle, and the like.
The multi-viewpoint 3D display screen 110 may include a display panel and a grating (not labeled) covering the display panel. In the embodiment shown in FIG. 2A, the multi-viewpoint 3D display screen 110 may include m columns and n rows, i.e. m×n, composite pixels, thereby defining a display resolution of m×n.
In some embodiments, the m×n resolution may be at or above Full High Definition (FHD), including but not limited to 1920×1080, 1920×1200, 2048×1280, 2560×1440, 3840×2160, and so on.
By way of explanation and not limitation, each composite pixel includes multiple composite sub-pixels, and each composite sub-pixel is composed of i same-color sub-pixels corresponding to i viewpoints, i ≥ 3. In the embodiment shown in FIG. 2A, i = 6, but other values of i are conceivable. In the illustrated embodiment, the multi-viewpoint 3D display screen may correspondingly have i (i = 6) viewpoints (V1-V6), but it may correspondingly have more or fewer viewpoints.
By way of explanation and not limitation, in the embodiment shown in FIG. 2A, each composite pixel includes three composite sub-pixels, and each composite sub-pixel is composed of six same-color sub-pixels corresponding to the six viewpoints (i = 6). The three composite sub-pixels correspond to the three colors red (R), green (G) and blue (B), are arranged in a single column within each composite pixel, and the six sub-pixels of each composite sub-pixel are arranged in a single row. It is conceivable, however, for the composite sub-pixels within a composite pixel, or the sub-pixels within a composite sub-pixel, to be arranged differently.
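By way of illustration, the composite-pixel layout just described can be modeled as a small data structure. The following Python sketch assumes the i = 6 arrangement above; all class and function names are illustrative and not part of the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

NUM_VIEWPOINTS = 6  # i = 6 in the illustrated embodiment

@dataclass
class CompositeSubPixel:
    color: str              # 'R', 'G' or 'B'
    subpixels: List[float]  # one same-color sub-pixel per viewpoint V1..V6

@dataclass
class CompositePixel:
    channels: Tuple[CompositeSubPixel, ...]  # three composite sub-pixels in a single column

def make_display(m: int, n: int) -> List[List[CompositePixel]]:
    """Build an m-column x n-row grid of composite pixels (display resolution m x n)."""
    return [[CompositePixel(tuple(CompositeSubPixel(c, [0.0] * NUM_VIEWPOINTS)
                                  for c in ('R', 'G', 'B')))
             for _ in range(m)]
            for _ in range(n)]
```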
By way of explanation and not limitation, as shown for example in FIG. 2A, the 3D display device 100 may be provided with a single 3D processing device 130, which simultaneously processes the rendering of the sub-pixels of each composite sub-pixel of each composite pixel of the multi-viewpoint 3D display screen 110. In other embodiments, the 3D display device 100 may be provided with more than one 3D processing device 130, which handle that rendering in parallel, in series, or in a series-parallel combination. Those skilled in the art will appreciate that more than one 3D processing device may be allocated in other ways to process multiple rows and columns of composite pixels or composite sub-pixels of the multi-viewpoint 3D display screen 110 in parallel, which falls within the scope of the embodiments of the present disclosure.
In some embodiments, the 3D processing device 130 may optionally include a buffer 131 to buffer received video frames.
In some embodiments, the 3D processing device is an FPGA or ASIC chip, or an FPGA or ASIC chipset.
Still referring to FIG. 2A, the 3D display device 100 may further include a processor 101 communicatively connected to the 3D processing device 130 through the signal interface 140. In some embodiments shown herein, the processor 101 is included in, or serves as a processor unit of, a computer or a smart terminal such as a mobile terminal. It is conceivable, however, that in some embodiments the processor 101 may be arranged outside the 3D display device; for example, the 3D display device may be a multi-viewpoint 3D display with a 3D processing device, such as a non-smart 3D television.
For simplicity, the exemplary embodiments of the 3D display device below include a processor internally. On this basis, the signal interface 140 is an internal interface connecting the processor 101 and the 3D processing device 130. In some embodiments shown herein, such a signal interface may be a MIPI, mini-MIPI, LVDS, mini-LVDS or Display Port interface. In some embodiments, as shown in FIG. 2A, the processor 101 of the 3D display device 100 may include a register 122, which may be configured to temporarily store instructions, data and addresses. In some embodiments, the register 122 may be configured to receive information about the display requirements of the multi-viewpoint 3D display screen 110.
In some embodiments, the 3D display device 100 may further include a codec configured to decompress and decode the compressed 3D signal and send the decompressed 3D signal to the 3D processing device 130 via the signal interface 140.
Referring to FIG. 2B, the 3D display device 100 further includes a 3D photographing device 120 configured to capture 3D images; the eye positioning device 150 is integrated in the 3D photographing device 120, and could also conceivably be integrated in a conventional camera of a processing terminal or display device. In the illustrated embodiment, the 3D photographing device 120 is front-facing and includes a camera unit 121, a 3D image processor 126, and a 3D image output interface 125.
As shown in FIG. 2B, the camera unit 121 includes a first color camera 121a, a second color camera 121b, and a depth-of-field camera 121c. In other embodiments, the 3D image processor 126 may be integrated in the camera unit 121. In some embodiments, the first color camera 121a is configured to obtain a first color image of the subject and the second color camera 121b a second color image of the subject; a composite color image of an intermediate point is obtained by synthesizing the two. The depth-of-field camera 121c is configured to obtain depth information of the subject, and the synthesized color image and the depth information together form a 3D image. In the embodiments of the present disclosure, the first and second color cameras are the same color camera; in other embodiments they may be different color cameras, in which case the first and second color images may be calibrated or corrected in order to obtain the composite color image. The depth-of-field camera 121c may be a TOF (time-of-flight) camera or a structured light camera, and may be arranged between the first and second color cameras.
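A rough sketch of this image pipeline follows, assuming Python with NumPy; the per-pixel average below is only a placeholder for whatever intermediate-point synthesis the 3D image processor 126 actually performs:

```python
import numpy as np

def synthesize_rgbd(color_a: np.ndarray, color_b: np.ndarray,
                    depth: np.ndarray) -> np.ndarray:
    """Combine two HxWx3 color images and an HxW depth map into an HxWx4 RGBD image."""
    assert color_a.shape == color_b.shape and color_a.shape[:2] == depth.shape
    # placeholder for the intermediate-point synthesis of the two color images
    composite = (color_a.astype(np.float32) + color_b.astype(np.float32)) / 2.0
    # stack the depth information as a fourth channel to form the 3D image
    return np.dstack([composite, depth.astype(np.float32)])
```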
In some embodiments, the 3D image processor 126 is configured to synthesize the first and second color images into a composite color image and to combine the composite color image with the depth information into a 3D image. The formed 3D image is transmitted to the processor 101 of the 3D display device 100 through the 3D image output interface 125.
Optionally, the first color image, the second color image and the depth information are transmitted directly to the processor 101 of the 3D display device 100 via the 3D image output interface 125, and the processor 101 performs the synthesis of the two color images, the forming of the 3D image, and other processing.
Optionally, the 3D image output interface 125 may also be communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can perform the synthesis of the color images, the forming of the 3D image, and other processing.
In some embodiments, at least one of the first and second color cameras is a wide-angle color camera.
Still referring to FIG. 2B, the eye positioning device 150 is integrated in the 3D photographing device 120 and includes an eye locator 151, an eye positioning image processor 152, and an eye positioning data interface 153.
The eye locator 151 includes a black-and-white camera 151a configured to capture black-and-white images and a depth-of-field acquisition device 151b configured to acquire depth information. Where the 3D photographing device 120 is front-facing and the eye positioning device 150 is integrated in it, the eye positioning device 150 is also front-facing. The subject of the black-and-white camera 151a is then the user's face; the face or eyes are recognized from the captured black-and-white image, and the depth-of-field acquisition device acquires at least the depth information of the eyes and may also acquire that of the face.
In some embodiments, the eye positioning data interface 153 of the eye positioning device 150 is communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can directly receive the eye positioning data. In other embodiments, the eye positioning image processor 152 may be communicatively connected to the processor 101 of the 3D display device 100, so that the eye positioning data can be transmitted from the processor 101 to the 3D processing device 130 through the eye positioning data interface 153.
In some embodiments, the eye positioning device 150 is communicatively connected with the camera unit 121, so that the eye positioning data can be used when capturing 3D images.
Optionally, the eye locator 151 is further provided with an infrared emitting device 154. While the black-and-white camera 151a is working, the infrared emitting device 154 is configured to selectively emit infrared light to supplement illumination when the ambient light is insufficient, for example when shooting at night, so that black-and-white images from which the face and eyes can be recognized can still be captured under weak ambient light.
In some embodiments, the eye positioning device 150, or the processing terminal or display device integrating it, may be configured to, while the black-and-white camera is working, switch on the infrared emitting device or adjust its intensity based on the received light sensing signal, for example when the light sensing signal is detected to be below a given threshold. In some embodiments, the light sensing signal is received from an ambient light sensor integrated in the processing terminal or display device.
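The threshold-controlled fill light can be sketched as follows; the lux value and the `ir_led` driver interface are hypothetical, since the text only speaks of "a given threshold":

```python
AMBIENT_LUX_THRESHOLD = 10.0  # hypothetical value; the text only says "a given threshold"

def update_ir_fill_light(ambient_lux: float, camera_active: bool, ir_led) -> None:
    """Switch the infrared emitter on, or scale its intensity, while the
    black-and-white camera works and ambient light falls below the threshold."""
    if not camera_active or ambient_lux >= AMBIENT_LUX_THRESHOLD:
        ir_led.set_power(0.0)  # hypothetical driver call
        return
    # darker scene -> stronger fill light, clamped to [0, 1]
    ir_led.set_power(min(1.0, 1.0 - ambient_lux / AMBIENT_LUX_THRESHOLD))
```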
Optionally, the infrared emitting device 154 is configured to emit infrared light with a wavelength of 1.5 micrometers or more, i.e. long-wave infrared light. Compared with short-wave infrared light, long-wave infrared light penetrates the skin less and is therefore less harmful to the eyes.
The captured black-and-white image is transmitted to the eye positioning image processor 152. Exemplarily, the eye positioning image processor is configured with a visual recognition function, such as face recognition, and is configured to recognize the face and eyes from the black-and-white image. Based on the recognized eyes, the user's viewing angle relative to the display screen of the display device can be obtained, as described below.
The depth information of the eyes or face acquired by the depth-of-field acquisition device 151b is also transmitted to the eye positioning image processor 152, which is configured to determine the spatial position of the eyes based on the black-and-white image and the acquired depth information, as described below.
In some embodiments, the depth-of-field acquisition device 151b is a structured light camera or a TOF camera.
By way of explanation and not limitation, a TOF camera includes a projector and a receiver: the projector emits light pulses toward the observed object, the receiver receives the light pulses reflected back from it, and the round-trip time of the light pulse is used to calculate the distance between the observed object and the camera.
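The round-trip computation reduces to one line; a minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds: float) -> float:
    """One-way distance to the observed object: the pulse travels out and back,
    so the distance is c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a 4 ns round trip corresponds to about 0.6 m.
assert abs(tof_distance(4e-9) - 0.5996) < 1e-3
```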
By way of explanation and not limitation, a structured light camera includes a projector and a collector: surface structured light, such as coded structured light, is projected onto the observed object through the projector, forming a distorted image of the structured light on the object's surface; the collector then captures and analyzes the distorted image to recover the three-dimensional contour and spatial information of the observed object.
In some embodiments, the black-and-white camera 151a is a wide-angle black-and-white camera.
In some embodiments, the depth-of-field acquisition device 151b and the depth-of-field camera 121c of the 3D photographing device 120 may be the same, in which case they may be the same TOF camera or the same structured light camera. In other embodiments, they may be different.
In some embodiments, the eye positioning device 150 includes a viewing angle determining device 155 configured to calculate the user's viewing angle relative to the 3D display device, or to its display screen or its black-and-white camera.
Based on the black-and-white image captured by the black-and-white camera 151a, the viewing angle includes, but is not limited to, the inclination angle, relative to the black-and-white camera plane MCP / display screen plane DLP, of the line connecting the user's single eye with the black-and-white camera lens center O / display screen center DLC, and the inclination angle of the line connecting the midpoint between the two eyes (the binocular center) with the lens center O / display center DLC relative to the plane MCP / DLP.
On this basis, combined with the depth image acquired by the depth-of-field acquisition device 151b, the viewing angle may further include the inclination angle of the binocular line relative to the plane MCP / DLP, and the inclination angle of the plane of the face HFP relative to the plane MCP / DLP, etc. The plane HFP of the face may be determined by extracting several facial features, such as the eyes and ears, the eye corners and the mouth corners, the eyes and the chin, and so on. In the embodiments of the present disclosure, since the eye positioning device 150 and its black-and-white camera 151a are front-facing relative to the 3D display device or its display screen, the black-and-white camera plane MCP may be regarded as the display screen plane DLP.
By way of explanation and not limitation, the inclination angle of a line relative to a plane, as described above, includes but is not limited to: the angle between the line and its projection in the plane, the angle between that projection and the horizontal direction of the plane, and the angle between that projection and the vertical direction of the plane. The angle between the line and its projection in the plane may have a horizontal component and a vertical component.
In some embodiments, as shown in FIG. 2B, the viewing angle determining device 155 may be integrated in the eye positioning image processor 152. As described above, the eye positioning image processor 152 is configured to determine the spatial position of the eyes based on the black-and-white image and the depth information. In the embodiments of the present disclosure, the spatial position of the eyes includes, but is not limited to, the viewing angle described above, the distance of the eyes relative to the plane MCP / DLP, and the spatial coordinates of the eyes relative to the eye positioning device or its black-and-white camera, or to the 3D display device or its display screen, etc. In some embodiments, the eye positioning device 150 may further include a viewing angle data output interface configured to output the viewing angle calculated by the viewing angle determining device.
In other embodiments, the viewing angle determining device may be integrated in the 3D processing device.
By way of explanation and not limitation, from the black-and-white image captured by the black-and-white camera 151a containing the user's left and right eyes, the X-axis (horizontal) and Y-axis (vertical) coordinates at which the eyes are imaged in the focal plane FP of the black-and-white camera 151a can be obtained. As shown in FIG. 3, with the lens center O of the black-and-white camera 151a as origin, the X axis and the Y axis (not shown) perpendicular to it form the black-and-white camera plane MCP, which is parallel to the focal plane FP; the direction of the optical axis of the black-and-white camera 151a is the Z axis, which is also the depth direction. That is, in the XZ plane shown in FIG. 3, the X coordinates XR and XL at which the left and right eyes are imaged in the focal plane FP are known, and the focal length f of the black-and-white camera 151a is known; in this case, the inclination angle β, relative to the X axis, of the projection in the XZ plane of the line connecting each eye with the lens center O can be calculated, as described further below. Likewise, in the (not shown) YZ plane, the Y coordinates of the imaged eyes in the focal plane FP are known and, combined with the known focal length f, the inclination angles relative to the Y axis of the corresponding projections in the YZ plane can be calculated.
By way of explanation and not limitation, from the black-and-white image containing the user's left and right eyes captured by the black-and-white camera 151a and the depth information of the two eyes acquired by the depth-of-field acquisition device 151b, the spatial coordinates (X, Y, Z) of the left and right eyes in the coordinate system of the black-and-white camera 151a can be obtained, the Z coordinate being the depth. Accordingly, as shown in FIG. 3, the angle α between the projection of the binocular line in the XZ plane and the X axis can be calculated; likewise, the angle between the projection of the binocular line in the YZ plane and the Y axis can be calculated.
FIG. 3 schematically shows a top view of the geometric model used, with the black-and-white camera 151a and the depth-of-field acquisition device 151b (not shown), to determine the spatial position of the eyes. R and L denote the user's right and left eye, respectively, and XR and XL are the X coordinates at which the right eye R and left eye L are imaged in the focal plane FP of the black-and-white camera 151a. Given the focal length f of the black-and-white camera 151a and the X coordinates XR and XL of the two eyes in the focal plane FP, the inclination angles βR and βL, relative to the X axis, of the projections in the XZ plane of the lines connecting the right eye R and the left eye L with the lens center O follow from the pinhole geometry as, respectively:

$$\tan\beta_R=\frac{f}{X_R},\qquad \tan\beta_L=\frac{f}{X_L}$$

On this basis, the depth information of the right eye R and the left eye L obtained by the (not shown) depth-of-field acquisition device 151b gives the distances DR and DL of the two eyes from the black-and-white camera plane MCP / display screen plane DLP. Accordingly, the angle α between the projection of the binocular line in the XZ plane and the X axis, and the interpupillary distance P, are respectively:

$$\tan\alpha=\frac{D_L-D_R}{X_L D_L/f-X_R D_R/f},\qquad P=\sqrt{\left(\frac{X_L D_L-X_R D_R}{f}\right)^{2}+\left(D_L-D_R\right)^{2}}$$
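These relations can be checked numerically. The following Python sketch follows the pinhole model of FIG. 3; the variable names mirror the symbols above and are illustrative only:

```python
import math

def eye_positions_xz(XR: float, XL: float, f: float, DR: float, DL: float):
    """Recover the eyes' positions in the camera's XZ plane from their image
    coordinates in the focal plane FP and their depths, per FIG. 3."""
    # inclination of each eye-to-lens-center line relative to the X axis
    beta_R = math.atan2(f, XR)
    beta_L = math.atan2(f, XL)
    # spatial X coordinates follow from similar triangles: x = D * X / f
    xR, xL = DR * XR / f, DL * XL / f
    # angle of the binocular line's XZ projection relative to the X axis
    alpha = math.atan2(DL - DR, xL - xR)
    # interpupillary distance (its projection into the XZ plane)
    P = math.hypot(xL - xR, DL - DR)
    return (xR, DR), (xL, DL), beta_R, beta_L, alpha, P

# A user looking straight at the screen (DR == DL) yields alpha == 0,
# and P then equals the horizontal separation of the eyes.
```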
The above calculation method and mathematical expressions are only illustrative; those skilled in the art can conceive of other calculation methods and expressions to obtain the required spatial position of the eyes, and can also, where necessary, transform the coordinate system of the black-and-white camera into that of the display device or its display screen.
In some embodiments, when the distances DR and DL are unequal and the angle α is non-zero, the user may be considered to view the display screen plane DLP obliquely; when DR and DL are equal and the angle α is zero, the user may be considered to view the plane DLP head-on. In other embodiments, a threshold may be set for the angle α, and when α does not exceed the threshold the user may be considered to view the display screen plane DLP head-on.
In some embodiments, the eye positioning device 150 includes an eye positioning data interface 153 configured to transmit eye spatial position information, including but not limited to the inclination angles, included angles and spatial coordinates described above. The eye spatial position information can be used to provide the user with a targeted or customized 3D display image.
By way of explanation and not limitation, the viewing angle, for example the angle of the line connecting the user's binocular center with the display screen center DLC relative to the horizontal direction (X axis) or the vertical direction (Y axis), is transmitted to the 3D processing device 130 through the eye positioning data interface 153. Based on the received viewing angle, the 3D processing device 130 generates, in a follow-up manner, a 3D display image consistent with that angle, so as to present to the user the display object as observed from different angles.
Exemplarily, the angle of the line connecting the binocular center with the display center DLC relative to the horizontal direction (X axis) yields a follow-up effect in the horizontal direction, and the angle relative to the vertical direction (Y axis) yields a follow-up effect in the vertical direction.
By way of explanation and not limitation, the spatial coordinates of the user's left and right eyes are transmitted to the 3D processing device 130 through the eye positioning data interface 153. Based on the received spatial coordinates, the 3D processing device 130 determines the viewpoints, among those provided by the multi-viewpoint 3D display screen 110, at which the user's eyes are located, and renders the corresponding sub-pixels based on the video frame of the 3D signal.
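A minimal sketch of this viewpoint determination and sub-pixel rendering follows, building on the composite-pixel sketch above. The uniform angular mapping is an assumption for illustration; a real screen derives the mapping from its grating geometry:

```python
import math

def viewpoint_of(eye_x: float, eye_z: float, num_viewpoints: int = 6,
                 half_angle_deg: float = 15.0) -> int:
    """Map an eye's (X, Z) position to one of the screen's viewpoints V1..V6
    (returned as index 0..5), assuming a uniform angular split."""
    angle = math.degrees(math.atan2(eye_x, eye_z))
    t = (angle + half_angle_deg) / (2.0 * half_angle_deg)
    return max(0, min(num_viewpoints - 1, int(t * num_viewpoints)))

def render_frame(display, video_frame, left_eye, right_eye):
    """Render, in every composite sub-pixel, only the sub-pixels corresponding
    to the viewpoints at which the user's eyes are located."""
    views = {viewpoint_of(*left_eye), viewpoint_of(*right_eye)}
    for pixel_row, source_row in zip(display, video_frame):
        for cpx, (r, g, b) in zip(pixel_row, source_row):
            for channel, value in zip(cpx.channels, (r, g, b)):
                for v in views:
                    channel.subpixels[v] = value
```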
Exemplarily, when it is determined from the eye spatial position information that each of the user's eyes corresponds to one viewpoint, the sub-pixels corresponding to these two viewpoints among the multiple composite sub-pixels of each composite pixel are rendered based on the video frame of the 3D signal; the sub-pixels corresponding to the viewpoints adjacent to these two viewpoints may additionally be rendered.
Exemplarily, when it is determined from the eye spatial position information that each of the user's eyes lies between two viewpoints, the sub-pixels corresponding to these four viewpoints among the multiple composite sub-pixels of each composite pixel are rendered based on the video frame of the 3D signal.
Exemplarily, when it is determined from the eye spatial position information that at least one of the user's eyes has moved, the sub-pixels corresponding to the new predetermined viewpoint among the multiple composite sub-pixels of each composite pixel may be rendered based on the next video frame of the 3D signal.
Exemplarily, when it is determined from the eye spatial position information that there is more than one user, the sub-pixels corresponding to the viewpoints at which each user's eyes are located may be rendered based on the video frame of the 3D signal.
In some embodiments, the user's viewing angle and viewpoint position are determined separately, and a 3D display image that changes with both is provided accordingly, improving the viewing experience.
In other embodiments, the eye spatial position information may also be transmitted directly to the processor 101 of the 3D display device 100, and the 3D processing device 130 receives/reads it from the processor 101 through the eye positioning data interface 153.
In some embodiments, the black-and-white camera 151a is configured to capture a black-and-white image sequence comprising multiple black-and-white images arranged in temporal order.
In some embodiments, the eye positioning image processor 152 includes a buffer 156 and a comparator 157. The buffer 156 is configured to buffer multiple temporally ordered black-and-white images of the sequence; the comparator 157 is configured to compare black-and-white images captured at successive times in the sequence. By comparison it can be judged, for example, whether the spatial position of the eyes has changed or whether the eyes are still within the viewing range.
In some embodiments, the eye positioning image processor 152 further includes a judger (not shown) configured to, based on the comparison result of the comparator, when the presence of eyes is not recognized in the current black-and-white image of the sequence but is recognized in a preceding or subsequent one, take the eye spatial position information determined from that preceding or subsequent black-and-white image as the current eye spatial position information. Such a situation arises, for example, when the user briefly turns his head, and the face and eyes momentarily fail to be recognized.
Exemplarily, several black-and-white images of the sequence are stored in the buffer segment of the buffer 156. In some cases the face and eyes cannot be recognized from the buffered current black-and-white image, yet can be recognized from a buffered earlier or later one. In that case, the eye spatial position information determined from a black-and-white image captured after, i.e. later than, the current one may be used as the current eye spatial position information; alternatively, that determined from an earlier one may be used. In addition, the eye spatial position information determined from such earlier and later images in which the face and eyes can be recognized may be averaged, fitted, interpolated or otherwise processed, with the result used as the current eye spatial position information.
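The buffer/comparator/judger fallback can be sketched as follows (a minimal Python illustration; the averaging, fitting and interpolation variants mentioned above are omitted, and all names are illustrative):

```python
from collections import deque
from typing import Deque, Optional, Tuple

EyePos = Tuple[float, float, float]  # (X, Y, Z) eye spatial position

class EyePositionJudger:
    """When eyes are not recognized in the current black-and-white image,
    reuse the position from the nearest buffered image in which they were."""

    def __init__(self, buffer_len: int = 8):
        self.history: Deque[Optional[EyePos]] = deque(maxlen=buffer_len)

    def update(self, detected: Optional[EyePos]) -> Optional[EyePos]:
        self.history.append(detected)
        if detected is not None:
            return detected        # eyes recognized: use them directly
        for earlier in reversed(self.history):
            if earlier is not None:
                return earlier     # fall back to an earlier detection
        return None                # nothing usable buffered yet
```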
In some embodiments, the black-and-white camera 151a is configured to capture the black-and-white image sequence at a frequency of 24 frames per second or more, for example at 30 frames per second or at 60 frames per second.
In some embodiments, the black-and-white camera 151a is configured to capture images at the same frequency as the refresh rate of the display screen of the 3D display device.
An embodiment of the present disclosure may further provide an eye positioning method implemented with the eye positioning device of the above embodiments.
Referring to FIG. 4, in some embodiments the eye positioning method includes:
S401: capture a black-and-white image of the user's face;
S402: acquire depth information of the face;
S403: determine the spatial position of the eyes based on the captured black-and-white image and the depth information.
Referring to FIG. 5, in some embodiments the eye positioning method includes:
S501: capture a black-and-white image of the user's face;
S502: acquire depth information of the face;
S503: recognize the presence of eyes based on the captured black-and-white image;
S504: determine the spatial position of the eyes based on the captured black-and-white image and the depth information;
S505: transmit eye spatial position information containing the spatial position of the eyes.
Referring to FIG. 6, in some embodiments the eye positioning method includes:
S601: capture a black-and-white image sequence including black-and-white images of the user's face;
S602: buffer multiple black-and-white images of the sequence;
S603: compare earlier and later black-and-white images in the sequence;
S604: acquire depth information of the face;
S605: when the comparison finds that the presence of eyes is not recognized in the current black-and-white image but is recognized in a preceding or subsequent one, take the eye spatial position information determined from that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
An embodiment of the present disclosure may further provide a 3D display method applicable to the 3D display device of the above embodiments; the 3D display device includes a multi-viewpoint 3D display screen, which includes multiple sub-pixels corresponding to multiple viewpoints.
Referring to FIG. 7, in some embodiments the 3D display method includes:
S701: determine the spatial position of the user's eyes;
S702: determine the viewpoint according to the spatial position of the user's eyes, and render the sub-pixels corresponding to the viewpoint based on the 3D signal.
An embodiment of the present disclosure provides a 3D display terminal 800. Referring to FIG. 8, the 3D display terminal 800 includes a processor 814, a memory 811 and a multi-viewpoint 3D display screen 810, and may further include a communication interface 812 and a bus 813. The multi-viewpoint 3D display screen 810, the processor 814, the communication interface 812 and the memory 811 communicate with one another through the bus 813. The communication interface 812 can be used for information transmission. The processor 814 can call logic instructions in the memory 811 to execute the 3D display method of the above embodiments.
Furthermore, the logic instructions in the memory 811 can be implemented in the form of a software functional unit and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
The memory 811, as a computer-readable storage medium, can be configured to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods of the embodiments of the present disclosure. By running the program instructions/modules stored in the memory 811, the processor 814 executes functional applications and data processing, i.e. implements the eye positioning method and/or the 3D display method of the above method embodiments.
The memory 811 may include a program storage area and a data storage area: the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 811 may include a high-speed random access memory and may also include a non-volatile memory.
The computer-readable storage medium provided by embodiments of the present disclosure stores computer-executable instructions configured to execute the above eye positioning method and 3D display method.
The computer program product provided by embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the above eye positioning method and 3D display method.
The technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product stored in a storage medium, including one or more instructions to enable a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium can be a non-transitory storage medium, including a USB flash drive, a removable hard disk, read-only memory, random access memory, a magnetic disk or an optical disk, and other media that can store program code, or it can be a transitory storage medium.
Those skilled in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present disclosure.
In the embodiments disclosed herein, the disclosed methods and products (including but not limited to apparatuses and devices) can be implemented in other ways. For example, the apparatus or device embodiments described above are merely illustrative; the division of units may be merely a logical function division, and in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to implement the embodiment. The functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
In addition, in the descriptions corresponding to the flowcharts in the drawings, the operations or steps corresponding to different boxes may also occur in an order different from that disclosed in the description, and sometimes there is no specific order between different operations or steps.
Claims (22)
- An eye positioning device, comprising: an eye locator, including a black-and-white camera configured to capture a black-and-white image of a user's face and a depth-of-field acquisition device configured to acquire depth information of the face; and an eye positioning image processor configured to determine a spatial position of the eyes based on the black-and-white image and the depth information.
- The eye positioning device according to claim 1, wherein the eye positioning image processor is further configured to recognize the presence of the eyes based on the black-and-white image.
- The eye positioning device according to claim 1, further comprising an eye positioning data interface configured to transmit eye spatial position information containing the spatial position of the eyes.
- The eye positioning device according to claim 1, wherein the depth-of-field acquisition device is a structured light camera or a TOF camera.
- The eye positioning device according to claim 1, further comprising a viewing angle determining device configured to calculate the user's viewing angle relative to a 3D display device.
- The eye positioning device according to any one of claims 1 to 5, wherein the black-and-white camera is configured to capture a black-and-white image sequence.
- The eye positioning device according to claim 6, wherein the eye positioning image processor comprises: a buffer configured to buffer multiple black-and-white images in the black-and-white image sequence; a comparator configured to compare earlier and later black-and-white images in the sequence; and a judger configured to, when the comparator finds by comparison that the presence of eyes is not recognized in the current black-and-white image of the sequence but is recognized in a preceding or subsequent black-and-white image, take the eye spatial position information determined based on that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
- A 3D display device, comprising: a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints; the eye positioning device according to any one of claims 1 to 7, configured to determine the spatial position of a user's eyes; and a 3D processing device configured to determine a viewpoint according to the spatial position of the user's eyes and render the sub-pixels corresponding to the viewpoint based on a 3D signal.
- The 3D display device according to claim 8, wherein the multi-viewpoint 3D display screen includes multiple composite pixels, each of the multiple composite pixels includes multiple composite sub-pixels, and each of the multiple composite sub-pixels is composed of multiple sub-pixels corresponding to the multiple viewpoints.
- The 3D display device according to claim 8 or 9, wherein the 3D processing device and the eye positioning device are communicatively connected through an eye positioning data interface.
- The 3D display device according to claim 8 or 9, further comprising: a 3D photographing device configured to capture 3D images, the 3D photographing device including a depth-of-field camera and at least two color cameras.
- The 3D display device according to claim 11, wherein the eye positioning device is integrated with the 3D photographing device.
- The 3D display device according to claim 11, wherein the 3D photographing device is front-facing on the 3D display device.
- An eye positioning method, comprising: capturing a black-and-white image of a user's face; acquiring depth information of the face; and determining a spatial position of the eyes based on the black-and-white image and the depth information.
- The eye positioning method according to claim 14, further comprising: recognizing the presence of the eyes based on the black-and-white image.
- The eye positioning method according to claim 14, further comprising: transmitting eye spatial position information containing the spatial position of the eyes.
- The eye positioning method according to claim 14, further comprising: capturing a black-and-white image sequence including the black-and-white image.
- The eye positioning method according to claim 17, further comprising: buffering multiple black-and-white images in the black-and-white image sequence; comparing earlier and later black-and-white images in the sequence; and, when the comparison finds that the presence of eyes is not recognized in the current black-and-white image of the sequence but is recognized in a preceding or subsequent black-and-white image, taking the eye spatial position information determined based on that preceding or subsequent black-and-white image and the acquired depth information as the current eye spatial position information.
- A 3D display method, comprising: determining the spatial position of a user's eyes; determining a viewpoint according to the spatial position of the user's eyes, and rendering the sub-pixels corresponding to the viewpoint based on a 3D signal; wherein the 3D display device includes a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints.
- A 3D display terminal, including a processor, a memory storing program instructions, and a multi-viewpoint 3D display screen, the processor being configured to execute the 3D display method according to claim 19 when executing the program instructions.
- A computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the method according to any one of claims 14 to 19.
- A computer program product including a computer program stored on a computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute the method according to any one of claims 14 to 19.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911231165.9A CN112929639B (zh) | 2019-12-05 | 2019-12-05 | Human eye tracking device and method, and 3D display device, method and terminal |
CN201911231165.9 | 2019-12-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021110035A1 true WO2021110035A1 (zh) | 2021-06-10 |
Family
ID=76161253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/133329 WO2021110035A1 (zh) | Eye positioning device and method, and 3D display device, method and terminal | 2019-12-05 | 2020-12-02 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN112929639B (zh) |
TW (1) | TW202123693A (zh) |
WO (1) | WO2021110035A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115278201A (zh) * | 2022-07-29 | 2022-11-01 | 北京芯海视界三维科技有限公司 | Processing apparatus and display device |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI800959B (zh) | 2021-10-22 | 2023-05-01 | 宏碁股份有限公司 | Eyeball tracking method and eyeball tracking device |
CN114079765B (zh) * | 2021-11-17 | 2024-05-28 | 京东方科技集团股份有限公司 | Image display method, apparatus and system |
TWI806379B (zh) | 2022-01-24 | 2023-06-21 | 宏碁股份有限公司 | Feature point position detection method and electronic device |
CN114979614A (zh) * | 2022-05-16 | 2022-08-30 | 北京芯海视界三维科技有限公司 | Display mode determination method and display mode determination apparatus |
CN115567698A (zh) * | 2022-09-23 | 2023-01-03 | 立观科技(盐城)有限公司 | Device and implementation method for realizing horizontal and vertical 3D display |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120062556A1 (en) * | 2010-09-13 | 2012-03-15 | Sumihiko Yamamoto | Three-dimensional image display apparatus, three-dimensional image processor, three-dimensional image display method, and computer program product |
CN104079919A (zh) * | 2009-11-04 | 2014-10-01 | 三星电子株式会社 | High-density multi-view image display system and method using active sub-pixel rendering |
CN104519334A (zh) * | 2013-09-26 | 2015-04-15 | Nlt科技股份有限公司 | Stereoscopic image display device, terminal device, stereoscopic image display method, and program therefor |
CN104536578A (zh) * | 2015-01-13 | 2015-04-22 | 京东方科技集团股份有限公司 | Control method and apparatus for a naked-eye 3D display device, and naked-eye 3D display device |
CN106331688A (zh) * | 2016-08-23 | 2017-01-11 | 湖南拓视觉信息技术有限公司 | Three-dimensional display system and method based on visual tracking technology |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101002253A (zh) * | 2004-06-01 | 2007-07-18 | 迈克尔·A.·韦塞利 | Horizontal perspective simulator |
CN108616736A (zh) * | 2016-12-29 | 2018-10-02 | 深圳超多维科技有限公司 | Tracking and positioning method and device for stereoscopic display |
- 2019-12-05 (CN): application CN201911231165.9A, patent CN112929639B/zh, status Active
- 2020-12-02 (WO): application PCT/CN2020/133329, publication WO2021110035A1/zh, status Application Filing
- 2020-12-04 (TW): application TW109142827A, publication TW202123693A/zh, status unknown
Also Published As
Publication number | Publication date |
---|---|
CN112929639A (zh) | 2021-06-08 |
CN112929639B (zh) | 2024-09-10 |
TW202123693A (zh) | 2021-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021110035A1 (zh) | Eye positioning device and method, and 3D display device, method and terminal | |
US10838206B2 (en) | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking | |
CN211128024U (zh) | 3D display device | |
WO2021110038A1 (zh) | 3D display device and 3D image display method | |
US20170127045A1 (en) | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
US9848184B2 (en) | Stereoscopic display system using light field type data | |
KR20160121798A (ko) | 직접적인 기하학적 모델링이 행해지는 hmd 보정 | |
CN108093244B (zh) | Remote follow-up stereoscopic vision system | |
CN108885342A (zh) | Wide-baseline stereo for low-latency rendering | |
US20160021363A1 (en) | Enhancing the Coupled Zone of a Stereoscopic Display | |
WO2018032841A1 (zh) | Method, device and system for rendering three-dimensional images | |
CN105430368A (zh) | Two-viewpoint stereoscopic image synthesis method and system | |
US20230316810A1 (en) | Three-dimensional (3d) facial feature tracking for autostereoscopic telepresence systems | |
TWI450025B (zh) | A device that can simultaneously capture multi-view 3D images | |
CN110969706B (zh) | Augmented reality device, image processing method and system thereof, and storage medium | |
CN211531217U (zh) | 3D terminal | |
US10679589B2 (en) | Image processing system, image processing apparatus, and program for generating anamorphic image data | |
US20170257614A1 (en) | Three-dimensional auto-focusing display method and system thereof | |
US11388391B2 (en) | Head-mounted display having an image sensor array | |
CN112929638B (zh) | Eye positioning method and apparatus, and multi-viewpoint naked-eye 3D display method and device | |
CN214756700U (zh) | 3D display device | |
JP2005174148A (ja) | Imaging apparatus and method, and imaging system | |
CN114020150A (zh) | Image display method and apparatus, electronic device, and medium | |
WO2021110032A1 (zh) | Multi-viewpoint 3D display device and 3D image display method | |
JP2024062935A (ja) | Method and apparatus for generating stereoscopically displayable content | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20895783; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20895783; Country of ref document: EP; Kind code of ref document: A1