CN103905808A - Device and method used for three-dimension display and interaction. - Google Patents
- Publication number
- CN103905808A CN103905808A CN201210579928.0A CN201210579928A CN103905808A CN 103905808 A CN103905808 A CN 103905808A CN 201210579928 A CN201210579928 A CN 201210579928A CN 103905808 A CN103905808 A CN 103905808A
- Authority
- CN
- China
- Prior art keywords
- dimensional
- image
- unit
- display units
- capture unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Provided is a device for three-dimensional display and interaction. The device includes: a display and capture unit configured to convert three-dimensional display content provided by a control unit into a three-dimensional virtual object, display the virtual object in three-dimensional space, and obtain depth information of an interactive gesture that a user makes in the three-dimensional space with an interactive device; and the control unit, configured to obtain the depth information of the interactive gesture from the display and capture unit, generate a depth map corresponding to the gesture, analyze collisions between the three-dimensional virtual object and the interactive device based on the generated depth map, update the three-dimensional display content according to the result of the collision analysis, and provide the updated content to the display and capture unit so as to update the display of the virtual object in the three-dimensional space.
Description
Technical field
The present invention relates to three-dimensional display technology and, more particularly, to a device and method capable of three-dimensional display and interaction.
Background art
Three-dimensional (3D) technology has shown enormous development potential in many fields (such as consumer electronics and medical imaging). At present, a variety of 3D display technologies exist, for example glasses-based 3D display and glasses-free 3D display. Since glasses-based 3D display tends to cause eye fatigue, glasses-free 3D display has attracted more attention because of its convenience.
Glasses-free 3D display technologies include holography, autostereoscopic imaging, panoramic (integral) imaging, and so on. Holographic methods require coherent light and have difficulty recording and reconstructing larger objects placed at a distance. Autostereoscopic imaging realizes 3D display with lenticular lenses or parallax barriers; however, it provides only horizontal parallax, and the convergence angle of the viewer's eyes may differ from the focal distance of the image, so eye fatigue can occur. For 3D images with both horizontal and vertical parallax, the panoramic-imaging 3D display method is generally adopted: a real 3D object is reconstructed, and the convergence angle of the eyes viewing the 3D object matches the focus of that object, so eye fatigue is not caused.
Besides the 3D display technologies described above, 3D interaction is another application of 3D technology. 3D interaction is a form of human-machine interaction in which the user moves and interacts in 3D space. The space used for interaction may be a real physical space, a virtual space simulated by a computer, or a combination of both. When a real space is used to input interaction data, the user typically makes movements to issue commands to a machine equipped with a device that detects the 3D position of the user's interactive actions. When a simulated virtual space is used, the simulated 3D virtual space can be projected into the real environment by an output device.
Glasses-free 3D display is very useful for 3D interaction, because the user can manipulate 3D objects more easily without wearing any glasses. In addition, keeping the convergence angle of the eyes consistent with the focus of the object is important for avoiding eye discomfort. Panoramic imaging therefore offers the possibility of realizing true 3D interaction without causing eye fatigue: the displayed 3D object can be manipulated by an interactive device (for example, the user's hand or any other device operated by the user), and the motion of the interactive device can be sensed in depth and analyzed and processed by a 3D interaction device.
Summary of the invention
According to an aspect of the present invention, a device for three-dimensional display and interaction is provided. The device includes: a display and capture unit configured to convert three-dimensional display content provided by a control unit into a three-dimensional virtual object, display the virtual object in three-dimensional space, and obtain depth information of an interactive gesture that the user makes in the three-dimensional space with an interactive device; and the control unit, configured to obtain the depth information of the gesture from the display and capture unit, generate a depth map corresponding to the gesture, analyze collisions between the three-dimensional virtual object and the interactive device based on the generated depth map, update the three-dimensional display content according to the result of the collision analysis, and provide the updated content to the display and capture unit so as to update the display of the virtual object in the three-dimensional space.
The display and capture unit may include: an image display unit configured to display the three-dimensional display content provided by the control unit; a three-dimensional imaging unit, arranged in front of the image display unit, configured to convert the displayed three-dimensional display content into a three-dimensional virtual object shown in three-dimensional space; and a capture unit, arranged at the back side of the three-dimensional imaging unit, configured to obtain, through the three-dimensional imaging unit, the depth information of the interactive gesture the user makes in the three-dimensional space with the interactive device.
The device for three-dimensional display and interaction may further include a lighting unit, arranged at the back side of the image display unit, for providing illumination light to the image display unit.
The three-dimensional imaging unit may be a lenticular lens or a microlens array, or a liquid crystal display (LCD) panel that, under the control of the control unit, displays a parallax barrier, a pinhole array, or a combination of the two.
The lighting unit may include a backlight unit arranged parallel to the image display unit at its back side, so that illumination light is provided directly to the image display unit.

The lighting unit may alternatively include: a backlight unit arranged perpendicular to the image display unit at the top or bottom of its back side; and a light reflection unit, arranged at the back side of the image display unit, for reflecting the light emitted by the vertically arranged backlight unit toward the image display unit, thereby providing it with illumination light.

The light reflection unit may be a transflective device.

The capture unit may be arranged behind the transflective device, so that light emitted from the interactive device making the gesture and passing through the three-dimensional imaging unit can reach the capture unit through the transflective device, allowing the capture unit to obtain the depth information of the gesture.

The transflective device may be a beam splitter.

The light reflection unit may alternatively be a total-reflection device that reflects incident light.

The capture unit and the backlight unit may both be arranged perpendicular to the image display unit at its back side, in which case the total-reflection device reflects the light emitted from the interactive device making the gesture and passing through the three-dimensional imaging unit toward the capture unit, so that the capture unit obtains the depth information of the gesture.
The image display unit may be an LCD panel.

The capture unit may be a CCD sensor or a depth-sensing camera separate from the image display unit.

The image display unit and the capture unit may be integrated into a hybrid liquid crystal display-charge coupled device (LCD-CCD) panel in which an LCD panel and multiple CCD sensors are integrated. The LCD panel in the hybrid LCD-CCD panel may display the three-dimensional display content provided by the control unit, and the CCD sensors in the hybrid LCD-CCD panel may obtain the depth information of the gesture through the three-dimensional imaging unit.
The three-dimensional display content provided by the control unit may be multiple elemental images corresponding to the object to be displayed in three-dimensional space, and the depth information obtained by the display and capture unit may be multiple elemental images related to the gesture.

When the capture unit is a depth-sensing camera arranged at the back side of the image display unit, the depth information of the gesture obtained by the capture unit may also contain the three-dimensional display content. The control unit may obtain the depth map corresponding to the gesture by removing the three-dimensional display content from the depth information.
According to another aspect of the present invention, a method for three-dimensional display and interaction is provided. The method includes: (a) converting three-dimensional display content into a three-dimensional virtual object and displaying the virtual object in three-dimensional space; (b) obtaining depth information of an interactive gesture the user makes in the three-dimensional space with an interactive device; (c) generating a depth map corresponding to the gesture from the obtained depth information; (d) analyzing collisions between the three-dimensional virtual object and the interactive device based on the generated depth map; and (e) updating, according to the result of the collision analysis, the three-dimensional display content used in step (a), so as to update the virtual object displayed in the three-dimensional space.
The three-dimensional display content may be multiple elemental images corresponding to the object displayed in three-dimensional space, and the depth information of the gesture may be multiple elemental images related to the gesture.
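The steps (a) to (e) above can be sketched as one pass of a simple control loop. This is a hedged illustration only; the patent does not prescribe an implementation, and every name, the dictionary object representation, and the "push the object left on collision" response are placeholders invented for this sketch:

```python
def build_depth_map(elemental_images):
    """Toy stand-in for step (c): average per-sensor depth estimates pixel-wise."""
    n = len(elemental_images)
    return [sum(px) / n for px in zip(*elemental_images)]

def analyze_collision(object_depth, gesture_map):
    """Step (d): a collision occurs when the gesture reaches the object's depth."""
    return any(g <= object_depth for g in gesture_map)

def interaction_loop(content, elemental_images):
    """One pass through steps (a)-(e); `content` is a toy object with a
    display depth and an x position, re-displayed after the update."""
    depth_map = build_depth_map(elemental_images)       # (b)+(c)
    if analyze_collision(content["depth"], depth_map):  # (d)
        content = dict(content, x=content["x"] - 1)     # (e) push the object left
    return content                                      # (a) re-display updated content

obj = {"depth": 20.0, "x": 5}
sensors = [[25.0, 19.0], [25.0, 21.0]]  # two sensors' per-pixel depth estimates
new_obj = interaction_loop(obj, sensors)
```

In this sketch the averaged depth map is [25.0, 20.0], the second sample reaches the object's depth, and the object is shifted one unit to the left.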
Brief description of the drawings
These and other objects and features of the present invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic block diagram of a device for three-dimensional display and interaction according to an exemplary embodiment;
Fig. 2 is a schematic block diagram of a display and capture unit according to an exemplary embodiment;
Fig. 3 is a structural block diagram of a device for three-dimensional display and interaction according to an exemplary embodiment of the present invention;
Fig. 4A to Fig. 4F are structural schematic diagrams of a three-dimensional imaging unit according to exemplary embodiments of the present invention;
Fig. 5A and Fig. 5B are structural block diagrams of a device for three-dimensional display and interaction according to another exemplary embodiment of the present invention;
Fig. 6 is a structural block diagram of a device for three-dimensional display and interaction according to another exemplary embodiment of the present invention;
Fig. 7 is a structural block diagram of a device for three-dimensional display and interaction according to another exemplary embodiment of the present invention;
Fig. 8 is a flowchart of a method for three-dimensional display and interaction according to an exemplary embodiment of the present invention.
In the drawings, the same reference numerals denote the same components throughout. Hereinafter, embodiments are described with reference to the accompanying drawings to explain the present invention.
Detailed description of embodiments
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
Fig. 1 is a schematic block diagram of a device 100 for three-dimensional display and interaction according to an exemplary embodiment.

As shown in Fig. 1, the device 100 includes a display and capture unit 110 and a control unit 120.
The display and capture unit 110 may be configured to convert the three-dimensional display content provided by the control unit 120 into a three-dimensional virtual object and display the virtual object in three-dimensional space. It should be understood that, in the present invention, the three-dimensional display content provided by the control unit 120 consists of multiple elemental images (EI) corresponding to the object to be displayed in three-dimensional space. In addition, the display and capture unit 110 may also be configured to obtain the depth information of the interactive gesture the user makes in the three-dimensional space with an interactive device (for example, a posture the user makes with the device), that is, multiple elemental images related to the gesture.
For instance, after the display and capture unit 110 forms and displays a three-dimensional virtual object A in three-dimensional space, if the user uses an interactive device such as a finger to make a motion that flicks the object A horizontally to the left, the control unit 120 can obtain the depth information of the gesture (for example, the leftward flicking finger posture) through the display and capture unit 110 and thereby generate a depth map corresponding to the gesture. The depth map is then used to estimate whether a collision has occurred between the user's finger and the virtual object A. After determining that a collision has occurred, the control unit 120 can update the three-dimensional display content and provide the updated content to the display and capture unit 110 to refresh the display of the virtual object A in the three-dimensional space, so that the updated object A no longer contacts the user's finger, realizing the action of flicking the object A to the left.
However, it should be understood that the above example is given merely for ease of understanding, and the invention is not limited to it. For example, instead of moving the virtual object A horizontally to the left, the control unit 120 may remove the part of object A that overlaps the finger, so that when the display of object A is refreshed in the three-dimensional space, it appears eroded by the finger.
Since the three-dimensional position and posture of both the virtual object and the interactive device can be obtained by the display and capture unit 110, the control unit 120 can analyze the degree of overlap between them and then determine, according to the deflection the virtual object would undergo under the resulting external force, the display position and posture of the virtual object corresponding to the posture of the interactive device, thereby obtaining the collision result between the interactive device and the virtual object. The control unit 120 can then update the display position and posture of the virtual object according to the collision result. The methods of analyzing collisions between a three-dimensional virtual object and an interactive device, and of updating the three-dimensional display content according to the analysis result, are known to those skilled in the art and will not be described in detail here.
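Since the patent leaves the collision analysis to the skilled reader, here is a minimal sketch of one plausible approach (not the patent's own algorithm; the flattened depth-map representation, tolerance, and threshold values are all assumptions for illustration): compare the gesture depth map against the object's surface depth per pixel, and shift the object when the overlap is large enough to count as a collision.

```python
def detect_collision(object_depth, gesture_depth, tolerance=0.5):
    """Count pixels where the gesture reaches at least as deep as the
    object surface; None in either map means nothing is present there."""
    overlap = 0
    for od, gd in zip(object_depth, gesture_depth):
        if od is not None and gd is not None and gd <= od + tolerance:
            overlap += 1
    return overlap

def update_position(position, overlap, threshold=3, step=1.0):
    """Shift the virtual object away from the gesture when the overlap
    exceeds the collision threshold."""
    if overlap >= threshold:
        return position - step  # move left along x, as in the flicking example
    return position

# flattened depth maps over the same pixel grid (smaller = closer to viewer)
obj_map = [5.0, 5.0, 5.0, None, None]
hand_map = [4.8, 5.2, 4.9, 4.0, None]

hits = detect_collision(obj_map, hand_map)
new_x = update_position(10.0, hits)
```

Here three pixels of the finger reach the object's surface, so the object's x position is shifted from 10.0 to 9.0.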
Fig. 2 is a schematic block diagram of the display and capture unit 110 according to an exemplary embodiment.

As shown in Fig. 2, the display and capture unit 110 may include an image display unit 111, a three-dimensional imaging unit 112, a capture unit 113, and a lighting unit 114.
The image display unit 111 may be configured to display the three-dimensional display content provided by the control unit 120. The three-dimensional imaging unit 112 may be arranged in front of the image display unit 111 and configured to convert the displayed three-dimensional display content into a three-dimensional virtual object shown in three-dimensional space. The capture unit 113 may be arranged at the back side of the three-dimensional imaging unit 112 and configured to obtain, through the three-dimensional imaging unit 112, the depth information of the interactive gesture the user makes in the three-dimensional space with the interactive device. The lighting unit 114 may be arranged at the back side of the image display unit 111 to provide it with illumination light.
Fig. 2 is only a schematic diagram of the display and capture unit 110 of the present invention. Various structures realizing the display and capture unit 110, together with the methods of displaying the three-dimensional virtual object and obtaining the depth information, are explained in more detail below with reference to Fig. 3 to Fig. 7.
Fig. 3 is a structural block diagram of the device 100 for three-dimensional display and interaction according to an exemplary embodiment of the present invention.

Referring to Fig. 3, the image display unit 111 and the capture unit 113 may be integrated into a hybrid liquid crystal display-charge coupled device (LCD-CCD) panel in which an LCD panel and multiple CCD sensors are integrated. In the hybrid LCD-CCD panel, the LCD panel displays the three-dimensional display content provided by the control unit 120, while the integrated CCD sensors obtain, through the three-dimensional imaging unit 112, the depth information of the interactive gesture the user makes with an interactive device 115 (for example, a hand) and provide the depth information to the control unit 120.
In the hybrid LCD-CCD panel shown in Fig. 3, the CCD sensors may be arranged horizontally in rows within the LCD panel. The LCD pixels of the LCD panel display the three-dimensional display content provided by the control unit 120; specifically, under the control of the control unit 120, the LCD panel displays the elemental images provided by the control unit at pre-assigned positions on the panel according to a predefined rule (that is, the display positions of the elemental images on the image display unit are determined in advance). The CCD pixels embedded in rows between the LCD pixels (that is, the CCD sensors) obtain the depth information of the user's gesture; specifically, each CCD sensor captures one elemental image related to the gesture and sends it to the control unit 120, which, after receiving the elemental images from the multiple CCD sensors, uses them to generate the depth map corresponding to the gesture. It should be understood that the method of generating a depth map from elemental images is known to those skilled in the art; for brevity, it is not described in detail here.
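Although the patent treats depth-map generation from elemental images as known art, the underlying idea is essentially triangulation: a feature of the gesture appears shifted between the elemental images captured by two sensors, and that shift (the disparity), together with the sensor spacing and the lens focal length, determines the depth. A toy sketch under those assumptions (the function name, units, and sample numbers are invented for illustration):

```python
def depth_from_disparity(x_left, x_right, baseline, focal_length):
    """Classic two-view triangulation: depth = focal_length * baseline / disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("the feature must shift between the two elemental images")
    return focal_length * baseline / disparity

# a fingertip seen at pixel 120 in one elemental image and pixel 100 in another,
# captured by sensors 20 mm apart behind a lens of focal length 5 mm
z = depth_from_disparity(120, 100, baseline=20.0, focal_length=5.0)
```

A full depth map would repeat this per matched feature across all sensor pairs; the sketch shows only the single-point geometry.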
However, it should be understood that the arrangement of the CCD sensors in the hybrid LCD-CCD panel is not limited to that shown in Fig. 3; the CCD sensors may be arranged in the LCD-CCD panel in various other ways that realize the functions described above.

In addition, as shown in Fig. 3, the lighting unit 114 may be a backlight unit arranged parallel to the image display unit 111 at its back side, providing illumination light directly to the image display unit 111. However, the invention is not limited to this; for example, as shown later in Fig. 6 and Fig. 7, the lighting unit 114 may also consist of a backlight unit arranged perpendicular to the image display unit 111 at the top or bottom of its back side, together with a light reflection unit. In that case, the light reflection unit may be arranged at the back side of the image display unit 111 at a predetermined angle to the backlight unit, so as to reflect the light emitted by the vertically arranged backlight unit toward the image display unit 111 and thereby provide it with illumination light. It should be understood that the lighting unit 114 can also be arranged in various other ways known to those skilled in the art to provide illumination light for the image display unit 111.
The structure of the three-dimensional imaging unit 112, and the methods of forming the three-dimensional virtual object and obtaining the depth information of the gesture by using it, are described in detail below with reference to Fig. 4A to Fig. 4F.

Fig. 4A to Fig. 4F are structural schematic diagrams of the three-dimensional imaging unit 112 according to exemplary embodiments of the present invention. In Fig. 4A to Fig. 4F, for convenience of explanation, the image display unit 111 and capture unit 113 arranged at the back side of the three-dimensional imaging unit 112 are shown as realized by a hybrid LCD-CCD panel (it should be understood that the LCD pixels and CCDs in Fig. 4A to Fig. 4C are parts of the hybrid LCD-CCD panel given as an example). However, the invention is not limited to this; the image display unit 111 and capture unit 113 at the back side of the three-dimensional imaging unit 112 may also be realized in other ways (for example, the configurations described later with reference to Fig. 5A to Fig. 7).
Fig. 4A shows the three-dimensional imaging unit 112 constructed with a lenticular lens, and Fig. 4B shows the three-dimensional imaging unit 112 constructed with a microlens array.
Whether the three-dimensional imaging unit 112 is constructed with a microlens array or with a lenticular lens, the light emitted by the multiple elemental images displayed behind it is refracted so as to be imaged at different distances in front of the three-dimensional imaging unit 112, thereby forming a three-dimensional virtual image displayed in three-dimensional space as shown in Fig. 3. It should be understood that the elemental images displayed on the image display unit 111 should be formed and arranged in consideration of the characteristics of the three-dimensional imaging unit 112 (for example, its refractive index), so that the light emitted by those elemental images combines, after passing through the three-dimensional imaging unit 112, to form the three-dimensional virtual object in three-dimensional space.
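To make the "formed and arranged in consideration of the lens characteristics" step concrete, here is a hedged sketch of the standard integral-imaging pickup geometry, approximating each lenslet as a pinhole (this is a textbook simplification, not the patent's own arrangement rule; the function name and millimetre values are invented): a 3D point projects through each lenslet center onto the display plane, giving its pixel position inside that lenslet's elemental image.

```python
def elemental_image_positions(point_x, point_z, lens_xs, gap):
    """Project a 3D point (lateral position point_x, depth point_z in front
    of the lens array) through each lenslet center onto the display plane,
    which sits `gap` behind the array (pinhole-lens approximation)."""
    positions = []
    for lx in lens_xs:
        # similar triangles: displacement on display = gap * (lx - point_x) / point_z
        positions.append(lx + gap * (lx - point_x) / point_z)
    return positions

# a point centered at x=0, 50 mm in front of three lenslets at -10, 0, +10 mm,
# with a 5 mm gap between lens array and display plane
pix = elemental_image_positions(0.0, 50.0, [-10.0, 0.0, 10.0], 5.0)
```

The outward spread of the projected positions (±11 mm for lenslets at ±10 mm) is exactly the per-lenslet parallax that, replayed through the same array, reconstructs the point at its original depth.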
Conversely to the imaging process above, when the capture unit 113 (for example, the CCD sensors in Fig. 4A and Fig. 4B) captures the depth information of the gesture through the three-dimensional imaging unit 112, the light emitted from the interactive device is refracted by the three-dimensional imaging unit 112 and imaged as multiple elemental images. The capture unit 113 can therefore easily capture the depth information of the user's gesture (that is, the multiple elemental images related to the gesture).
Accordingly, it should be understood that, besides constructing the three-dimensional imaging unit 112 with a microlens array or a lenticular lens, any device that can achieve the effects of the three-dimensional imaging unit 112 described above is applicable to the present invention. For example, as shown in Fig. 4C, the three-dimensional imaging unit 112 may also be realized with an LCD panel. In that case, to obtain the effect of the lenticular lens of Fig. 4A or the microlens array of Fig. 4B, the LCD panel of Fig. 4C may, under the control of the control unit 120, display a parallax-barrier pattern as shown in Fig. 4D, so that it plays the role of the lenticular lens of Fig. 4A. Alternatively, the LCD panel may, under the control of the control unit 120, display a pinhole array as shown in Fig. 4E, playing the role of the microlens array of Fig. 4B. The LCD panel may also display a combination of barrier and pinhole array as shown in Fig. 4F, so that it can act as a lenticular lens or a microlens array on demand.
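A switchable LCD barrier of this kind is driven simply by displaying a transmissive/opaque mask on the panel. As a rough illustration (the pitch and slit values and the helper names are invented, not taken from the patent), a vertical-slit parallax barrier and a pinhole array could be generated as boolean masks where True means a transparent pixel:

```python
def barrier_mask(width, height, pitch, slit):
    """Vertical-slit parallax barrier: a column is transparent (True)
    when its index modulo `pitch` falls inside the slit width."""
    return [[(x % pitch) < slit for x in range(width)]
            for _ in range(height)]

def pinhole_mask(width, height, pitch):
    """Pinhole array: transparent only where both coordinates land
    on the pitch grid."""
    return [[(x % pitch == 0) and (y % pitch == 0) for x in range(width)]
            for y in range(height)]

bar = barrier_mask(8, 2, pitch=4, slit=1)
pin = pinhole_mask(8, 8, pitch=4)
```

Switching between the two masks, or tiling them in different panel regions, corresponds to the on-demand barrier/pinhole combination of Fig. 4F.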
Fig. 3 illustrated an example in which the image display unit 111 and capture unit 113 of the display and capture unit 110 are integrated into a hybrid LCD-CCD panel; however, the invention is not limited to this and can be realized in various other ways. Fig. 5A is a structural block diagram of the device 100 for three-dimensional display and interaction according to another exemplary embodiment of the present invention, and Fig. 5B is an exemplary front view of the display and capture unit 110 in Fig. 5A. The three-dimensional imaging unit 112, the lighting unit 114, the user's interactive device 115, and the control unit 120 in Fig. 5A and Fig. 5B may have the same structure as in Fig. 3, and the three-dimensional imaging unit 112 in Fig. 5A and Fig. 5B may also take any of the forms shown in Fig. 4A to Fig. 4F; for brevity, they are not explained in detail again here.
As shown in Fig. 5A, the difference from the structure of Fig. 3 is that the image display unit 111 of Fig. 5A may be a simple LCD panel that displays the three-dimensional display content provided by the control unit 120, while the capture unit 113 consists of CCD sensors separate from the image display unit 111.
In addition, as shown in Fig. 5A and Fig. 5B, since the CCD sensors are arranged discretely near the four corners of the LCD panel, an imaging lens 510 may be arranged at a suitable position between the CCD sensors and the three-dimensional imaging unit 112 to make it easier for the CCD sensors to obtain the depth information of the user's gesture (that is, the multiple elemental images related to the gesture): the elemental images formed behind the three-dimensional imaging unit 112 are converged by the imaging lens 510 onto the CCD sensors, so that the sensors obtain the multiple elemental images related to the gesture.

It should also be understood that the sensors in Fig. 5A and Fig. 5B are not limited to being arranged at the four corners of the LCD panel; multiple CCD sensors may, for example, be arranged around the periphery of the LCD panel as required to obtain the multiple elemental images related to the user's gesture.
Fig. 6 is a structural schematic diagram of the device 100 for three-dimensional display and interaction according to another exemplary embodiment of the present invention. The structure of the control unit 120, the three-dimensional imaging unit 112, the image display unit 111, and the user's interactive device 115 in Fig. 6 may be the same as in Fig. 5A; for brevity, they are not explained in detail again here.

In Fig. 6, in contrast to Fig. 3, Fig. 5A, and Fig. 5B, where the backlight unit serving as the lighting unit 114 is arranged parallel to the back side of the image display unit 111 to provide it with illumination light, the lighting unit 114 may consist of a light reflection unit 620 and a backlight unit 610 arranged vertically at the bottom of the back side of the image display unit 111. As shown in Fig. 6, the light reflection unit 620 may be arranged at a predetermined angle at the back side of the image display unit 111 and reflect the light emitted by the vertically arranged backlight unit 610 toward the image display unit 111, thereby providing it with illumination light.
Fig. 6 also shows an example in which a depth-sensing camera replaces the CCD sensors as the capture unit 113; the depth-sensing camera 113 in Fig. 6 is arranged behind the light reflection unit 620. In this case, to let the camera receive the light emitted from the user's interactive device 115, the light reflection unit 620 may be a transflective device (for example, a beam splitter), so that part of the light emitted from the interactive device 115 can reach the depth-sensing camera 113, while part of the light emitted from the backlight unit 610 is reflected to the image display unit 111 (for example, an LCD panel) for illumination.

It should be understood that the invention is not limited to this. For example, apart from the structure of the lighting unit 114, Fig. 6 may also adopt the capture unit 113 and image display unit 111 configurations shown in Fig. 3 or Fig. 5A (that is, a hybrid LCD-CCD panel forming both the capture unit 113 and the image display unit 111, or a separate LCD panel and multiple CCD sensors forming the image display unit 111 and capture unit 113 respectively), without using a depth-sensing camera. The backlight unit 610 in Fig. 6 may also be arranged in other ways, for example vertically at the top of the back side of the image display unit 111; in that case, the light reflection unit 620 may be arranged at the back side of the image display unit 111 at a predetermined angle to the backlight unit 610 so as to reflect the light emitted by the vertically arranged backlight unit 610 toward the image display unit 111.
Fig. 7 is a structural block diagram of the device 100 for three-dimensional display and interaction according to another exemplary embodiment of the present invention. The structure of the control unit 120, the three-dimensional imaging unit 112, the image display unit 111, and the user's interactive device 115 in Fig. 7 may be the same as in Fig. 6; for brevity, they are not explained in detail again here.

Similar to Fig. 6, Fig. 7 also adopts a depth-sensing camera as the capture unit 113, and the lighting unit 114 likewise consists of a light reflection unit 720 and a backlight unit 710 arranged vertically at the bottom of the back side of the image display unit 111, where the light reflection unit 720 is arranged at a predetermined angle at the back side of the image display unit 111 and reflects the light emitted by the vertically arranged backlight unit 710 toward the image display unit 111, thereby providing it with illumination light.
However, unlike Fig. 6, the light reflecting element 720 in Fig. 7 is a totally reflecting device (for example, a mirror). Therefore, the depth-sensing camera 113 in Fig. 7 can be arranged vertically, together with the backlight unit 710, at the bottom of the back of the image display unit 111. In this case, the totally reflecting device reflects the light that is emitted from the interactive device 115 making the interactive gesture and passes through the three-dimensional imaging unit 112 to the depth-sensing camera 113, so that the depth-sensing camera 113 can obtain the depth information of the interactive gesture.
It should be understood that the invention is not limited to this. For example, the arrangement of the capture unit 113 and image display unit 111 shown in Fig. 3 or Figs. 5A and 5B may also be adopted in Fig. 7 (that is, the capture unit 113 and image display unit 111 may be formed by a hybrid LCD-CCD panel, or the image display unit 111 and capture unit 113 may be formed respectively by a separate LCD panel and multiple CCD sensors), without using a depth-sensing camera. In addition, the backlight unit 710 may also be arranged in other ways; for example, the backlight unit 710 may be arranged vertically at the top of the back of the image display unit 111 as described above.
In addition, when a depth-sensing camera arranged behind the image display unit 111 is used as the capture unit 113 to obtain the depth information of the interactive gesture, because the depth-sensing camera is located behind the image display unit 111, the depth information it obtains may also contain the three-dimensional display content (that is, the multiple element images of the three-dimensional display content). Since the multiple element images of the three-dimensional display content captured by the depth-sensing camera lie in one plane (the camera captures them from the planar image display unit 111), their depth information is identical; they all lie on the same two-dimensional plane. Moreover, because the distance between the depth-sensing camera and the image display unit 111 is predetermined and known, the depth of the element images captured by the depth-sensing camera is easy to determine. In this case, the control unit 120 can remove the element images of known depth from the obtained depth information of the interactive gesture, thereby obtaining a depth map corresponding only to the interactive gesture. It should be understood that methods of removing element images of known depth are well known to those skilled in the art, so for brevity they will not be described in detail here.
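The removal step described above can be sketched as follows (a minimal sketch with assumed names and an assumed tolerance, not the patent's own algorithm): since the camera-to-panel distance is fixed and known, every pixel whose measured depth matches the panel plane is discarded as belonging to the displayed element images, leaving only the gesture.

```python
import numpy as np

def isolate_gesture_depth(raw_depth, panel_distance_mm, tolerance_mm=5.0):
    """Remove pixels at the known display-panel depth, keeping only the gesture.

    raw_depth: 2-D array of per-pixel depths (mm) from the depth camera.
    panel_distance_mm: known, fixed camera-to-panel distance.
    Pixels whose depth matches the panel plane (i.e. element images of the
    displayed three-dimensional content) are set to 0 (invalid).
    """
    gesture = raw_depth.copy()
    on_panel = np.abs(gesture - panel_distance_mm) <= tolerance_mm
    gesture[on_panel] = 0.0
    return gesture

# Example: a 4x4 depth frame where the panel is 300 mm from the camera and
# a hand hovers at about 180 mm in the upper-left corner.
frame = np.full((4, 4), 300.0)
frame[0:2, 0:2] = 180.0
cleaned = isolate_gesture_depth(frame, panel_distance_mm=300.0)
```

Only the hand pixels survive in `cleaned`; everything at the panel plane is zeroed out.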
Fig. 8 is a flowchart of a method for three-dimensional display and interaction according to an exemplary embodiment of the present invention.
As shown in Fig. 8, in step 810, the three-dimensional imaging unit 112 in the display and capture unit 110 converts the three-dimensional display content, provided by the control unit 120 and shown by the image display unit 111, into a three-dimensional virtual object and displays the three-dimensional virtual object in three-dimensional space.
Then, in step 830, the capture unit 113 in the display and capture unit 110 obtains the depth information of the interactive gesture that the user makes in three-dimensional space using the interactive device 115. The way the capture unit 113 obtains this depth information has been described in detail with reference to Fig. 3 and Figs. 4A to 4F, so for brevity it will not be described further here.
In step 850, the control unit 120 obtains the depth information of the interactive gesture from the capture unit 113, and generates a depth map corresponding to the interactive gesture from the obtained depth information.
In step 870, the control unit 120 analyzes the collision between the three-dimensional virtual object and the interactive device based on the generated depth map. How to analyze such a collision is known to those skilled in the art, so for brevity it will not be described further here.
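One plausible way to perform the collision analysis of step 870 (a sketch under assumed names, not the patent's specified algorithm) is to compare the gesture depth map against a depth buffer rendered for the three-dimensional virtual object: wherever the interactive device's measured depth reaches or passes the object's front surface, a collision is reported.

```python
import numpy as np

def detect_collision(gesture_depth, object_depth, min_overlap_px=10):
    """Report a collision where the gesture penetrates the virtual object.

    gesture_depth: per-pixel gesture depths (mm); 0 means no gesture there.
    object_depth:  per-pixel depth of the virtual object's front surface
                   (mm); np.inf where the object is absent.
    A pixel collides when the gesture is at or behind the object surface.
    Requiring a minimum number of colliding pixels suppresses sensor noise.
    """
    valid = gesture_depth > 0
    penetrating = valid & (gesture_depth >= object_depth)
    hit_pixels = int(penetrating.sum())
    return hit_pixels >= min_overlap_px, penetrating

# Example: the device tip is measured at 250 mm over a 4x4 patch, and the
# virtual object's surface sits at 240 mm in the left half of the view.
gesture = np.zeros((8, 8))
gesture[2:6, 2:6] = 250.0
obj = np.full((8, 8), np.inf)
obj[:, 0:4] = 240.0
collided, mask = detect_collision(gesture, obj, min_overlap_px=4)
```

The overlap of the gesture patch with the object's footprint (and its greater depth) yields a collision in this example.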
In step 890, the control unit 120 updates, according to the result of the collision analysis, the three-dimensional display content to be displayed on the image display unit 111, and provides the updated three-dimensional display content to the image display unit 111 for display, thereby updating the three-dimensional virtual object displayed in three-dimensional space.
In this way, when the user acts on the three-dimensional virtual object, the three-dimensional display and interaction equipment of the present invention can capture the gesture of the interactive action and update the displayed three-dimensional virtual object according to preset rules, thereby achieving an interactive effect between the user and the three-dimensional virtual object.
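The full display-and-interaction cycle of Fig. 8 (steps 810 through 890) can be summarized as a loop. All names below are illustrative placeholders standing in for the hardware and application logic, not the patent's API; the collision check is reduced to a one-dimensional toy comparison.

```python
class FakeDisplay:
    """Stand-in for the display and capture unit's display side."""
    def __init__(self):
        self.shown = []
    def show_3d(self, content):
        self.shown.append(content)

class FakeCamera:
    """Stand-in for the capture unit; returns a fixed gesture depth."""
    def __init__(self, gesture_depth_mm):
        self.gesture_depth_mm = gesture_depth_mm
    def capture_depth(self):
        return self.gesture_depth_mm

def interaction_step(display, camera, content, object_depth_mm):
    """One pass of the Fig. 8 pipeline."""
    display.show_3d(content)                     # step 810: display object
    gesture_depth = camera.capture_depth()       # steps 830/850: depth map
    collided = gesture_depth >= object_depth_mm  # step 870: collision check
    if collided:                                 # step 890: update content
        content = content + " (pushed)"
        display.show_3d(content)
    return content

display = FakeDisplay()
camera = FakeCamera(gesture_depth_mm=260.0)
result = interaction_step(display, camera, "cube", object_depth_mm=240.0)
```

Here the gesture reaches behind the object's surface, so the content is updated and redisplayed within the same cycle.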
A number of examples have been described above. Nevertheless, it should be understood that various modifications may be made. For example, suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Accordingly, other implementations are within the scope of the claims.
Claims (18)
1. An equipment for three-dimensional display and interaction, the equipment comprising:
a display and capture unit, configured to convert three-dimensional display content provided by a control unit into a three-dimensional virtual object, display the three-dimensional virtual object in three-dimensional space, and obtain depth information of an interactive gesture that a user makes in the three-dimensional space using an interactive device;
the control unit, configured to obtain the depth information of the interactive gesture from the display and capture unit and generate a depth map corresponding to the interactive gesture, analyze a collision between the three-dimensional virtual object and the interactive device based on the generated depth map, update the three-dimensional display content according to the result of the collision analysis, and provide the updated three-dimensional display content to the display and capture unit, so as to update the display of the three-dimensional virtual object in the three-dimensional space.
2. The equipment as claimed in claim 1, wherein the display and capture unit comprises:
an image display unit, configured to display the three-dimensional display content provided by the control unit;
a three-dimensional imaging unit, arranged in front of the image display unit and configured to convert the three-dimensional display content displayed by the image display unit into the three-dimensional virtual object displayed in three-dimensional space;
a capture unit, arranged behind the three-dimensional imaging unit and configured to obtain, through the three-dimensional imaging unit, the depth information of the interactive gesture that the user makes in three-dimensional space using the interactive device.
3. The equipment as claimed in claim 2, further comprising:
a lighting unit, arranged behind the image display unit and configured to provide illumination light for the image display unit.
4. The equipment as claimed in claim 2, wherein the three-dimensional imaging unit is a lenticular lens or a microlens array, or a liquid crystal display (LCD) panel that, under the control of the control unit, displays a parallax barrier, a pinhole array, or a combination of a parallax barrier and a pinhole array.
5. The equipment as claimed in claim 3, wherein the lighting unit comprises:
a backlight unit, arranged behind the image display unit parallel to the image display unit, so as to provide illumination light directly to the image display unit.
6. The equipment as claimed in claim 3, wherein the lighting unit comprises:
a backlight unit, arranged perpendicular to the image display unit at the top or bottom of the back of the image display unit;
a light reflecting element, arranged behind the image display unit and configured to reflect the light emitted by the vertically arranged backlight unit to the image display unit, so as to provide illumination light for the image display unit.
7. The equipment as claimed in claim 6, wherein the light reflecting element is a transflective device.
8. The equipment as claimed in claim 7, wherein the capture unit is arranged behind the transflective device, and the light that is emitted from the interactive device making the interactive gesture and passes through the three-dimensional imaging unit reaches the capture unit through the transflective device, so that the capture unit obtains the depth information of the interactive gesture.
9. The equipment as claimed in claim 7, wherein the transflective device is a beam splitter.
10. The equipment as claimed in claim 6, wherein the light reflecting element is a totally reflecting device.
11. The equipment as claimed in claim 10, wherein the capture unit, together with the backlight unit, is arranged perpendicular to the image display unit at the back of the image display unit,
wherein the totally reflecting device reflects the light that is emitted from the interactive device making the interactive gesture and passes through the three-dimensional imaging unit to the capture unit, so that the capture unit obtains the depth information of the interactive gesture.
12. The equipment as claimed in claim 2, wherein the image display unit is an LCD panel.
13. The equipment as claimed in claim 2, wherein the capture unit is a CCD sensor or a depth-sensing camera separate from the image display unit.
14. The equipment as claimed in claim 2, wherein the image display unit and the capture unit are integrated into a hybrid liquid crystal display-charge coupled device (LCD-CCD) panel in which an LCD panel and multiple CCD sensors are integrated,
wherein the LCD panel in the hybrid LCD-CCD panel displays the three-dimensional display content provided by the control unit,
and the CCD sensors in the hybrid LCD-CCD panel obtain the depth information of the interactive gesture through the three-dimensional imaging unit.
15. The equipment as claimed in claim 1, wherein the three-dimensional display content provided by the control unit is multiple element images corresponding to the object to be displayed in three-dimensional space,
and the depth information obtained by the display and capture unit is multiple element images related to the interactive gesture.
16. The equipment as claimed in claim 13, wherein, when the capture unit is a depth-sensing camera arranged behind the image display unit, the depth information of the interactive gesture obtained by the capture unit also contains the three-dimensional display content,
wherein the control unit obtains the depth map corresponding to the interactive gesture by removing the three-dimensional display content from the depth information.
17. A method for three-dimensional display and interaction, the method comprising:
(a) converting three-dimensional display content into a three-dimensional virtual object and displaying the three-dimensional virtual object in three-dimensional space;
(b) obtaining depth information of an interactive gesture that a user makes in the three-dimensional space using an interactive device;
(c) generating a depth map corresponding to the interactive gesture from the obtained depth information of the interactive gesture;
(d) analyzing a collision between the three-dimensional virtual object and the interactive device based on the generated depth map;
(e) updating the three-dimensional display content used in step (a) according to the result of the collision analysis, so as to update the three-dimensional virtual object displayed in the three-dimensional space.
18. The method as claimed in claim 17, wherein the three-dimensional display content is multiple element images corresponding to the object to be displayed in three-dimensional space,
and the depth information of the interactive gesture is multiple element images related to the interactive gesture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210579928.0A CN103905808A (en) | 2012-12-27 | 2012-12-27 | Device and method used for three-dimension display and interaction. |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103905808A true CN103905808A (en) | 2014-07-02 |
Family
ID=50996931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210579928.0A Pending CN103905808A (en) | 2012-12-27 | 2012-12-27 | Device and method used for three-dimension display and interaction. |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103905808A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102508563A (en) * | 2011-11-03 | 2012-06-20 | 深圳超多维光电子有限公司 | Stereo interactive method and operated device |
CN102789313A (en) * | 2012-03-19 | 2012-11-21 | 乾行讯科(北京)科技有限公司 | User interaction system and method |
CN102799264A (en) * | 2012-04-18 | 2012-11-28 | 友达光电股份有限公司 | Three-dimensional space interaction system |
CN102681183A (en) * | 2012-05-25 | 2012-09-19 | 合肥鼎臣光电科技有限责任公司 | Two-way three-dimensional imaging and naked-eye three-dimensional display system based on lens array |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299548A (en) * | 2014-10-29 | 2015-01-21 | 中国科学院自动化研究所 | Correcting system for naked eye multi-vision true three-dimensional display system and realizing method |
US11188143B2 (en) | 2016-01-04 | 2021-11-30 | Microsoft Technology Licensing, Llc | Three-dimensional object tracking to augment display area |
CN108431729A (en) * | 2016-01-04 | 2018-08-21 | 微软技术许可有限责任公司 | To increase the three dimensional object tracking of display area |
WO2018113367A1 (en) * | 2016-12-23 | 2018-06-28 | 张家港康得新光电材料有限公司 | Integral imaging apparatus |
CN109242953A (en) * | 2018-08-15 | 2019-01-18 | 华中科技大学 | A method of realizing the three-dimensional data volume drawing of aerial display and virtual interacting |
WO2020034253A1 (en) * | 2018-08-17 | 2020-02-20 | 上海先研光电科技有限公司 | Laser-induced interactive volumetric 3d display device and control method therefor |
WO2020037693A1 (en) * | 2018-08-19 | 2020-02-27 | 上海先研光电科技有限公司 | Interactive stereoscopic display apparatus based on photophoresis capture and method for controlling same |
WO2020146981A1 (en) * | 2019-01-14 | 2020-07-23 | 京东方科技集团股份有限公司 | Display apparatus, electronic device, and driving method for display apparatus |
CN111771374A (en) * | 2019-01-14 | 2020-10-13 | 京东方科技集团股份有限公司 | Display device, electronic apparatus, and method of driving display device |
CN111771374B (en) * | 2019-01-14 | 2022-05-13 | 京东方科技集团股份有限公司 | Display device, electronic apparatus, and method of driving display device |
US11397475B2 (en) * | 2019-01-14 | 2022-07-26 | Boe Technology Group Co., Ltd. | Display device, electronic device and method for driving display device |
EP3913912A4 (en) * | 2019-01-14 | 2022-08-24 | BOE Technology Group Co., Ltd. | Display apparatus, electronic device, and driving method for display apparatus |
CN112306305A (en) * | 2020-10-28 | 2021-02-02 | 黄奎云 | Three-dimensional touch device |
CN112306305B (en) * | 2020-10-28 | 2021-08-31 | 黄奎云 | Three-dimensional touch device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103905808A (en) | Device and method used for three-dimension display and interaction. | |
CN108351691B (en) | Remote rendering for virtual images | |
US20200166967A1 (en) | Flexible Display for a Mobile Computing Device | |
US20180314322A1 (en) | System and method for immersive cave application | |
CN107209565B (en) | Method and system for displaying fixed-size augmented reality objects | |
US10999412B2 (en) | Sharing mediated reality content | |
US9460555B2 (en) | System and method for three-dimensional visualization of geographical data | |
US10884576B2 (en) | Mediated reality | |
US8976170B2 (en) | Apparatus and method for displaying stereoscopic image | |
CN102591124A (en) | Transverse wide-visual field tridimensional display method and system based on spliced light field | |
CN103744518A (en) | Stereoscopic interaction method, stereoscopic interaction display device and stereoscopic interaction system | |
JP2012174238A5 (en) | ||
CN110337674A (en) | Three-dimensional rebuilding method, device, equipment and storage medium | |
US9972119B2 (en) | Virtual object hand-off and manipulation | |
KR100932977B1 (en) | Stereoscopic video display | |
Gotsch et al. | Holoflex: A flexible light-field smartphone with a microlens array and a p-oled touchscreen | |
US10296098B2 (en) | Input/output device, input/output program, and input/output method | |
JP5252703B2 (en) | 3D image display device, 3D image display method, and 3D image display program | |
CN102521876A (en) | Method and system for realizing three dimensional (3D) stereoscopic effect of user interface | |
CN105007480A (en) | Naked-eye three-dimensional (3D) display method and system for 3D data | |
CN114866757A (en) | Stereoscopic display system and method | |
CN102012620B (en) | Electronic cosmetic box | |
KR100893381B1 (en) | Methods generating real-time stereo images | |
JP5045917B2 (en) | 3D display | |
JP7527081B1 (en) | Image Display System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20140702 |