WO2024026638A1 - Haptic interaction with 3d object on naked eye 3d display - Google Patents
- Publication number
- WO2024026638A1 (PCT/CN2022/109510)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- location
- virtual object
- display surface
- gesture
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09F—DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
- G09F19/00—Advertising or display means not otherwise provided for
- G09F19/12—Advertising or display means not otherwise provided for using special optical effects
- G09F19/125—Stereoscopic displays; 3D displays
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09F—DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
- G09F27/00—Combined visual and audible advertising or displaying, e.g. for public address
- G09F27/005—Signs associated with a sensor
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09F—DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
- G09F27/00—Combined visual and audible advertising or displaying, e.g. for public address
- G09F2027/001—Comprising a presence or proximity detector
Abstract
A system and method for providing haptic interaction with 3D objects displayed on a stereoscopic display (1). The system uses actuators (4) integrated into the stereoscopic display (1) for generating tactile feedback (5) on its display surface (2) corresponding to the shape and texture of the virtual object (3), in response to a first gesture (7), detected by a first sensor (6), in which a finger approaches the display surface (2). The system predicts where the finger is expected to come into contact with the display surface (2) and moves the virtual object (3) so that its surface aligns with the predicted location. The system then continuously adjusts the rendering of the virtual object (3) while the finger slides on the display surface (2) to keep the surface of the virtual object (3) aligned with the contact point.
Description
The disclosure relates generally to user interaction with virtual objects, more particularly to methods and systems for haptic interaction with visually presented data on stereoscopic displays.
For many years, human-computer interactions have traditionally been carried out using a standard keyboard and/or mouse, with a screen providing the user with visual feedback on the keyboard and/or mouse input. With the constantly improving technology of computer-based devices, these keyboards are becoming more and more ineffective and, in some cases, even impossible means of interacting with computer-implemented virtual environments.
In traditional gaming and entertainment systems, haptic interaction provides a more dynamic entertainment experience and a better sense of immersion than purely visual feedback, because of the interactive nature of tactile rendering. For example, using tactile perception, blind people can feel otherwise unseen content, which supports better integration into society. In another example, haptic texture rendering can convey the feel of materials during online shopping.
Virtual shape and texture haptics on a screen is normally achieved by one of two major technologies: ‘electrovibration’ using electrostatic force, or the ‘squeeze film effect’. Adding haptic texture lets the user ‘see’ and ‘touch’ the virtual objects and greatly enhances the immersion. Such haptic texture interaction is, however, limited to two-dimensional (2D) content on the screen.
With the development of three-dimensional (3D) displays, where a virtual environment such as a user interface is stereoscopically displayed as 3D objects in a virtual stereoscopic space, there is an increasing challenge in providing means for interacting with these 3D objects in the virtual stereoscopic space. This is because the technology is based on the principles of stereopsis, wherein a different image is provided to the viewer's left and right eyes. These interactions are therefore usually limited by the need to wear additional binocular stereo display equipment, such as head-mounted displays (HMD) or 3D glasses, to experience the virtual stereoscopic space.
In binocular stereo display technologies the “focal distance” is not necessarily equal to the “convergence distance”. This type of visual conflict (i.e., the accommodation–convergence conflict) may cause visual confusion and visual fatigue of the human visual system.
To solve this issue, autostereoscopy offers a method of displaying stereoscopic images without requiring the viewer to wear special headgear, glasses, or any other vision-affecting device. Because headgear is not required, it is also called “naked eye 3D” or “glasses-free 3D”.
These naked eye 3D display systems are able to produce different images at multiple (different) angular positions, thus evoking both stereo parallax and motion parallax depth cues for their viewers, without the need for special eyewear. The goal of naked eye 3D display systems is to faithfully reproduce the light field generated by real-world physical objects by sampling the continuously distributed light field function and using a finite number of “views” (multiview) to approximate the continuous light field function. Therefore, it is also referred to as the ‘multiview 3D display technique’.
With rapid advances in optical fabrication, digital processing power, and computational models for human perception, a new generation of display technology is emerging: compressive light field displays. These architectures explore the co-design of optical elements and compressive computation while taking particular characteristics of the human visual system into account. Compressive display designs include dual- and multilayer devices that are driven by algorithms such as computed tomography, non-negative matrix factorization, and non-negative tensor factorization.
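As a purely illustrative aside (not part of the disclosed system), the following minimal Python sketch shows the general idea of the non-negative matrix factorization mentioned above: a small non-negative light-field matrix is factorized into two layer patterns using standard multiplicative updates. The matrix shapes, rank, and iteration count are arbitrary assumptions for this example.

```python
import numpy as np

def factorize_light_field(L, rank=8, iters=200, eps=1e-9):
    """Approximate a non-negative light-field matrix L (views x pixels)
    as the product of two non-negative layer patterns A (views x rank)
    and B (rank x pixels), using multiplicative NMF updates."""
    rng = np.random.default_rng(0)
    A = rng.random((L.shape[0], rank))
    B = rng.random((rank, L.shape[1]))
    for _ in range(iters):
        # Standard Lee-Seung multiplicative updates (Frobenius norm).
        B *= (A.T @ L) / (A.T @ A @ B + eps)
        A *= (L @ B.T) / (A @ B @ B.T + eps)
    return A, B

# Example: a random 9-view light field with 1024 pixels per view.
L = np.random.default_rng(1).random((9, 1024))
A, B = factorize_light_field(L)
print("relative reconstruction error:", np.linalg.norm(L - A @ B) / np.linalg.norm(L))
```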
With all these technological advancements, it becomes a natural demand that users would like to ‘see’ and ‘touch’ the displayed 3D content. One solution for interacting with 3D objects presented through 3D displays is to provide mid-air haptics by using ultrasonic array beamforming. This solution can provide some haptic feeling in the air, but it is not sufficient to render good virtual haptic texture. Consequently, there is still an existing need for providing immersive haptic interaction with 3D objects on naked eye 3D displays.
SUMMARY
It is an object to provide an improved method and system for haptic interaction with 3D objects on stereoscopic displays which overcomes or at least reduces the problems mentioned above.
The foregoing and other objects are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
According to a first aspect, there is provided a system for haptic interaction with 3D objects presented on a stereoscopic display. The system uses actuators arranged in the stereoscopic display for generating tactile feedback on its display surface after a first gesture of moving a finger of a user is detected by a first sensor, wherein the actuators are activated by a controller in response to the detected first gesture.
The system essentially combines a 3D stereoscopic display with hand tracking, gesture recognition, and tactile feedback generation, all connected by a computer control system. By combining these systems, an immersive haptic experience can be achieved on naked eye 3D displays by allowing users to feel the shape and texture of the displayed 3D content. Actuators that create the tactile feedback are integrated into the 3D display, so that the virtual shape and texture can be detected on the surface of the screen. The technology can be used for online shopping, entertainment, gaming, and for any other purpose where a 3D display is used to present a 3D virtual environment with interactive 3D objects.
In a possible implementation form of the first aspect the actuators are configured to generate tactile feedback corresponding to the shape and texture of the virtual object at a contact point determined based on a gesture signal, ensuring an accurate feel of the shape and texture of the virtual object and an immersive, non-disruptive 3D experience for the user.
In a further possible implementation form of the first aspect the first gesture corresponds to a movement of a finger, and the controller is configured to predict, based on the first gesture signal, a first location on the display surface where the finger is expected to come into contact with the display surface, and to generate, using the actuators, tactile feedback on the display surface corresponding to the surface of the virtual object at the first location, thereby ensuring an accurate feel of the shape and texture of the virtual object and an immersive, non-disruptive 3D experience for the user.
In a further possible implementation form of the first aspect the stereoscopic display is configured for displaying the virtual object in a stereoscopic space virtually extending beyond the plane of the display surface in at least one direction. The controller is configured to, based on the first gesture signal, determine a second location in the stereoscopic space as a virtual contact point between the finger and the virtual object; and to control the stereoscopic display to render the virtual object in the stereoscopic space so that the second location is aligned with the first location in the plane of the display surface. This implementation ensures that the 3D object is aligned in space with the contact point where the finger touches the surface of the display and the user feels the shape and texture of the virtual object as expected, further ensuring a non-disruptive 3D experience.
In a further possible implementation form of the first aspect the system is further configured to detect a second gesture corresponding to a movement of a finger to a third location on the display surface. The controller is configured to generate, using the actuators, and based on the second gesture, tactile feedback on the display surface corresponding to the surface of the virtual object at the third location. This ensures that the 3D object is continuously aligned in space with the contact point where the finger touches the surface of the display and the user continuously feels the shape and texture of the virtual object as expected, further ensuring a non-disruptive 3D experience.
In a further possible implementation form of the first aspect the controller is configured to, based on the second gesture, determine a fourth location in the stereoscopic space as a virtual contact point between the finger and the virtual object, and to control the stereoscopic display to render the virtual object in the stereoscopic space so that the fourth location is aligned with the third location in the plane of the display surface. This ensures that when the finger slides on the screen, the 3D object moves in the space so that the finger can always touch the surface of the object and feels the shape and texture of it, further ensuring a non-disruptive 3D experience.
In a further possible implementation form of the first aspect the second gesture corresponds to a continuous movement of a finger from the first location to the third location on the display surface; and the controller is configured to generate, using the actuators, and based on the second gesture, continuous tactile feedback along the path from the first location to the third location on the display surface to ensure that the finger receives uninterrupted haptic feedback while being in contact with the virtual object through the display surface.
In a further possible implementation form of the first aspect the first sensor is configured to detect both the first gesture and the second gesture, thus enabling a simple and cost-effective implementation of the system using only one set of sensors for gesture recognition and hand tracking.
In another possible implementation form of the first aspect the system comprises a second sensor configured to detect the second gesture, the second sensor preferably arranged as a touch sensor in the stereoscopic display configured to detect touch or near proximity of a finger and the display surface. This system enables selective input detection for different types of gestures (in space and touching the display) for a more accurate 3D experience.
In a further possible implementation form of the first aspect the controller is configured to control the stereoscopic display to render visual feedback on the virtual object surface corresponding to a determined virtual contact point between the finger and the virtual object in the stereoscopic space, thus ensuring a better user experience where the users can easily follow the predicted movement of their fingers with respect to the virtual object.
In a further possible implementation form of the first aspect the stereoscopic display is an autostereoscopic display configured for visual display of a three-dimensional virtual object without the need of special headgear, glasses or any wearable device using lenticular lens, parallax barrier, volumetric display, holographic or light field technology, wherein the stereoscopic display is preferably a 3D light field stereoscopic display.
In a further possible implementation form of the first aspect the actuators are configured to generate the tactile feedback using electrovibration, squeeze film effect, or any similar haptic rendering solution suitable for generating tactile feedback locally on a display surface.
In a further possible implementation form of the first aspect the actuators are configured to generate tactile feedback locally on the display surface at respective locations, the tactile feedback corresponding to the surface of the virtual object at the respective locations, for ensuring a more immersive 3D haptic experience.
In a further possible implementation form of the first aspect the first sensor is a sensor or system of sensors configured to track and interpret hand gestures, more preferably a sensor or system of sensors configured to detect location and movement of at least one finger, such as a camera, depth sensor, time-of-flight camera, structured light scanner, or mm-wave radar.
According to a second aspect, there is provided a method for haptic interaction with visually presented data, the method comprising displaying a three-dimensional virtual object on a display surface of a stereoscopic display comprising actuators arranged therein for generating tactile feedback on the display surface; detecting a first gesture using a first sensor and transmitting a first gesture signal associated with the first gesture to a controller; and controlling the actuators by the controller to generate tactile feedback on the display surface corresponding to the shape and texture of the virtual object at a contact point determined based on a gesture signal.
With this method it becomes possible to achieve an immersive haptic experience on naked eye 3D displays, allowing users to feel the shape and texture of the displayed 3D content, by combining a 3D stereoscopic display with hand tracking, gesture recognition, and tactile feedback generation, all connected by a computer control system. Actuators that create the tactile feedback are integrated into the 3D display, so that the virtual shape and texture can be detected on the surface of the screen. The method can be used for online shopping, entertainment, gaming, and for any other purpose where a 3D display is used to present a 3D virtual environment with interactive 3D objects.
In a possible implementation form of the second aspect the stereoscopic display is configured for displaying the virtual object in a stereoscopic space virtually extending beyond the plane of the display surface in at least one direction, and the first gesture corresponds to a movement of a finger. The method further comprises predicting, based on the first gesture signal, a first location on the display surface where the finger is expected to come into contact with the display surface; determining, based on the first gesture signal, a second location in the stereoscopic space as a virtual contact point between the finger and the virtual object; and rendering the virtual object in the stereoscopic space on the stereoscopic display so that the second location is aligned with the first location in the plane of the display surface. This implementation ensures that the 3D object is aligned in space with the contact point where the finger touches the surface of the display and the user feels the shape and texture of the virtual object as expected, ensuring a non-disruptive 3D experience.
In a further possible implementation form of the second aspect the method further comprises detecting a second gesture corresponding to a movement of a finger to a third location on the display surface; and generating, using the actuators, and based on the second gesture, tactile feedback on the display surface corresponding to the surface of the virtual object at the third location. This ensures that the 3D object is continuously aligned in space with the contact point where the finger touches the surface of the display and the user continuously feels the shape and texture of the virtual object as expected, further ensuring a non-disruptive 3D experience.
In a further possible implementation form of the second aspect the method further comprises determining, based on the second gesture, a fourth location in the stereoscopic space as a virtual contact point between the finger and the virtual object; and rendering the virtual object in the stereoscopic space on the stereoscopic display so that the fourth location is aligned with the third location in the plane of the display surface. This ensures that when the finger slides on the screen, the 3D object moves in the space so that the finger can always touch the surface of the object and feels the shape and texture of it, further ensuring a non-disruptive 3D experience.
In a further possible implementation form of the second aspect the second gesture corresponds to a continuous movement of a finger from the first location to the third location on the display surface, and the method further comprises generating, using the actuators, and based on the second gesture, continuous tactile feedback along the path from the first location to the third location on the display surface to ensure that the finger receives uninterrupted haptic feedback while being in contact with the virtual object through the display surface.
In a further possible implementation form of the second aspect the method further comprises rendering visual feedback on the virtual object surface corresponding to a determined virtual contact point between the finger and the virtual object in the stereoscopic space, thus ensuring a better user experience where the users can easily follow the predicted movement of their fingers with respect to the virtual object.
These and other aspects will be apparent from the embodiment(s) described below.
In the following detailed portion of the present disclosure, the aspects, embodiments and implementations will be explained in more detail with reference to the example embodiments shown in the drawings, in which:
Fig. 1 shows a schematic front view of a system for haptic interaction in accordance with an example of the embodiments of the disclosure;
Fig. 2 illustrates a method of detecting a first gesture and determining a first location on the display surface where the finger is expected to come into contact with the display surface in accordance with an example of the embodiments of the disclosure;
Fig. 3 illustrates a method of generating tactile feedback on the display surface in accordance with an example of the embodiments of the disclosure;
Figs. 4A through 4D show schematic side views of a system for haptic interaction in accordance with an example of the embodiments of the disclosure, and illustrate a method of rendering a virtual object in the stereoscopic space so that a virtual second location is aligned with the first location in the plane of the display surface in accordance with another example of the embodiments of the disclosure;
Figs. 5A and 5B show schematic side views of a system for haptic interaction in accordance with an example of the embodiments of the disclosure, and illustrate a method of rendering a virtual object in the stereoscopic space so that a virtual fourth location is aligned with a third location in the plane of the display surface where a finger is detected to move on the display surface in accordance with another example of the embodiments of the disclosure; and
Fig. 6 shows a combined flow diagram of a method for haptic interaction in accordance with further examples of embodiments of the disclosure.
In the illustrated embodiments described below, structures and features that are the same or similar to corresponding structures and features previously described are denoted by the same reference numeral as previously used for simplicity.
Fig. 1 shows a schematic front view of a system for haptic interaction in accordance with an example of the embodiments of the disclosure.
The system comprises a stereoscopic display 1 configured for visual display of a three-dimensional virtual object 3 on a display surface 2.
The stereoscopic display 1 is an autostereoscopic display 1 configured for visual display of a three-dimensional virtual object 3 without the need of special headgear, glasses or any wearable device. The stereoscopic display 1 uses lenticular lens, parallax barrier, volumetric display, holographic or light field technology. In an embodiment the stereoscopic display 1 is a 3D light field stereoscopic display 1.
Actuators 4 are further arranged in the stereoscopic display 1 for generating tactile feedback 5 on the display surface 2. The actuators 4 are configured to generate tactile feedback 5 corresponding to the shape and texture of the virtual object 3 at a contact point determined based on a gesture signal 8, as will be explained below. The actuators 4 are configured to generate the tactile feedback 5 using electrovibration, squeeze film haptic rendering, or any similar haptic rendering solution suitable for generating tactile feedback 5 locally on a display surface 2 to render the haptic texture of the virtual object 3.
Electrovibration works by modifying electrostatic friction between the display surface 2 and a user’s finger. Passing an electrical charge into an insulated electrode creates an attractive force when a user’s skin comes into contact with it. By modulating this attractive force a variety of sensations can be generated.
Unlike electrostatic techniques, which increase friction, squeeze film haptic rendering reduces friction when active. The squeeze-film effect is an over-pressure phenomenon which generates an ultrathin air film between two surfaces. In order to create this air gap, an ultrasonic vibration of a few micrometers is applied to the display surface 2. This allows one surface to slide more easily across the other, and the friction is reduced accordingly.
In an embodiment the actuators 4 are configured to generate tactile feedback 5 locally on the display surface 2 at respective locations, the tactile feedback 5 corresponding to the surface of the virtual object 3 at the respective locations.
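To make the idea of location-dependent tactile feedback concrete, the following minimal Python sketch maps the local texture of the virtual object 3 at a touched screen location to a normalized friction command for the actuators 4. The texture height map, the class interface, and the gain value are hypothetical placeholders introduced for this example only; a real implementation would depend on the specific electrovibration or squeeze-film driver used.

```python
import numpy as np

class HapticSurface:
    """Hypothetical mapping from local texture slope to a friction command
    for screen-integrated actuators."""

    def __init__(self, height_map, gain=4.0):
        self.height_map = height_map                 # per-pixel texture height of the virtual object
        self.gain = gain                             # tuning factor (assumed)
        self.gy, self.gx = np.gradient(height_map)   # precomputed texture gradients

    def friction_command(self, x, y, vx, vy):
        # Friction felt while sliding is taken to be proportional to the
        # texture slope along the direction of finger motion.
        slope = self.gx[y, x] * np.sign(vx) + self.gy[y, x] * np.sign(vy)
        # Clamp to [0, 1]; 0 = minimum friction, 1 = maximum friction.
        return float(np.clip(0.5 + self.gain * slope, 0.0, 1.0))

# Usage: a bumpy sinusoidal texture, queried at one contact point.
texture = np.sin(np.linspace(0, 8 * np.pi, 256))[None, :] * np.ones((256, 1)) * 0.01
surface = HapticSurface(texture)
print(surface.friction_command(x=120, y=64, vx=1.0, vy=0.0))
```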
The system further includes a first sensor 6 configured to detect a first gesture 7 and transmit a first gesture signal 8 associated with the first gesture 7 to a controller 9. The first sensor 6 refers to a sensor or system of sensors configured to track and interpret hand gestures, in particular to detect location and movement of at least one finger. The first sensor 6 can be implemented by any suitable means for this purpose, using a camera, depth sensor, time-of-flight (ToF) camera, structured light scanner, or mm-wave radar.
The controller 9 is configured to receive the first gesture signal 8 from the first sensor 6 and to control the actuators 4 in response, in order to generate tactile feedback 5 on the display surface 2, as will be explained in detail below.
Fig. 2 illustrates detecting a first gesture 7 and determining a first location 10 on the display surface 2 where the finger is expected to come into contact with the display surface 2, in accordance with an example of the embodiments of the disclosure. As shown in Fig. 2, the first gesture 7 corresponds to a movement of a finger. Using hand gesture recognition technology which utilizes sensors to track and interpret hand gestures, locations and movements, the first sensor 6 can generate a first gesture signal 8 which contains information about predicted finger movement based on the detected first gesture 7.
Hand gesture recognition technology is traditionally used not only to detect the finger's location and speed of movement, but also to predict the next step of the finger movement. The skilled person can implement such technology as the first sensor 6 without any technical difficulties.
The controller 9 can then predict, based on the received first gesture signal 8, a first location 10 on the display surface 2 where the finger is expected to come into contact with the display surface 2.
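As an illustration of this prediction step, the following Python sketch extrapolates recent fingertip positions reported by the first sensor 6 to the plane of the display surface 2 (taken here as z = 0) to obtain a candidate first location 10. The constant-velocity assumption and the sensor data format are assumptions made for this example only, not the method prescribed by the disclosure.

```python
import numpy as np

def predict_touch_location(fingertip_samples, dt):
    """Extrapolate fingertip motion to the display plane z = 0.

    fingertip_samples: array of shape (n, 3), recent fingertip positions
    in display coordinates (x, y, z), with z > 0 in front of the screen.
    dt: time between consecutive samples in seconds.
    Returns (x, y) of the predicted contact point, or None if the finger
    is not moving towards the screen.
    """
    p = np.asarray(fingertip_samples, dtype=float)
    velocity = (p[-1] - p[-2]) / dt          # simple constant-velocity model
    if velocity[2] >= 0:                     # not approaching the display
        return None
    t_hit = -p[-1, 2] / velocity[2]          # time until z reaches 0
    contact = p[-1] + velocity * t_hit
    return float(contact[0]), float(contact[1])

# Usage: two samples 20 ms apart, finger moving towards the screen.
samples = [(0.10, 0.05, 0.06), (0.11, 0.05, 0.05)]
print(predict_touch_location(samples, dt=0.02))
```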
As shown in Fig. 3, the controller 9 can then trigger the actuators 4 to generate tactile feedback 5 on the display surface 2 corresponding to the surface of the virtual object 3 at the first location 10.
As further shown in Fig. 2, the controller 9 can also be configured to control the stereoscopic display 1 to render visual feedback 16 on the virtual object 3 surface corresponding to a determined virtual contact point between the finger and the virtual object 3 in the stereoscopic space.
Figs. 4A through 4C show schematic side views of the system for haptic interaction in accordance with an example of the embodiments of the disclosure.
As shown in Fig. 4A, the stereoscopic display 1 is configured for displaying the virtual object 3 in a stereoscopic space virtually extending beyond the plane of the display surface 2 in at least one direction, preferably in both directions perpendicular to the plane of the display surface 2. As described above, the controller can predict, based on the received first gesture signal 8 from the first sensor 6 that corresponds to a detected first gesture 7 of a finger movement of a user, a first location 10 on the display surface 2 where the finger is expected to come into contact with the display surface 2. As shown in Fig. 4A, this first location 10 however may often not be located on the surface of the virtual object 3 but within or outside the virtual object 3 in the stereoscopic space. To provide an accurate and consistent haptic experience for the user, it is necessary to adjust the visual rendering of the virtual object 3 in the stereoscopic space so that the expected contact point of the user’s finger aligns with a point on the surface of the virtual object 3.
As shown in Fig. 4B, the controller 9 first determines a second location 11 in the stereoscopic space based on the first gesture signal 8 as described above, which second location 11 is a virtual contact point between the finger and the virtual object 3.
As shown in Fig. 4C, the controller 9 then adjusts the rendering of the virtual object 3 in the stereoscopic space so that the second location 11 is aligned with the first location 10 in the plane of the display surface 2.
This ensures that when the finger of the user receives tactile feedback 5 on the display surface 2 at the first location 10, as shown in Fig. 4D, it corresponds to the shape and texture of the surface of the virtual object 3 at the same location.
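A minimal sketch of this alignment step follows. It assumes the virtual object 3 is represented by a mesh of vertices in display coordinates, that the virtual contact point (second location 11) has already been determined, and that the display plane is z = 0; it simply translates the object so that the contact point lands at the predicted touch location (first location 10) on the display surface 2. None of these representational choices is prescribed by the disclosure.

```python
import numpy as np

def align_object_to_touch(vertices, virtual_contact, predicted_touch_xy):
    """Translate the object so the virtual contact point lies on the
    display plane (z = 0) at the predicted touch location.

    vertices: (n, 3) array of object vertices in display coordinates.
    virtual_contact: (3,) second location 11 on the object surface.
    predicted_touch_xy: (x, y) first location 10 on the display surface.
    Returns the translated vertex array.
    """
    target = np.array([predicted_touch_xy[0], predicted_touch_xy[1], 0.0])
    offset = target - np.asarray(virtual_contact, dtype=float)
    return np.asarray(vertices, dtype=float) + offset  # rigid translation only

# Usage: move a small object whose contact point floats 2 cm behind the screen.
verts = np.array([[0.0, 0.0, -0.02], [0.05, 0.0, -0.02], [0.0, 0.05, 0.01]])
moved = align_object_to_touch(verts, virtual_contact=[0.02, 0.02, -0.02],
                              predicted_touch_xy=(0.16, 0.05))
print(moved)
```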
Figs. 5A and 5B illustrate further steps of the haptic interaction method in accordance with an example of the embodiments of the disclosure, wherein after the user’s finger comes into contact with the display surface 2 and receives tactile feedback 5 at the first location 10, as described above, the finger is moved in a sliding motion across the display surface 2 to another location, as illustrated in Fig. 5A. This is detected as a second gesture 12, corresponding to a sliding finger movement to a third location 13 on the display surface 2. This second gesture 12 can be detected either by the first sensor 6, or by a separate, dedicated second sensor 15 configured to detect gestures like this second gesture 12.
The second sensor 15 can be arranged as a touch sensor in the stereoscopic display 1 configured to detect touch or near proximity of a finger and the display surface 2.
The controller 9 generates, using the actuators 4, and based on the second gesture 12, continuous tactile feedback 5 along the path from the first location 10 to the third location 13 on the display surface 2 to ensure that the finger receives uninterrupted haptic feedback while being in contact with the virtual object 3 through the display surface 2.
As shown in Fig. 5A, however, this sliding movement of the finger can result in a similar issue as described above, namely that the third location 13 is not located on the surface of the virtual object 3 but outside the virtual object 3 in the stereoscopic space. To provide an accurate and consistent haptic experience for the user, it is again necessary to adjust the visual rendering of the virtual object 3 in the stereoscopic space so that the expected contact point of the user’s finger aligns at all times with a point on the surface of the virtual object 3.
As shown in Fig. 5B, the controller 9 then determines, based on the second gesture 12, a fourth location 14 in the stereoscopic space as a virtual contact point between the finger and the virtual object 3, and the rendering of the virtual object 3 on the stereoscopic display 1 is adjusted in the stereoscopic space so that the fourth location 14 is aligned with the third location 13 in the plane of the display surface 2.
This adjustment is continuous as long as the user’s finger is in contact with the display surface 2. This way, the virtual shape and texture of the 3D virtual object 3 is always detected during the finger’s sliding on the display surface 2 to ensure an uninterrupted, immersive haptic experience.
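The continuous realignment during sliding can be pictured as a small per-sample update loop, sketched below under the same assumptions as the previous examples: each touched point (third location 13) is paired with a point on the object surface (fourth location 14), the object is re-translated so the two stay aligned, and a friction command is emitted. The nearest-vertex lookup and the depth-change friction heuristic are crude stand-ins for a real surface query and texture renderer, introduced only to keep the sketch self-contained.

```python
import numpy as np

def align_object_to_touch(vertices, virtual_contact, touch_xy):
    """Rigidly translate the object so virtual_contact lands at touch_xy on z = 0."""
    target = np.array([touch_xy[0], touch_xy[1], 0.0])
    return np.asarray(vertices, float) + (target - np.asarray(virtual_contact, float))

def nearest_surface_point(vertices, x, y):
    """Toy stand-in for a real surface query: closest vertex to the touch (x, y)."""
    d = np.linalg.norm(vertices[:, :2] - np.array([x, y]), axis=1)
    return vertices[np.argmin(d)]

def sliding_feedback(touch_path, vertices):
    """For each touched point (third location 13), find the matching point on the
    object (fourth location 14), realign the object, and emit a friction command
    derived from the local depth change (a stand-in for real texture rendering)."""
    commands, prev_z = [], None
    for x, y in touch_path:
        contact = nearest_surface_point(vertices, x, y)              # fourth location 14
        vertices = align_object_to_touch(vertices, contact, (x, y))  # keep 14 under 13
        z = contact[2]
        commands.append(0.5 if prev_z is None
                        else float(np.clip(0.5 + 10 * (z - prev_z), 0, 1)))
        prev_z = z
    return vertices, commands

# Usage: a finger sliding to the right over a small three-vertex object.
verts = np.array([[0.00, 0.0, -0.02], [0.05, 0.0, 0.00], [0.10, 0.0, 0.02]])
path = [(0.00, 0.0), (0.05, 0.0), (0.10, 0.0)]
print(sliding_feedback(path, verts)[1])
```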
Fig. 6 shows a combined flow diagram of a method for haptic interaction in accordance with further examples of embodiments of the disclosure.
The application scenario can be described by the following main steps.
In a first step, the first gesture 7 representing an intention of touching the 3D virtual object 3 on the stereoscopic display 1 (naked eye 3D display) is detected via the first sensor 6, i.e., the gesture and hand-tracking system. The system then predicts a second location 11 where the user’s finger is pointing on the surface of the virtual object 3. This second location 11 is defined as the contact point between the finger and the virtual object 3.
In a second step, knowing that the finger is about to touch the second location 11 on the virtual object 3, the controller 9 sends commands to the 3D display rendering module to render the virtual object 3 so that it ‘moves’ to a first location 10 where the cross section containing the second location 11 lies exactly on the display surface 2. By doing so, the user can touch the surface of the virtual object 3 and the display surface 2 simultaneously. Meanwhile, the texture tactile feedback 5 on the screen surface is activated. The haptic texture is generated by the actuators 4, which are integrated into the display 1 using the techniques described above, i.e. either ‘electrovibration’ or the ‘squeeze film effect’.
In a third step, it is ensured that the virtual shape and texture can also be perceived continuously when the finger is sliding on the display surface 2. Hand tracking of the finger by the first sensor 6 or a separate second sensor 15 interprets the finger’s sliding speed and direction as a second gesture 12 and predicts the movement of the finger to a virtual fourth location 14 on the surface of the virtual object 3. Then, similarly to the second step described above, the 3D object rendering ensures that the virtual fourth location 14 is always aligned with a corresponding third location 13 on the display surface 2, where texture tactile feedback 5 is also activated so that the user can feel the shape and texture of the virtual object 3 throughout the entire sliding action, thereby ensuring continuous haptic immersion.
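The per-frame control flow of this third step might look roughly like the following sketch. The sensor, renderer and actuator calls are stand-in stubs, since the disclosure does not specify any particular software interface.

```python
import numpy as np

def sliding_update(screen_xy, object_surface_point_at, render_with_offset, drive_texture):
    """One frame of the sliding interaction (illustrative control flow only).

    screen_xy:               third location 13 reported by the touch sensor
    object_surface_point_at: callable (x, y) -> (x, y, z) giving the fourth
                             location 14 on the virtual object surface
    render_with_offset:      callable applying a rigid offset to the rendering
    drive_texture:           callable emitting tactile feedback at screen_xy
    """
    fourth = np.asarray(object_surface_point_at(*screen_xy), dtype=float)
    target = np.array([screen_xy[0], screen_xy[1], 0.0])
    render_with_offset(target - fourth)   # keep locations 13 and 14 aligned
    drive_texture(screen_xy)              # keep the finger's haptic feedback on

# Example with trivial stand-ins for the renderer and actuators.
sliding_update(
    (12.0, 5.0),
    object_surface_point_at=lambda x, y: (x, y, 4.0),
    render_with_offset=lambda off: print("render offset", off),
    drive_texture=lambda xy: print("texture on at", xy),
)
```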
A rendering algorithm for the virtual object 3 can further give the user visual feedback 16 in the form of visual interaction cues indicating that the finger touches the surface of the virtual object 3 at all times during the sliding action.
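One simple way to realise such a visual interaction cue is sketched below: object vertices near the current virtual contact point are flagged so the renderer can highlight them. The function name and cue radius are illustrative assumptions only.

```python
import numpy as np

def contact_highlight_mask(vertices: np.ndarray, contact: np.ndarray, radius: float) -> np.ndarray:
    """Boolean mask over object vertices lying within `radius` of the contact
    point, which a renderer could tint or brighten as the visual cue."""
    return np.linalg.norm(vertices - contact, axis=1) <= radius

# Example: three vertices, a 5 mm cue radius around the contact point.
verts = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
print(contact_highlight_mask(verts, np.array([0.0, 0.0, 0.0]), 5.0))  # [ True  True False]
```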
The various aspects and implementations have been described in conjunction with various embodiments herein. However, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject-matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
The reference signs used in the claims shall not be construed as limiting the scope.
Claims (20)
- A system for haptic interaction with visually presented data, the system comprising: a stereoscopic display (1) comprising a display surface (2) configured for visual display of a three-dimensional virtual object (3) ; actuators (4) arranged in the stereoscopic display (1) for generating tactile feedback (5) on the display surface (2) ; a first sensor (6) configured to detect a first gesture (7) and transmit a first gesture signal (8) associated with the first gesture (7) ; and a controller (9) configured for controlling the actuators (4) in response to the first gesture signal (8) .
- The system according to claim 1, wherein the actuators (4) are configured to generate tactile feedback (5) corresponding to the shape and texture of the virtual object (3) at a contact point determined based on a gesture signal.
- The system according to any one of claims 1 or 2, wherein the first gesture (7) corresponds to a movement of a finger, and wherein the controller (9) is configured to predict, based on the first gesture signal (8) , a first location (10) on the display surface (2) where the finger is expected to come into contact with the display surface (2) , and to generate, using the actuators (4) , tactile feedback (5) on the display surface (2) corresponding to the surface of the virtual object (3) at the first location (10) .
- The system according to claim 3, wherein the stereoscopic display (1) is configured for displaying the virtual object (3) in a stereoscopic space virtually extending beyond the plane of the display surface (2) in at least one direction; and wherein the controller (9) is configured to, based on the first gesture signal (8) , determine a second location (11) in the stereoscopic space as a virtual contact point between the finger and the virtual object (3) ; and wherein the controller (9) is further configured to control the stereoscopic display (1) to render the virtual object (3) in the stereoscopic space so that the second location (11) is aligned with the first location (10) in the plane of the display surface (2) .
- The system according to any one of claims 3 or 4, wherein the system is further configured to detect a second gesture (12) corresponding to a movement of a finger to a third location (13) on the display surface (2) ; and wherein the controller (9) is configured to generate, using the actuators (4) , and based on the second gesture (12) , tactile feedback (5) on the display surface (2) corresponding to the surface of the virtual object (3) at the third location (13) .
- The system according to claim 5, wherein the controller (9) is configured to, based on the second gesture (12) , determine a fourth location (14) in the stereoscopic space as a virtual contact point between the finger and the virtual object (3) ; and wherein the controller (9) is further configured to control the stereoscopic display (1) to render the virtual object (3) in the stereoscopic space so that the fourth location (14) is aligned with the third location (13) in the plane of the display surface (2) .
- The system according to any one of claims 5 or 6, wherein the second gesture (12) corresponds to a continuous movement of a finger from the first location (10) to the third location (13) on the display surface (2) ; and wherein the controller (9) is configured to generate, using the actuators (4) , and based on the second gesture (12) , continuous tactile feedback (5) along the path from the first location (10) to the third location (13) on the display surface (2) to ensure that the finger receives uninterrupted haptic feedback while being in contact with the virtual object (3) through the display surface (2) .
- The system according to any one of claims 5 to 7, wherein the first sensor (6) is configured to detect both the first gesture (7) and the second gesture (12) .
- The system according to any one of claims 5 to 7, wherein the system comprises a second sensor (15) configured to detect the second gesture (12) , the second sensor (15) preferably arranged as a touch sensor in the stereoscopic display (1) configured to detect touch or near proximity between a finger and the display surface (2) .
- The system according to any one of the preceding claims, wherein the controller (9) is configured to control the stereoscopic display (1) to render visual feedback (16) on the virtual object (3) surface corresponding to a determined virtual contact point between the finger and the virtual object (3) in the stereoscopic space.
- The system according to any one of the preceding claims, wherein the stereoscopic display (1) is an autostereoscopic display (1) configured for visual display of a three-dimensional virtual object (3) without the need for special headgear, glasses or any wearable device, using lenticular lens, parallax barrier, volumetric display, holographic or light field technology, wherein the stereoscopic display (1) is preferably a 3D light field stereoscopic display (1) .
- The system according to any one of the preceding claims, wherein the actuators (4) are configured to generate the tactile feedback (5) using electrovibration, squeeze film effect, or any similar haptic rendering solution suitable for generating tactile feedback (5) locally on a display surface (2) .
- The system according to claim 12, wherein the actuators (4) are configured to generate tactile feedback (5) locally on the display surface (2) at respective locations, the tactile feedback (5) corresponding to the surface of the virtual object (3) at the respective locations.
- The system according to any one of the preceding claims, wherein the first sensor (6) is a sensor or system of sensors configured to track and interpret hand gestures, more preferably a sensor or system of sensors configured to detect location and movement of at least one finger, such as a camera, depth sensor, time-of-flight camera, structured light scanner, or mm-wave radar.
- A method for haptic interaction with visually presented data, the method comprising: displaying a three-dimensional virtual object (3) on a display surface (2) of a stereoscopic display (1) comprising actuators (4) arranged therein for generating tactile feedback (5) on the display surface (2) ; detecting a first gesture (7) using a first sensor (6) and transmitting a first gesture signal (8) associated with the first gesture (7) to a controller (9) ; and controlling the actuators (4) by the controller (9) to generate tactile feedback (5) on the display surface (2) corresponding to the shape and texture of the virtual object (3) at a contact point determined based on a gesture signal.
- The method according to claim 15, wherein the stereoscopic display (1) is configured for displaying the virtual object (3) in a stereoscopic space virtually extending beyond the plane of the display surface (2) in at least one direction, and wherein the first gesture (7) corresponds to a movement of a finger, the method further comprising: predicting, based on the first gesture signal (8) , a first location (10) on the display surface (2) where the finger is expected to come into contact with the display surface (2) ; determining, based on the first gesture signal (8) , a second location (11) in the stereoscopic space as a virtual contact point between the finger and the virtual object (3) ; and rendering the virtual object (3) in the stereoscopic space on the stereoscopic display (1) so that the second location (11) is aligned with the first location (10) in the plane of the display surface (2) .
- The method according to claim 16, the method further comprising: detecting a second gesture (12) corresponding to a movement of a finger to a third location (13) on the display surface (2) ; and generating, using the actuators (4) , and based on the second gesture (12) , tactile feedback (5) on the display surface (2) corresponding to the surface of the virtual object (3) at the third location (13) .
- The method according to claim 17, the method further comprising: determining, based on the second gesture (12) , a fourth location (14) in the stereoscopic space as a virtual contact point between the finger and the virtual object (3) ; and rendering the virtual object (3) in the stereoscopic space on the stereoscopic display (1) so that the fourth location (14) is aligned with the third location (13) in the plane of the display surface (2) .
- The method according to any one of claims 17 or 18, wherein the second gesture (12) corresponds to a continuous movement of a finger from the first location (10) to the third location (13) on the display surface (2) , the method further comprising: generating, using the actuators (4) , and based on the second gesture (12) , continuous tactile feedback (5) along the path from the first location (10) to the third location (13) on the display surface (2) to ensure that the finger receives uninterrupted haptic feedback while being in contact with the virtual object (3) through the display surface (2) .
- The method according to any one of claims 15 to 19, further comprising rendering visual feedback (16) on the virtual object (3) surface corresponding to a determined virtual contact point between the finger and the virtual object (3) in the stereoscopic space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/109510 WO2024026638A1 (en) | 2022-08-01 | 2022-08-01 | Haptic interaction with 3d object on naked eye 3d display |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/109510 WO2024026638A1 (en) | 2022-08-01 | 2022-08-01 | Haptic interaction with 3d object on naked eye 3d display |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024026638A1 true WO2024026638A1 (en) | 2024-02-08 |
Family
ID=89848288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/109510 WO2024026638A1 (en) | 2022-08-01 | 2022-08-01 | Haptic interaction with 3d object on naked eye 3d display |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024026638A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102508546A (en) * | 2011-10-31 | 2012-06-20 | 冠捷显示科技(厦门)有限公司 | Three-dimensional (3D) virtual projection and virtual touch user interface and achieving method |
WO2012128399A1 (en) * | 2011-03-21 | 2012-09-27 | Lg Electronics Inc. | Display device and method of controlling the same |
CN102736728A (en) * | 2011-04-11 | 2012-10-17 | 宏碁股份有限公司 | Control method and system for three-dimensional virtual object and processing device for three-dimensional virtual object |
US8294557B1 (en) * | 2009-06-09 | 2012-10-23 | University Of Ottawa | Synchronous interpersonal haptic communication system |
US20180246572A1 (en) * | 2017-02-24 | 2018-08-30 | Immersion Corporation | Systems and Methods for Virtual Affective Touch |
CN109343716A (en) * | 2018-11-16 | 2019-02-15 | Oppo广东移动通信有限公司 | A kind of image display method and apparatus, computer readable storage medium |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8294557B1 (en) * | 2009-06-09 | 2012-10-23 | University Of Ottawa | Synchronous interpersonal haptic communication system |
WO2012128399A1 (en) * | 2011-03-21 | 2012-09-27 | Lg Electronics Inc. | Display device and method of controlling the same |
CN102736728A (en) * | 2011-04-11 | 2012-10-17 | 宏碁股份有限公司 | Control method and system for three-dimensional virtual object and processing device for three-dimensional virtual object |
CN102508546A (en) * | 2011-10-31 | 2012-06-20 | 冠捷显示科技(厦门)有限公司 | Three-dimensional (3D) virtual projection and virtual touch user interface and achieving method |
US20180246572A1 (en) * | 2017-02-24 | 2018-08-30 | Immersion Corporation | Systems and Methods for Virtual Affective Touch |
CN109343716A (en) * | 2018-11-16 | 2019-02-15 | Oppo广东移动通信有限公司 | A kind of image display method and apparatus, computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7540112B2 (en) | Visual halo around the periphery of the visual field | |
Makino et al. | HaptoClone (Haptic-Optical Clone) for Mutual Tele-Environment by Real-time 3D Image Transfer with Midair Force Feedback. | |
EP3436863B1 (en) | Interactions with 3d virtual objects using poses and multiple-dof controllers | |
US8866739B2 (en) | Display device, image display system, and image display method | |
US20200409529A1 (en) | Touch-free gesture recognition system and method | |
EP2638461B1 (en) | Apparatus and method for user input for controlling displayed information | |
US20100128112A1 (en) | Immersive display system for interacting with three-dimensional content | |
US20170150108A1 (en) | Autostereoscopic Virtual Reality Platform | |
WO2022012194A1 (en) | Interaction method and apparatus, display device, and storage medium | |
WO2019001745A1 (en) | System and method for interacting with a user via a mirror | |
Mihelj et al. | Introduction to virtual reality | |
US11194439B2 (en) | Methods, apparatus, systems, computer programs for enabling mediated reality | |
WO2024026638A1 (en) | Haptic interaction with 3d object on naked eye 3d display | |
Argelaguet et al. | Visual feedback techniques for virtual pointing on stereoscopic displays | |
CN115380236A (en) | Content movement and interaction using a single controller | |
US11682162B1 (en) | Nested stereoscopic projections | |
JP2017078890A (en) | Program for three-dimensionally displaying virtual reality space, computer therefor and head-mounted display device therefor | |
JP2024159778A (en) | Visual halo around the periphery of the visual field | |
KR20240092971A (en) | Display device for glasses-free stereoscopic image with function of air touch | |
Yuan et al. | How do users select stereoscopic 3D content? | |
Joachimiak et al. | View Synthesis with Kinect-Based Tracking for Motion Parallax Depth Cue on a 2D Display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22953453 Country of ref document: EP Kind code of ref document: A1 |