DOI: 10.1145/3613904.3642069 · CHI Conference Proceedings · Research Article · Open Access

InflatableBots: Inflatable Shape-Changing Mobile Robots for Large-Scale Encountered-Type Haptics in VR

Published: 11 May 2024

Abstract

We introduce InflatableBots, shape-changing inflatable robots for large-scale encountered-type haptics in VR. Unlike traditional inflatable shape displays, which are immobile and limited in interaction area, our approach combines mobile robots with fan-based inflatable structures. This enables safe, scalable, and deployable haptic interactions at a large scale. We developed three coordinated inflatable mobile robots, each consisting of an omni-directional mobile base and a reel-based inflatable structure. Each robot can rapidly change its height and position at the same time (horizontal: 58.5 cm/sec, vertical: 10.4 cm/sec, from 40 cm to 200 cm), which allows quick and dynamic haptic rendering of multiple touch points to simulate various body-scale objects and surfaces in real time across large spaces (3.5 m x 2.5 m). We evaluated our system with a user study (N = 12), which confirmed its unique advantages in safety, deployability, and large-scale interactability, and showed that it significantly improves realism in VR experiences.
Figure 1:
Figure 1: InflatableBots combines mobile robots and shape-changing inflatables for large-scale VR haptics. InflatableBots can render multiple and continuous touch points by smoothly changing its height and position.

1 Introduction

Large-scale haptics have significant potential for fully immersive VR experiences [6, 9, 54, 71]. In contrast to today’s small-scale haptic interfaces that can only simulate handheld-size objects [10, 34, 62], large-scale encountered-type haptics [35] enable the user to engage with haptically rendered VR environments through whole-body interactions, such as walking around [9, 31] and leaning against virtual objects [54, 71], as if they were interacting with them in the physical world.
Inflatable-based haptics [62, 63], in particular, have emerged as a promising approach to achieving large-scale encountered-type haptics, as they allow for safe, low-cost, and robust haptic interactions, which are essential for immersive body-scale user experiences. For example, inflatable shape displays, such as TilePoP [63] and LiftTiles [55], can dynamically render diverse shapes and surfaces that can be touched and interacted with using the entire body. Unlike mechanical structures, these inflatables reduce the risk of injuring the user or being damaged during intense full-body interactions.
However, existing large-scale inflatable displays have key limitations in scalability and deployability. Their inherently immobile form factor makes them difficult to deploy to various spaces and limits the interaction area to a fixed size. As the scale increases, the number of required modules increases substantially, requiring more complex pneumatic actuation and control mechanisms. Moreover, the display size and resolution are fairly limited, making it difficult to render smooth and continuous surfaces across a large interaction area.
In this paper, we present InflatableBots, a system that combines mobile robots and shape-changing inflatables for safe, deployable, and scalable VR haptics at a large scale (Figure 1). InflatableBots addresses the limitations of existing inflatable shape displays by integrating mobile robots with inflatables. We employ multiple shape-changing mobile robots to simulate various objects and surfaces by simultaneously changing the height and position of each robot. Thanks to their mobile form factor, the robots can render multiple and continuous touch points without restricting the interaction area (Figure 1). InflatableBots supports several haptic interactions: touching multiple stationary objects, touching multiple moving objects (Figure 1(a-b)), touching continuous surfaces (Figure 1(c)), touching shape-changing objects (Figure 1(d)), and interacting through handheld tools (Figure 8).
InflatableBots consists of a set of fast-moving omni-directional mobile robots (Nexus Robot 4WD Mecanum Wheel Robot 10011 [46]), which can move with a maximum speed of 58.5 cm/sec. These robots are equipped with custom reel-based inflatable structures, inspired by vine-based soft robots [20]. The inflatable structure is actuated with a portable fan, and its height is controlled with a motorized spool. This design ensures a mobile and compact form factor of 30 cm x 30 cm, while allowing real-time haptic interaction with fast and significant shape changes, transitioning from 40 cm to 200 cm at a rate of 10.4 cm/sec.
To evaluate how InflatableBots can create plausible haptic sensations, we conducted three types of user evaluations with 12 participants: 1) testing the realism of various textures of individual objects, 2) testing different-angled continuous surface rendering, and 3) open-ended application-based explorations. The study results confirm the benefits of our approach, including safe, fast, and large-scale haptics, compared to non-haptic conditions. Based on the study results, we discuss the potential future directions for large-scale inflatable haptics.
Finally, we make the following contributions:
(1)
The design, implementation, and interaction techniques of InflatableBots, a system that leverages omni-directional robots and reel-based inflatable structures for fast, robust, and safe VR haptics at a large scale.
(2)
Three types of user evaluations and an application showcase that demonstrate the benefits of InflatableBots.

2 Related Work

2.1 Large-Scale Haptics

Large-scale haptics typically provide haptic sensations by reconfiguring physical environments. Originally, room-scale haptics were explored through human-based actuation, where human volunteers manually reconfigure static props in a room (e.g., HapticTurk [8], TurkDeck [9]), but recent works have also explored robotic actuation that enables similar haptic experiences without human labor. Existing large-scale robotic actuation largely falls into the following two approaches. 1) Mobile robots + static props approach: The first approach uses single or multiple mobile robots to dynamically move existing haptic props to create a reconfigurable environment. For example, prior research uses mobile robots to dynamically move passive haptic props, as in RoomShift [54], ZoomWalls [71], CircularFloor [28], MoveVR [68], and PhyShare [21]. These robots reconfigure physical environments and props, providing large-scale haptic sensations such as a wall, table, or chair for room-scale haptic experiences. Recently, drones have also been explored as a way to move proxy objects in mid-air for encountered-type haptic experiences, such as Beyond the Force [1] and VR Haptic Drones [25]. 2) Shape-changing approach: The second approach uses shape-changing haptic props, enabled by large-scale inflatable displays or reconfigurable environments, to create dynamic haptic feedback for body-scale interaction. For example, body-scale shape displays, such as TilePoP [63], LiftTiles [55], and Elevate [31], dynamically transform the environment to provide a haptic proxy. Similarly, actuated environments like CoVR [6] and Haptic Go Round [26] allow similar experiences through reconfigurable spaces. Both approaches have their advantages and limitations. For instance, mobile robots offer a wide interaction area with easy deployment, but are limited in terms of the shapes they can create, as they can only move the positions of static props. Shape-changing displays, on the other hand, offer general-purpose haptic interfaces with various shapes, but are often limited by their immobility, scalability, and fixed interaction area.
We explore the combination of these two approaches for large-scale haptics. The idea of shape-changing mobile robots was previously and partially explored in RoomShift [54], but that work did not explicitly investigate shape-changing haptic props, as it used the shape-changing module only to lift static furniture rather than to change the haptic prop itself. Also, compared to mechanical actuation, inflatable structures have great potential for safe, low-cost, and robust body-scale interaction.

2.2 Encountered-Type Haptics

Researchers have also explored various haptic devices and approaches. One such approach, passive haptics [24, 27], employs physical objects or environments as haptic props. This can be for hand-held objects, as seen in Annexing Reality [23] and Haptic Retargeting [3], or for entire physical spaces, as demonstrated by Redirected Walking [45], Substitutional Reality [50], and VR Haptics at Home [12]. In contrast, active haptics have been explored for hand-held devices, such as NormalTouch [4] and HapticPivot [33], as well as for on-body interfaces like LevelUp [49]. However, these hand-held or wearable devices often fall short in delivering a truly world-grounded haptic sensation of touch and push.
To bridge this gap, encountered-type haptics [35] have been introduced, which provide haptic sensations by dynamically aligning the touchpoint with the virtual object’s position upon user contact. Various strategies have been proposed for delivering encountered-type haptics, such as shape displays (e.g., shapeShift [52], Feelex [29], inForce [39], Steed et al. [53]), tabletop robots (e.g., REACH+ [15]), and robotic arms (e.g., Snake Charmer [2], VRRobot [66]). In particular, our work draws inspiration from the distributed encountered-type haptics presented by HapticBots [56]. We aim to expand upon their tabletop-scale distributed encountered-type haptics, transitioning to a larger scale by harnessing inflatable actuation.

2.3 Inflatable-Based Haptics

Inflatable and pneumatic-based actuation has emerged as one of the unique approaches to creating haptic sensations. For example, prior systems like PuPoP [62], ForceJacket [11], PneuMod [72], MovableBag [36], Push-Ups [67], ThermAirGlove [7], and PneumoVolley [17] leverage the inflatable actuation to provide on-body haptic sensations for various applications and use cases. Alternatively, researchers have utilized airflow as a means to provide force feedback, as seen in Thor’s Hammer [22], AeroPlane [30], JetController [69], and AirRacket [64]. Yet, to our knowledge, no research has explored combining mobile robots with inflatable-based haptics. This paper demonstrates that this unique combination enables a novel inflatable-based haptic interaction that has not been previously explored.

2.4 Shape-Changing Interfaces

Apart from VR haptic contexts, HCI researchers have explored various shape-changing user interfaces for everyday scenarios [44]. For example, large-scale shape-changing interfaces have been explored through ceiling-mounted or wall-based shape displays, as seen in the BMW Museum [13], Hyposurface [16], and MegaFaces [32]. Similar to our work, some works explore a modular approach to constructing dynamic furniture, such as Lift-bit [38], Tangible Pixels [61], and the Mechanical Ottoman [51]. Along this line, ShapeBots [57] introduces the concept of shape-changing swarm robots, and HERMITS [40] augments the robots with customizable mechanical add-ons to expand tangible interactions. Other works have also extended this line of work through large-scale shape-changing interfaces, including TransformTable [59], Shape-shifting Wall Displays [60], and WaddleWalls [41].
While most shape-changing interfaces leverage mechanical actuation, inflatable shape-changing interfaces leverage pneumatic actuation to create dynamic shape change. Examples include PneUI [70], aeroMorph [42], JamSheets [43], Printflatables [47], Swaminathan et al. [58], and Dynamic Buttons [19]. Similar to our work, Poimo [48] combines an inflatable structure and mobility for instant portable mobility. One key advantage of inflatables is their drastic transformation capability. For example, Vine Robots [5, 20] and the Pneumatic Reel Actuator [18] are highly extendable actuators for soft robots and shape-changing interfaces. Drawing inspiration from these works, our InflatableBots explores the potential of such designs in delivering haptic feedback, which we further demonstrate through prototyping and user evaluations in VR haptic contexts.

3 InflatableBots

InflatableBots is a modular robot system composed of mobile robots and inflatable shape-changing structures (shown in Figure 2). The attachable inflatable structures provide dynamic height-changing capabilities. We developed three shape-changing inflatable mobile robots that can coordinate in a large walkable space. This section describes the mechanical design of the hardware components, as well as the software system for controlling these robots.
Figure 2:
Figure 2: Mechanical design of an InflatableBots.

3.1 Inflatable Structure

3.1.1 Overview.

The design of our inflatable body structure is heavily influenced by Vine Robots [20]. The system employs a polyethylene membrane, which is stored in a spool at the center of the base. The height of the inflatable module is changed by a motorized spool, which controls the release and retraction of the polyethylene tube. This spool-centric design allows a substantial height change, spanning from 40 cm to 200 cm (shown in Figure 3), with a compact circular base of 25 cm in diameter. The fan-based inflation mechanism allows fast and dynamic shape change at a speed of 10.4 cm/sec.
Figure 3:
Figure 3: InflatableBots change the height from a minimum of 40 cm to a maximum of 200 cm at 10.4 cm/sec.

3.1.2 Reel-Based Inflatable Structures.

The inflatable component is primarily constructed from a vinyl sheet of 0.02 mm thickness. The fabrication process requires cutting the sheet to the specified length and subsequently heat-sealing it to form the tube. Upon inflation, the tube expands to a diameter of 25 cm. For our design, a tube length of 450 cm is used to achieve a height of 200 cm, given that the tube typically requires over twice its length to reach the intended height. One end of the tube is stored in a motorized spool, while the opposite end is securely anchored to a base plate. This approach ensures the inflatable structure’s durability and operational efficiency.

3.1.3 Motorized Spool and Base Plate.

The base plate is made from 7 mm cardboard and forms a circular mount for the inflatable base, along with space to attach two portable fans. The vinyl tube is attached to this circular mount, which has a diameter of 25 cm. The plate also has an alignment shaft, which prevents tangling of the spool when it is extended and enhances the overall robustness of the structure (see Figure 2 (c)). A key consideration is the diameter of the hole; if it is too small, the structure becomes unstable due to restricted airflow. The spool is controlled by a DC motor (Bringsmart JSMG028A12V80), which has 20 kg · cm torque and 80 RPM. This motor is controlled via a microcontroller (ESP32) with motor drivers (TB6612). The motorized spool is powered with a mobile battery (NiMH, 12 V, 1.8 Ah) (see Figure 4). Each module’s assembly takes about 30 minutes, with the cost per actuator being under 1,100 USD, including an omni-directional mobile robot.
Figure 4:
Figure 4: System schematics of the actuator electronics.

3.1.4 Mechanism of Height Control.

The height of each actuator is determined by the motorized spool, which is controlled by the main computer. The computer communicates wirelessly with a microcontroller. The control mechanism is an open-loop system based on the rotation time of the DC motor. The DC motor rotates at 80 RPM and feeds 10.4 cm of vinyl tube per second. Based on this, we track and control the current height. To control the motorized spool, the main computer dispatches commands to the microcontroller, prompting the spool to rotate for the desired height change.
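To make the open-loop timing concrete, the sketch below (in Python, with function and constant names of our own choosing rather than from the authors' firmware) converts a desired height change into a spool direction and a motor run time using the nominal 10.4 cm/sec feed rate.

```python
FEED_RATE_CM_PER_S = 10.4  # vinyl tube fed per second at 80 RPM (from the paper)
MIN_HEIGHT_CM, MAX_HEIGHT_CM = 40.0, 200.0

def motor_command(current_height_cm: float, target_height_cm: float):
    """Return (direction, run_time_s) for the spool motor.

    Open-loop: the run time is derived purely from the height difference
    and the nominal feed rate; no encoder feedback is used.
    """
    target = max(MIN_HEIGHT_CM, min(MAX_HEIGHT_CM, target_height_cm))
    delta = target - current_height_cm
    direction = "extend" if delta > 0 else "retract"
    run_time_s = abs(delta) / FEED_RATE_CM_PER_S
    return direction, run_time_s

# Example: raising the inflatable from 40 cm to 120 cm
# takes roughly (120 - 40) / 10.4 ≈ 7.7 seconds.
print(motor_command(40.0, 120.0))
```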

3.1.5 Rationale for Fan-Based Inflation.

Each robot has two portable fans (AINOUT 22 V high-output fans for work clothes). These fans produce a high airflow of 8.6 cm/sec, measured with a Sanwa Supply anemometer (CHE-WD1). While our inflatable design is independent of and compatible with both fan-based and air-pump actuation, our experiments highlighted some challenges associated with the latter approach. Mobile air pumps, despite their ability to provide more robust inflation, tend to be slower and less portable. For instance, when we experimented with a mobile air pump (Yasunaga AP-40P), it took a considerable amount of time to inflate the structure (e.g., 1 minute), which is not ideal for dynamic haptic interactions. On the other hand, powerful air compressors could offer a faster inflation time, but their larger size and high power requirements are not suitable for mobile form factors, requiring a tethered setup with a pneumatic tube connecting to the compressor. After evaluating the advantages and disadvantages, we chose fan-based actuation, prioritizing its fast transformation and compact design.

3.2 Mobile Robot Base

3.2.1 Omni-Directional Robots.

Our mobile robot base employs an omni-directional mobile robot (Nexus Robot 4WD Mecanum Wheel Robot 10011 [46]). Each robot measures 36 cm x 40 cm and can move seamlessly in any direction, thanks to its omni-directional wheels, at a maximum speed of 58.5 cm/sec. These robots are equipped with the reel-based inflatable structure and portable fans described above. Once assembled, the basic module occupies a horizontal footprint of 36 cm x 40 cm and stands a minimum of 40 cm tall.

3.2.2 Position Tracking.

Position tracking for each robot is achieved using the HTC Vive tracker 3.0, which is attached to a base plate. The entire interaction area is monitored by four external HTC Vive Base Station 2.0 lighthouses, capable of covering expansive areas measuring 3.5 m x 2.5 m (shown in Figure 5).

3.2.3 Path Planning.

Our system integrates efficient path planning with real-time body tracking, ensuring that the movements of all robots are synchronized with the user’s hand movements. By tracking and predicting potential touchpoints, we guide the robots to meet the user’s hands in a timely manner. Using the RVO algorithm [65], we determine the direction in which each robot should move during each time step. Given the omni-directional capabilities of our robot, it can navigate directly to its target position. Robot control commands are executed through another microcontroller (ESP32) on the robot, and the robot’s speed is adjusted via PID control.
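As a simplified illustration of this per-step control (omitting the RVO collision-avoidance term and reducing the PID loop to a single proportional gain; the names and gain value are ours, not the authors'), a velocity command toward a predicted touch point could be computed as follows:

```python
import math

MAX_SPEED_CM_S = 58.5  # maximum horizontal speed of the mobile base (from the paper)

def velocity_command(robot_xy, target_xy, kp=1.5):
    """Proportional velocity command toward a target on the floor plane.

    The omni-directional base can translate in any direction, so the
    command is simply a capped vector toward the target. The real system
    additionally runs RVO-based collision avoidance among robots and a
    full PID loop on wheel speed, both omitted in this sketch.
    """
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0)
    speed = min(kp * dist, MAX_SPEED_CM_S)
    return (speed * dx / dist, speed * dy / dist)

# Example: robot at (0, 0) cm heading toward a predicted touch point at (120, 80) cm.
print(velocity_command((0.0, 0.0), (120.0, 80.0)))
```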

3.3 VR System

3.3.1 VR Scene.

Our VR scenes are created and rendered using Unity (version 2021.3.10f1) and SteamVR (version 1.27.5). The main computer, a Windows machine equipped with an Intel Core i7-8086K CPU, an NVIDIA GeForce RTX 3060 GPU, and 32 GB of RAM, runs Unity and streams the content to a VR headset (HTC VIVE Pro). For applications, we also use pre-defined 3D models from several Unity assets, such as the Toon Farm Pack.
Figure 5:
Figure 5: System setup of InflatableBots.

3.3.2 Communication and Control.

Unity also communicates with the ESP32 microcontrollers through the Bluetooth protocol. Our software first receives each robot’s tracking data through its HTC Vive tracker 3.0 and then visualizes the robot’s position in the Unity scene (see Figure 5). The software simultaneously stores the current height of each inflatable structure, rendering it as a semi-transparent cube on top of the robot. The VR scene is rendered on the HTC VIVE Pro via a connected cable. When the user’s position or hand moves, the system prompts the robots to dynamically adjust their positions and heights accordingly. To do this, the system first measures the height of the virtual object that the user encounters and then actuates the inflatable to the target height.
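The paper does not specify the wire format of these Bluetooth commands; purely as a hypothetical sketch of the kind of per-robot update that could be streamed each cycle (a robot ID, a base velocity, and a target height), one might pack and unpack messages like this:

```python
import struct

def pack_robot_command(robot_id: int, vx_cm_s: float, vy_cm_s: float,
                       target_height_cm: float) -> bytes:
    """Pack one control message for a robot's microcontroller.

    The byte layout (1 unsigned byte + 3 little-endian floats) is a
    hypothetical example, not the authors' actual protocol; it only
    illustrates that each update carries a base velocity and a target
    inflatable height.
    """
    return struct.pack("<Bfff", robot_id, vx_cm_s, vy_cm_s, target_height_cm)

def unpack_robot_command(payload: bytes):
    """Inverse of pack_robot_command, as the microcontroller side would decode it."""
    robot_id, vx, vy, h = struct.unpack("<Bfff", payload)
    return {"id": robot_id, "vx": vx, "vy": vy, "height": h}

msg = pack_robot_command(2, 30.0, -12.5, 150.0)
print(unpack_robot_command(msg))
```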

3.3.3 Approximating a Virtual Surface.

To approximate the current height of the surface, the system utilizes vertical ray casting. This technique gauges the height of the virtual contact point based on the robot’s position. Given a virtual entity or surface in Unity space, we cast a ray vertically from a robot’s position. The distance between the ray’s origin and its intersection point provides the estimation of the current height.
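The following sketch illustrates the same idea with an analytic sphere standing in for a Unity collider (in the actual system, a physics ray cast against the scene geometry plays this role; the function, scene, and numbers below are illustrative assumptions):

```python
import math

def surface_height_below(ray_origin_y, robot_x, robot_z, sphere_center, sphere_radius):
    """Estimate the surface height at a robot's (x, z) position via a vertical ray cast.

    A downward ray from (robot_x, ray_origin_y, robot_z) is intersected with a
    virtual sphere (e.g., the continuous sphere surface of Figure 7). The
    returned value is the world-space height of the first hit, which becomes
    the target height of the inflatable.
    """
    cx, cy, cz = sphere_center
    # Horizontal distance from the ray to the sphere's vertical axis.
    d_xz = math.hypot(robot_x - cx, robot_z - cz)
    if d_xz > sphere_radius:
        return None  # the ray misses the sphere: no contact point at this position
    # Topmost intersection of the vertical line with the sphere.
    hit_y = cy + math.sqrt(sphere_radius ** 2 - d_xz ** 2)
    if hit_y > ray_origin_y:
        return None  # the hit would be above the ray origin
    return hit_y

# Example: a sphere of radius 0.8 m centered 1.0 m above the floor,
# probed at a robot position 0.36 m away from the sphere's axis.
print(surface_height_below(3.0, 0.3, 0.2, (0.0, 1.0, 0.0), 0.8))
```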

3.3.4 Hand Tracking and Target Assignment.

We define the interaction area (3.5 m x 2.5 m) through SteamVR’s room setup. Within this area, the system obtains target heights for each object and surface in the scene. The system tracks the user’s hand with an HTC Vive tracker 3.0. As the user’s hand moves in 3D space, the robots reposition themselves within this target zone. If there are more identified targets than available robots, the system optimizes robot placement based on proximity, as sketched below. When multiple robots are available, they can collaboratively cover an extensive zone.
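A minimal sketch of such proximity-based assignment (a greedy nearest-target matching, which is our simplification rather than the authors' exact optimization) could look like this:

```python
import math

def assign_robots_to_targets(robot_positions, target_positions):
    """Greedy proximity-based assignment of robots to candidate touch targets.

    When there are more targets than robots, each robot is matched to the
    nearest still-unassigned target, so the closest contact points are
    covered first. This is a simplified stand-in for the paper's
    proximity-based placement optimization.
    """
    remaining = list(range(len(target_positions)))
    assignment = {}
    for r, (rx, ry) in enumerate(robot_positions):
        if not remaining:
            break
        best = min(remaining,
                   key=lambda t: math.hypot(target_positions[t][0] - rx,
                                            target_positions[t][1] - ry))
        assignment[r] = best
        remaining.remove(best)
    return assignment  # {robot index: target index}

robots = [(0.5, 0.5), (2.0, 1.5), (3.0, 0.5)]                # three robots (m)
targets = [(0.6, 1.0), (2.5, 2.0), (3.2, 0.4), (1.5, 2.2)]   # four candidate touch points
print(assign_robots_to_targets(robots, targets))
```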

3.4 Technical Evaluation

We conducted a technical evaluation of InflatableBots to assess its capabilities. Our assessment criteria include: 1) the speed of both the robots and the inflatables, 2) the precision of the inflatable control using open-loop control, 3) the accuracy in determining the robot’s position and orientation given our tracking and control mechanisms, 4) the pushing force the robot can exert, 5) the latency experienced in the control loop, and 6) the reliability of the tracking system.

3.4.1 Method.

The method for measuring each criterion is as follows: 1) Robot and Inflatable Speed: We determined the speed of the robot and inflatables by analyzing video footage of the movement alongside a ruler for reference. 2) Inflatable Extension Accuracy: We gauged the accuracy of the inflatable’s extension by activating the motor for ten distinct durations. For each duration, we measured its height in the same way as for the first criterion. 3) Position and Orientation Precision: To assess the accuracy in position and orientation, we documented the deviations in distance and angle between the robot’s actual position and its intended target. We report the average errors over ten trials. 4) Pushing Force Measurement: We used an Imada dual-range force sensor (with a precision of 0.001 N) to measure force. Specifically, we recorded the peak impact force exerted against the sensor when the InflatableBots was propelled over a 10 cm distance. 5) Latency Measurement: We evaluated the latency at each stage of the process, including Bluetooth communication and path planning, by comparing timestamps from the initiation of each event to its conclusion. We repeated this process ten times. 6) Tracking Robustness: To gauge the robustness of our tracking system, we measured the rate at which tracking was lost within a 1-minute period.

3.4.2 Results.

Table 1 shows all of the technical measurements for each criterion. The table provides a summary of the results for metrics (1) through (6). Regarding the fifth metric, the average latencies recorded were 1.4 ms for Bluetooth communication and 1.0 ms for path-planning computation; in total, the control loop’s latency measures 111 ms (the maximum frame rate in Unity with the HTC VIVE Pro is around 90 fps). Lastly, for the sixth metric, over 10 trials, the system lost tracking approximately once every 0.27 seconds.
Table 1:
1) Maximum speed of horizontal movement: 58.5 cm/sec
1) Maximum speed of vertical linear actuator: 10.4 cm/sec
2) Average vertical linear actuator error: 7.4 cm
3) Average position error: 1.0 cm
3) Average rotation error: 0.7 deg
4) Maximum pushing force (40 cm): 1.676 N
4) Maximum pushing force (100 cm): 1.221 N
4) Maximum pushing force (150 cm): 1.045 N
4) Maximum pushing force (200 cm): 1.007 N
5) Latency of the system: 111 ms
6) Lost tracking: 4.07 %
Table 1: Results of the technical evaluation

4 Application Scenarios and Interaction Techniques

Figure 6:
Figure 6: Interaction techniques with InflatableBots, such as large-scale stationary objects, multiple moving objects, continuous surfaces, shape-changing objects, and interactions with handheld tools.

4.1 Unique Benefits and Functionalities

This section highlights the unique applications and interaction techniques enabled by the distinct advantages of InflatableBots. In particular, we discuss how our system enables unique and novel applications that are difficult to achieve with existing inflatable-based haptic systems like LiftTiles [55] and TilePoP [63]. Similar to existing systems, our system is designed for safety and durability, withstanding rigorous interactions like hits and pushes in applications such as tennis or fighting games. It also ensures user safety, which is crucial in sports simulations like boxing.
However, our system offers unique capabilities beyond these existing works, as follows:
(1)
Concurrent Locomotion and Height Alteration: Our inflatable robots differ from the static and fixed inflatables in prior systems by their mobility. This mobility allows simultaneous adjustments in height and position, which enables unique applications. For instance, in expansive spatial explorations like mazes, users can experience slopes through concurrent locomotion and height adjustment. The system can also mimic smooth, continuous surfaces such as a car body or a horse.
(2)
Expansive and Versatile Interaction Area: The mobility of both robots and users within the space facilitates system deployment and diverse interactions. For example, in sports simulations like boxing, users and robots can move around freely, enhancing the realism of the experience. The robots can also play defensive roles in virtual sports like soccer or basketball.
(3)
Rapid Transformation Speed: Our system employs a fan-based mechanism for fast shape transitions, enabling new interactive experiences and applications. For instance, it supports continuous surface rendering and rapid height adjustments crucial in activities like tennis, where the system adapts to the trajectory of the ball.
In contrast, due to its mobile nature and rapid transformation capabilities, the system sacrifices some structural stability and robustness. Unlike LiftTiles [55] and TilePoP [63], our fan-based inflatables cannot support human weight, preventing users from sitting or leaning on them. Despite these trade-offs, we believe our system introduces novel interaction techniques and application possibilities previously unexplored. In particular, we demonstrate these unique possibilities by leveraging 1) multiple shape-changing robots, 2) fast and safe shape transformations, 3) simultaneous changes in height and position, and 4) large-scale interaction areas for both users and robots. Based on these features, this section showcases interaction techniques and applications enabled by InflatableBots.

4.2 Interaction Techniques

In this section, we present several haptic interaction techniques enabled by InflatableBots (Figure 6). By leveraging both mobility and shape transformation capability, InflatableBots facilitates a range of unique and expressive haptic interactions.

4.2.1 Interacting with Multiple Stationary Objects.

One of the fundamental capabilities is rendering multiple stationary objects. For instance, users can haptically interact with trees and plants situated at different locations (Figure 8 (a)). As users navigate, the robots dynamically move to haptically represent these objects in real-time. By coordinating multiple robots, the system can reach each object in a timely manner.

4.2.2 Interacting with Moving Objects.

Another key interaction technique is to engage with moving virtual objects. Imagine virtual animals like dogs and pigs roaming a farm; users can touch and pet them (Figure 8 (b)). The coordinated efforts of multiple robots enable the haptic representation of several moving objects simultaneously.

4.2.3 Interacting with Continuous Surfaces.

InflatableBots can also haptically render large continuous surfaces by synchronizing vertical shape transformation with horizontal movements (see Figure 7). This allows users to touch large objects like cars and buildings (Figure 8 (c)). As a user slides their hand over a surface, the robot adjusts its position and height to simulate the object’s surface. Rapid shape transitions also let users feel surfaces with steep curves, such as the arched back of a horse.
Figure 7:
Figure 7: InflatableBots can render a continuous surface such as a sphere.

4.2.4 Interacting with Shape-Changing Object.

Another unique feature is rendering objects that change their shape in real-time. For instance, when a user touches a virtual sleeping bear, they can sense its rhythmic breathing. The system can also simulate objects like a window being raised or lowered: when the user opens the virtual window, the robot represents the moving window by changing its height (Figure 8 (d)).

4.2.5 Interacting with Objects through Handheld Tools.

Finally, by leveraging its durability, InflatableBots also supports interaction through handheld tools, as it can withstand intense interactions like hitting and knocking. For example, using a variety of tools, from sticks and swords to hammers, users can strike an object with a hammer, engage in a whack-a-mole game, practice drumming with drumsticks (Figure 8 (e)), or hit an enemy with a sword. The system remains resilient even when subjected to forceful hits.
Figure 8:
Figure 8: Applications of InflatableBots. (a) Interacting with multiple stationary objects (plants in the forest). (b) Interacting with moving objects (dogs). (c) Interacting with continuous surface (car surface). (d) Interacting with shape-changing object (windows in the room). (e) Interacting with object through handheld tools (drumming with drumsticks).

4.3 Application Examples

Through the results and application explorations, we propose a set of application scenarios that leverage the unique functionalities of InflatableBots.

4.3.1 Sports.

The first exploration is full-body sports, which is a compelling application for room-scale VR setups. InflatableBots allows users to haptically interact with height-changing, moving objects (e.g., airborne elements). For example, InflatableBots’ height-changing and mobile capabilities can simulate a VR tennis game where the parabolas of tennis balls on different flight paths are haptically rendered. While the haptic sensation of touching the balls may not be perfect due to the shape and vinyl material (as discussed), such real-time, large-scale, and multi-point physical feedback can significantly improve the user’s experience, engagement, and body control. Furthermore, the inflatable structure additionally enables safe and robust interaction, such as supporting repeatedly striking balls with a racket (Figure 9) or punching sandbags (Figure 10). Such safe physical interaction is vital for VR users who are fully immersed in virtual sports fields, and the robust design makes the device easily reusable.
Figure 9:
Figure 9: Playing tennis: the user can attack spatially moving balls in the room-scale VR.
Figure 10:
Figure 10: Playing boxing: the user can punch safely by the soft body of InflatableBots.

4.3.2 Large-Scale Space Exploration.

The second exploration is rendering the 3D structure of a room by taking advantage of the tall inflatable structure of up to 200 cm. For a walkthrough in a dark room or maze, various types of room structures such as walls and slopes can be haptically rendered (Figure 11), helping the user’s spatial understanding. Unlike ZoomWalls [71], InflatableBots can simulate handrails of slopes and higher or lower walls.
Figure 11:
Figure 11: Walking in the dungeon: the InflatableBots can render the various types of walls such as high or slanted.

4.3.3 Tool Station.

Utilizing the flexibility of height changing and the inflatable top’s shape versatility, InflatableBots can be used as a physical stand for various tools. For example, just like the robotic ergonomic assistant [14] for room-scale VR, users can install a physical canvas on InflatableBots, which allows them to keep their hands at a comfortable height (Figure 12) while enhancing accuracy in illustration activities in room-scale VR. Various tools can be considered, such as cameras, notebooks, auditory equipment, and so on. By placing a bar or plate across two InflatableBots, more complex shapes can be rendered, such as a hurdle or a limbo bar. Furthermore, while an additional force-sensing mechanism would be required, the inflatable structure itself could effectively be used as a full-body input device (e.g., a joystick).
Figure 12:
Figure 12: Sketching: the InflatableBots support light object like a canvas.

5 User Study

5.1 Aim

We conducted a user study to understand the user experience with InflatableBots. Among the many factors of InflatableBots, the goal of this study was to test fundamental questions regarding InflatableBots’ haptic rendering capability and its system operation in room-scale VR experiences.

5.2 Method

The user study was designed to evaluate InflatableBots’ (1) haptic rendering capability, (2) geometric rendering capability, and (3) user experience through three designated tasks. The first two tasks were inspired by the user study of a previous distributed haptic device [56] offering these two types of rendering capabilities at a smaller, desktop scale, allowing us to understand how our unique structure supports larger-scale encountered-type haptics. Task 3 was designed to evaluate the room-scale haptic experience with InflatableBots. For all tasks, we gathered participants’ subjective assessments with a questionnaire and a semi-structured interview. The study design was officially approved by our university’s ethics committee.
Figure 13:
Figure 13: Experimental space and apparatus

5.2.1 Participants.

We recruited 12 participants (age: 20-24 years old, 4 females and 8 males) from our university who have experience with VR headsets. The total duration of the study was about two hours per participant. They received payment of about 20 USD for their participation according to the university’s regulations.

5.2.2 Apparatus.

We used Unity scripts to create and render our experimental VR world and an HTC VIVE system to track an HMD, hand-held controllers, and trackers affixed to the moving robots. Figure 13 shows the 2.5 m x 3.5 m tracking area, the HMD, and the InflatableBots prototype with two trackers. The rendered virtual world was the same size as the tracking area. For safety reasons, besides the experimenter, an assistant constantly monitored the participant to ensure no collisions with the InflatableBots, walls, or other objects, and was prepared to press the emergency stop button if necessary (although this never occurred).
Due to the sensors’ potential limitations, there might be a positional discrepancy between the visual content and tactile stimulus rendered with the InflatableBots. Participants were instructed to point out if they felt any inconsistency so that the experimenter could correct it by manually adjusting the robot’s parameters to avoid any misalignment issues significantly affecting the reality assessment.

5.3 Task 1: Reality of Haptic Rendering

5.3.1 Purpose and Design.

This task was designed to evaluate the reality of InflatableBots’ haptic rendering for different shapes, hardness, and surface textures of objects. We compared six types of objects under the following three haptic conditions: no physical object (mid-air), an actual object (ground truth), and InflatableBots. Figure 14 shows the six objects used in Task 1: a cushion, an exercise ball, a rough ball, a table, a stuffed bear, and a plant, all of which have soft (elastic) properties with different shapes, hardness, and surface textures.

5.3.2 Procedure.

When the VR world started in one of the three haptic conditions, one of the six virtual objects appeared in the center of the world. Participants were asked to touch a stationary red dot (Shown in Figure 14 (a)) on the virtual object with their fingers for 10 seconds. They then answered the question "How realistic was the experience?" displayed in the VR world on a Likert scale from 1 (not at all) to 7 (very much). Afterward, the next object was presented, and this process was repeated for all six objects with one trial for each object. The order of object presentation was randomized for each haptic condition. Next, participants performed the same procedure for the other two haptic conditions. The order of the haptic conditions was counterbalanced across participants. For the ground-truth and InflatableBots conditions, the real object or the InflatableBots’ cylinder top was placed exactly on the red dot in the virtual object. In contrast, no physical object was placed in the mid-air condition.

5.3.3 Results.

Figure 15 shows an overview of the results. The * mark indicates a significant difference as suggested by the Wilcoxon signed-rank test with Bonferroni correction. For all objects, the scores in both the InflatableBots and the ground-truth conditions were significantly higher than in the mid-air condition (p < 0.01). However, the ground-truth score was significantly higher than the InflatableBots score (p < 0.01). The average score of the mid-air condition was less than 2 for all objects, while that of the ground-truth condition exceeded 6. The InflatableBots condition had a different trend with a high variance for the different materials of the objects, as seen between cushion (5.2±1.1 SD) and plant (2.8±0.69 SD).

5.3.4 Discussion.

Looking at the mid-air vs. InflatableBots results, the presence of tactile feedback from the physically inflated upper part successfully improved the reality of the haptics compared to the mid-air interaction. In the interviews, participants also reported an overall increased sense of physicality. However, the reality score of InflatableBots was still significantly lower than the ground truth. These results suggest that the soft physical feedback of InflatableBots offers some advantages for haptic representation but does not reach a fully realistic touch experience. Factors specifically mentioned in the interviews were surface hardness (softness), shape, and texture. InflatableBots are soft and almost flat on top, and their texture is uniform thin vinyl. Therefore, objects with completely different materials, such as the table with a hard, flat surface, the rough ball with many surface irregularities, or the plant with complex leaf veins, cannot be fully rendered with InflatableBots. On the other hand, the cushion, which had similar hardness, shape, and surface texture to the virtual object, resulted in high reality. The stuffed animal also had a relatively good result; we think the reason might be its softness. Participants mentioned that the virtual model did not visually render the animal’s fur fibers, so the reality of touch remained high even with InflatableBots’ vinyl material (P10, P11). This suggests that InflatableBots may be more effective for cartoon-like smooth models than rough textures.
Figure 14:
Figure 14: Six objects of task 1 in (a) virtual space (b) physical space. The participant touches the red dot on each object in the virtual space.
Figure 15:
Figure 15: Result of task 1 (Haptic rendering capability) (**: p < 0.01, an x-mark shows each mean).

5.4 Task 2: Reality of 3D Surface Rendering

We evaluated how well the InflatableBots represents continuous surfaces at different angles in the air. Since the InflatableBots can simultaneously change both their horizontal position and vertical height, with multiple InflatableBots combined they can reproduce various inclined angles of life-size content, such as the curves of a car or the undulations of a map (as shown in Figure  7). However, it is unclear how the current InflatableBots prototype can correctly render such virtual continuous surfaces. To address this fundamental question, we decided to evaluate how well users perceive an inclined virtual surface in the air through touch. We prepared five inclination angles (0°, 15°, 30°, 50°, and 70°) under two haptic conditions: InflatableBots (Figure 16(b)) and an actual inclined board (ground-truth) (Figure 16(a)).

5.4.1 Procedure.

When the VR system started, a red point immediately appeared in the air (Figure  16(c)). Participants were asked to keep touching the point with their fingers. The red point continuously moved 50 cm from the initial location in a given incline angle for 5 seconds to represent the virtual inclined board (Figure  16(c)). For the InflatableBots condition, its top’s height and position follow the red point’s motion to physically represent the inclined board (Figure 16(b)). For the ground-truth condition, we used the simplest way by setting up a mock-up with physically inclined boards (Figure 16(a)).
After finishing one trial, participants answered two questions regarding the realism of the rendered 3D surface in the virtual world. The first question concerned the surface’s continuous rendering ability: “How much did you feel like you were touching a continuous surface?”, answered on a Likert scale from 1 (did not feel) to 7 (felt). The second concerned the surface’s incline rendering ability: “Which angle did you perceive while touching the surface?”. Several inclined board options were displayed in VR (see Figure 16(d)), and participants selected the one closest to what they experienced. After answering these questions, a subsequent trial was presented. Participants conducted one trial each for all five inclination angles. The inclination angles were presented in random order, and the two haptic conditions were counterbalanced among participants.
Figure 16:
Figure 16: A participant touching the virtual surface in task 2 at (a) the ground-truth and (b) InflatableBots conditions. The participant keeps touching (c) the red point shown in the virtual space, and sees (d) several inclined boards when selecting the closest one they experienced.

5.4.2 Results.

Figure 17 shows an overview of the results. The * mark indicates a significant difference based on the Wilcoxon signed-rank test.
- Continuity: As shown in Figure 17(a), for all inclination angles, the continuity score under the ground-truth condition was significantly higher than the InflatableBots condition (p < 0.01). Furthermore, the average score under the ground-truth condition was above 6 for all inclination angles; the InflatableBots’ average score was above 4 for all angles.
- Angle estimation: Only for the 30-degree inclination angle, the error was significantly larger under the InflatableBots condition than the ground-truth condition (Figure 17(b)). No significant difference was observed between the InflatableBots and Ground Truth conditions for other inclination angles.

5.4.3 Discussion.

According to the interviews, one reason that the InflatableBots condition scored lower than the ground truth was that the real slope has an incline on its upper surface, whereas the top of the InflatableBots is almost flat. Another reason was that no friction was felt when moving the hand along the InflatableBots, and the upward force (e.g., pushing up the hand) from the inflated vinyl surface was weak, causing the hand to sink into the vinyl. Furthermore, the tilt estimation error became negligible for extreme and clearly perceivable angle conditions (i.e., horizontal or near-vertical), which can be taken into account in designing applications.
Figure 17:
Figure 17: Result of task 2 (**: p < 0.01, *: p < 0.05, an x-mark shows each mean); (a) continuous score, (b) error degree of the tilt estimation.

5.5 Task 3: Application Experience

5.5.1 Purpose.

In this task, we evaluated the user experience by obtaining various user feedback through representative application scenarios with InflatableBots. For this purpose, we implemented an application with four experiences, all themed around a farm (nature) (as shown in Figure 18). All experiences used different features of the InflatableBots.

5.5.2 Experience Design.

Unlike Tasks 1 and 2, the InflatableBots maneuvered extensively around the room in this task. For safety considerations, their positions were visualized in the VR space. Furthermore, earplugs were used to reduce the noise from the cooling fans and the robot’s mecanum drive; for hygiene reasons, disposable earplugs were adopted. The first experience involves touching plants. This is an action in which one touches a stationary object, replicating Task 1. The second experience consists of repeatedly touching the entire body of a horse (e.g., from the lower back to the higher neck). As the participant’s hand moves, the InflatableBots moves to follow the hand, adjusting its height based on the height of the horse’s body part that the user wants to touch. This experience is an extension of Task 2 and simulates changing height along an incline. The third experience is touching the necks of two dogs, where two InflatableBots were used, allowing two dogs to be touched simultaneously. This allows us to examine how a multi-touch experience is perceived by users. The final experience involves using a controller instead of the highly sensitive bare hand to test how casual controller-based interaction is perceived. Here, when the controller touches a pile in VR, the pile is driven into the ground, lowering its height by 20 cm. Users can repeatedly strike the piles with the controller. Participants tried each experience once, and the order of the experiences was fixed. We did not prepare a mid-air or ground-truth condition, as such a direct comparison was already tested in Task 1. Rather, we focused on testing users’ impressions when interacting with multiple InflatableBots around them at real operating speeds.
Figure 18:
Figure 18: Each experience in task 3. (a) touching a plant. (b) touching the entire of a horse. (c) touching two dogs. (d) driving a pile.

5.5.3 Procedure.

Participants experienced the application in the order of plants, horse, dogs, and stakes.
After all of the experiences, participants answered survey questions regarding each experience in terms of "preference" and "how helpful InflatableBots is in understanding the content" on a 1 (did not like/not at all) to 7 (liked very much/very much) Likert scale, and "How loud was the sound" on a 1 (not at all) to 7 (very much) Likert scale, and gave their overall impressions of the application.

5.5.4 Results.

Figure 19(a) shows the scores for preference and Figure  19(b) shows how the InflatableBots aided the content understanding. The preference scores for plants, horses, dogs, and stakes were 3.7±1.1 SD, 5.3±1.1 SD, 5.5±1.0 SD, and 6.0±1.3 SD, respectively, with an average score exceeding 4 for all except plants. The scores for content understanding were 4.6±1.0 SD, 5.1±1.8 SD, 5.5±1.1 SD, and 5.8±1.3 SD for plants, horses, dogs, and stakes, respectively, with all experiences having an average score exceeding 4.6. The score for the sound being loud was 2.8±1.1 SD, with the average score being below 4.

5.5.5 Discussion.

Each experience:
<Plant> The tactile presentation of the InflatableBots for stationary objects like plants deepens content understanding just through the sensation of touch. However, as noted in Task 1, unless there is a match in hardness, shape, and texture, its effect is limited. In the interview, several participants mentioned that they felt a strong sense of unity with the InflatableBots and could not discern the independent sensations of leaves or branches. For a better experience, applications need to be designed with these elements in mind.
<Horse> The interaction of touching the horse showed that participants could continuously touch the entire horse at varying heights. This indicates that the InflatableBots can adapt to body-scale objects. However, the InflatableBots moved to follow the participant’s hand, and some pointed out a noticeable delay or waiting time for the robots to arrive at the hand position. Therefore, similar to existing encountered-type haptic devices, using faster robots or employing multiple InflatableBots on the same continuous surface to reduce waiting time might be necessary. The result of the next experience supports this consideration.
<Dogs> Two InflatableBots were used: one was already moving from the previous experience, and another was positioned near the second dog. Overall, we received positive feedback that participants could touch one dog and immediately touch the other without waiting time or noise, enhancing immersion. Some participants responded positively to touching both dogs simultaneously with both hands. This multi-robot approach can compensate for the disadvantages of single-robot operation and increase the possible interactions.
<Pile> Since the touch was mediated through the controller, the influence of different textures was mitigated, and some suggested that it felt more realistic than direct touch. The fact that the InflatableBots are soft and slightly indented when hit with the controller made it feel like actual hammering. However, some were concerned about damaging the InflatableBots by hitting too hard.
Sound score: Thanks to earplugs, the score for the noise from the omni-wheel robot, fans, and motor drive was below 3. Participants could focus on the application without extra anxiety. The use of noise-canceling headphones could also be effective and might further reduce awareness of noise by playing appropriate sounds or background music during the application and interactions.
Safety: In Task 3, compared to the other tasks, the robot moved more rapidly around the room. However, interviews revealed that 10 out of 12 participants did not feel any fear of the moving robot. One reason was the in-VR visualization of InflatableBots’ positions, which allowed participants to adjust their hands or body postures to maintain sufficient distance from the robots. However, some felt that always-available visualization made them feel as if they were touching the visualized robot rather than the virtual object, which could potentially reduce immersion (P11). Additionally, some participants mentioned that the vinyl part of the InflatableBots was soft, safe to touch, and wouldn’t hurt. Others noted that they were familiar with household robots such as Roombas and trusted they wouldn’t collide with them.
Figure 19:
Figure 19: Result of task 3 (an x-mark shows each mean); (a) preference score, (b) score for understanding the contents.

5.6 Summary of the User Studies

Overall, InflatableBots was sufficiently safe and functioned without posing any danger to participants or causing any unexpected accidents. Haptic rendering can be realistic for soft and smooth objects (Task 1). In the case of objects with different materials, the use of a controller might be recommended (Task 3). The inclined angle representation is still challenging in terms of haptics and the shape of the top (Task 2), but its movement was sufficiently smooth and the current structure might be suitable for rendering discrete touching of multiple steps.

6 Limitations and Future Work

6.1 Haptic Rendering

As the study results showed, we acknowledge that the current InflatableBots prototype is ineffective in rendering various types of haptic materials. We consider two future directions to improve upon this. The first is to address the shape and texture of the top by changing the physical properties of the top part. An example is using different types of vinyl or adding haptic accessories to the top while maintaining the reel-based stretchability. Additionally, a thin plate could be placed on the top. The plate can prevent users’ hands or controllers from getting buried in the vinyl, and such a flatter top surface could support natural friction when fingers continuously slide across virtual objects. In terms of simulating friction, inspired by a previous rotating haptic prop [37], we could use automatic yaw rotation of the inflatable structure to induce friction when the user moves while touching a virtual wall’s surface. Although it requires more sophisticated engineering and robust hardware, simulating large-scale detailed friction is an open-ended question and an important future direction given InflatableBots’ high motion freedom. The second is a more practical direction that applies InflatableBots to cases such as mild or soft touches (e.g., social touch with avatars in remote collaboration), overhead content haptics (e.g., touching flying balls in sports games, as in the condition of Figure 3 right), and progressively rising content haptics (e.g., understanding water level in flood visualization, as in the three-step sequence of Figure 3).
One participant pointed out that realism was degraded due to the different temperatures of the real object and the InflatableBots’ surface when touched (P12). Inspired by this, it would be interesting to install a cooler or heater around the air fan at the base plate to change the air temperature inside the cylinder and on its surface. This feature is feasible for our fan-based inflation mechanism with thin vinyl, and it might compensate for the material differences. A challenge is the time needed to change the surface temperature thoroughly, yet it might be worth exploring.
Furthermore, we acknowledge that our application-based exploration in the user study is still limited because we did not include a baseline condition in Task 3. Future work is needed to deepen the understanding of InflatableBots’ practical benefits, for example by conducting comparative studies with relevant baselines or testing more dynamic scenarios with multiple InflatableBots instances.

6.2 Stability

We mentioned that the soft inflatable structure would be helpful in some cases (e.g., soft touch), but a more stable structure is generally preferred to increase interaction possibilities, including touching lateral surfaces. As we tested, we could again consider using more rigid vinyl materials and an air-compressor system to create a more solid height-changing cylinder. Such stable inflatable props could support physical item delivery, full-body interactions (e.g., sitting and leaning), or force representation (e.g., [6, 68]) in VR. While we noticed the tradeoff between the stability and inflation time, two types of inflation mechanisms could be selectively used in the same swarm robotic operation system, which might extend the flexibility and practicability of the InflatableBots.

6.3 Multi-Robot Operations

We implemented three InflatableBots prototypes and used them in part of the user study. However, we did not demonstrate how they can be coordinated, nor did we examine their technical performance (e.g., delay vs. robot speed) as encountered-type haptic devices or swarm robotic systems. We now understand the fundamental benefits and challenges of single and paired InflatableBots. The next step is exploring operating systems to manage multiple robots. Previous efforts (e.g., [71]) reported a simulation study to figure out the setups necessary to achieve room-scale haptic representation in a just-in-time manner. While such knowledge can be partially applied, we identified two challenges to be addressed for InflatableBots. The first concerns the robot’s control mechanism; we used omnidirectional robots, which makes the travel time and motion path significantly different from previous setups with two-wheel robots (e.g., [54, 68, 71]). The second is to manage inconsistencies between the speed of horizontal positioning and that of height-changing motions. These technical improvements could contribute to the fields of distributed ETHD and room-scale swarm robotics.

6.4 Shape-Changing

Our inflatable cylinder is currently limited to 1-dimensional upward or downward motion. An additional extension to InflatableBots itself is adding rotational actuators to change the cylinder’s base angle, allowing for a tilting and height-changing prop. In this case, we can design more flexible haptic rendering possibilities, such as rendering a tilted surface (e.g., plant leaves, a car’s front window, and dog faces in our application). Another approach is leveraging combinations of multiple InflatableBots, so we could render variously shaped touch surfaces by reconfiguring multiple cylinders at the necessary positions, angles, and heights, as shown in Figure 8(d) and (e).

6.5 Implementation Improvements

Our prototype functions well, and our technical test highlighted InflatableBots’ highly practical vertical and horizontal actuation speed compared to prior works (e.g., [9, 63]), supporting natural walkable room-scale interactions. However, a set of minor improvements were identified while implementing and examining InflatableBots. We currently employ an open-loop height control system, which results in increased height positioning errors (See Table 1). Therefore, the next revision will involve adding a rotary sensor to achieve a feedback-loop system that can provide stable and accurate height control. We also encountered limitations due to the location and errors of the position sensor. Since the VIVE tracker is too heavy to be placed on the vinyl cylinder’s top, we had to use an open-loop system. Furthermore, the trackers are situated on the low base unit, which can be easily occluded, resulting in relatively high positional errors. To mitigate this issue, we could potentially utilize modern HMD’s sophisticated inside-out tracking system to improve the overall experience.
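If a rotary sensor were added as proposed, the open-loop timing could be replaced with a feedback loop along the lines of the sketch below; the encoder resolution, feed-per-revolution, and gains are illustrative assumptions, not measured values from the prototype.

```python
FEED_PER_REV_CM = 7.8   # illustrative assumption: tube fed per spool revolution
COUNTS_PER_REV = 360    # illustrative assumption: encoder resolution

def closed_loop_height_step(encoder_count, target_height_cm,
                            base_height_cm=40.0, kp=0.8, deadband_cm=1.0):
    """One control step of a feedback height loop using a spool encoder.

    The current height is reconstructed from accumulated encoder counts
    instead of open-loop motor timing, and a proportional term drives the
    motor until the error falls inside a small deadband. Constants above
    are placeholders, not measured values from the prototype.
    """
    current_height = base_height_cm + (encoder_count / COUNTS_PER_REV) * FEED_PER_REV_CM
    error = target_height_cm - current_height
    if abs(error) < deadband_cm:
        return 0.0, current_height                 # close enough: stop the motor
    duty = max(-1.0, min(1.0, kp * error / 10.0))  # normalized motor duty in [-1, 1]
    return duty, current_height

# Example: 500 encoder counts accumulated, aiming for a 120 cm target height.
print(closed_loop_height_step(500, 120.0))
```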

7 Conclusion

In this paper, we presented InflatableBots, shape-changing inflatable robots designed to enable large-scale encountered-type haptic experiences in VR. Unlike conventional inflatable shape displays, which are stationary and confine the interaction area, our approach combines the mobility of robots with the versatility of fan-driven inflatable structures, enabling safe, scalable, and easily deployable haptic interactions. Our design comprises three coordinated inflatable mobile robots, each equipped with an omni-directional mobile base and a reel-based inflatable structure. This design allows each robot to swiftly adjust both its position and height (horizontal speed: 58.5 cm/sec, vertical speed: 10.4 cm/sec, with a height range of 40 cm to 200 cm), so the system can dynamically render haptic feedback that simulates a diverse range of body-scale objects and surfaces in real time over a large area (3.5 m x 2.5 m). Through a user study with 12 participants and a set of application implementations, we demonstrated the benefits of InflatableBots in terms of safe and deployable large-scale haptic interaction, which together substantially improve the realism and immersion of VR experiences.

Acknowledgments

This work was supported in part by JSPS Kakenhi (19KK0258, 20K21799, 21H03473) and the Cooperative Research Project of the Research Institute of Electrical Communication, Tohoku University.

Supplemental Material

MP4 File - Video Preview
MP4 File - Video Presentation
MP4 File - Video Figure: This is the video about InflatableBots.

References

[1]
Parastoo Abtahi, Benoit Landry, Jackie Yang, Marco Pavone, Sean Follmer, and James A Landay. 2019. Beyond the force: Using quadcopters to appropriate objects and the environment for haptics in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 1–13.
[2]
Bruno Araujo, Ricardo Jota, Varun Perumal, Jia Xian Yao, Karan Singh, and Daniel Wigdor. 2016. Snake Charmer: Physically Enabling Virtual Objects. In Proceedings of the TEI’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 218–226.
[3]
Mahdi Azmandian, Mark Hancock, Hrvoje Benko, Eyal Ofek, and Andrew D Wilson. 2016. Haptic retargeting: Dynamic repurposing of passive haptics for enhanced virtual reality experiences. In Proceedings of the 2016 CHI conference on Human Factors in Computing Systems. ACM, 1968–1979.
[4]
Hrvoje Benko, Christian Holz, Mike Sinclair, and Eyal Ofek. 2016. Normaltouch and texturetouch: High-fidelity 3d haptic shape rendering on handheld virtual reality controllers. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 717–728.
[5]
Laura H Blumenschein, Margaret M Coad, David A Haggerty, Allison M Okamura, and Elliot W Hawkes. 2020. Design, modeling, control, and application of everting vine robots. Frontiers in Robotics and AI 7 (2020), 548266.
[6]
Elodie Bouzbib, Gilles Bailly, Sinan Haliyo, and Pascal Frey. 2020. CoVR: A Large-Scale Force-Feedback Robotic Interface for Non-Deterministic Scenarios in VR. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 209–222.
[7]
Shaoyu Cai, Pingchuan Ke, Takuji Narumi, and Kening Zhu. 2020. Thermairglove: A pneumatic glove for thermal perception and material identification in virtual reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 248–257.
[8]
Lung-Pan Cheng, Patrick Lühne, Pedro Lopes, Christoph Sterz, and Patrick Baudisch. 2014. Haptic turk: a motion platform based on people. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3463–3472.
[9]
Lung-Pan Cheng, Thijs Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Robert Kovacs, Johannes Jasper, Jonas Kemper, and Patrick Baudisch. 2015. Turkdeck: Physical virtual reality based on people. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology. ACM, 417–426.
[10]
Inrak Choi, Heather Culbertson, Mark R Miller, Alex Olwal, and Sean Follmer. 2017. Grabity: A wearable haptic interface for simulating weight and grasping in virtual reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, 119–130.
[11]
Alexandra Delazio, Ken Nakagaki, Roberta L Klatzky, Scott E Hudson, Jill Fain Lehman, and Alanson P Sample. 2018. Force jacket: Pneumatically-actuated jacket for embodied haptic experiences. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 1–12.
[12]
Cathy Mengying Fang, Ryo Suzuki, and Daniel Leithinger. 2023. VR Haptics at Home: Repurposing Everyday Objects and Environment for Casual and On-Demand VR Haptic Experiences. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, 1–7.
[13]
Jose Fermoso. 2008. Kinetic Structure at BMW Museum Interprets Car Design Process. Wired. com. July (2008).
[14]
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura. 2023. UbiSurface: A Robotic Touch Surface for Supporting Mid-Air Planar Interactions in Room-Scale VR. Proc. ACM Hum.-Comput. Interact. 7, ISS, Article 443 (nov 2023), 22 pages.
[15]
Eric J Gonzalez, Parastoo Abtahi, and Sean Follmer. 2020. REACH+: Extending the Reachability of Encountered-type Haptics Devices through Dynamic Redirection in VR. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 236–248.
[16]
Mark Goulthorpe, Mark Burry, and Grant Dunlop. 2001. Aegis Hyposurface©: the bordering of university and practice. (2001).
[17]
Sebastian Günther, Dominik Schön, Florian Müller, Max Mühlhäuser, and Martin Schmitz. 2020. PneumoVolley: Pressure-based haptic feedback on the head through pneumatic actuation. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 1–10.
[18]
Zachary M Hammond, Nathan S Usevitch, Elliot W Hawkes, and Sean Follmer. 2017. Pneumatic reel actuator: Design, modeling, and implementation. In Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 626–633.
[19]
Chris Harrison and Scott E Hudson. 2009. Providing dynamically changeable physical buttons on a visual display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 299–308.
[20]
Elliot W Hawkes, Laura H Blumenschein, Joseph D Greer, and Allison M Okamura. 2017. A soft robot that navigates its environment through growth. Science Robotics 2, 8 (2017), eaan3028.
[21]
Zhenyi He, Fengyuan Zhu, and Ken Perlin. 2017. Physhare: sharing physical interaction in virtual reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, 17–19.
[22]
Seongkook Heo, Christina Chung, Geehyuk Lee, and Daniel Wigdor. 2018. Thor’s hammer: An ungrounded force feedback device utilizing propeller-induced propulsive force. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 525.
[23]
Anuruddha Hettiarachchi and Daniel Wigdor. 2016. Annexing reality: Enabling opportunistic use of everyday objects as tangible proxies in augmented reality. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 1957–1967.
[24]
Hunter G Hoffman. 1998. Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments. In Proceedings. IEEE 1998 Virtual Reality Annual International Symposium (Cat. No. 98CB36180). IEEE, 59–63.
[25]
Matthias Hoppe, Pascal Knierim, Thomas Kosch, Markus Funk, Lauren Futami, Stefan Schneegass, Niels Henze, Albrecht Schmidt, and Tonja Machulla. 2018. VRHapticDrones: Providing Haptics in Virtual Reality through Quadcopters. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, 7–18.
[26]
Hsin-Yu Huang, Chih-Wei Ning, Po-Yao Wang, Jen-Hao Cheng, and Lung-Pan Cheng. 2020. Haptic-go-round: A surrounding platform for encounter-type haptics in virtual reality experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 1–10.
[27]
Brent Edward Insko, M Meehan, M Whitton, and F Brooks. 2001. Passive haptics significantly enhances virtual environments. Ph.D. Dissertation. University of North Carolina at Chapel Hill.
[28]
Hiroo Iwata. 1999. Walking about virtual environments on an infinite floor. In Proceedings IEEE Virtual Reality (Cat. No. 99CB36316). IEEE, 286–293.
[29]
Hiroo Iwata, Hiroaki Yano, Fumitaka Nakaizumi, and Ryo Kawamura. 2001. Project FEELEX: adding haptic surface to graphics. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 469–476.
[30]
Seungwoo Je, Myung Jin Kim, Woojin Lee, Byungjoo Lee, Xing-Dong Yang, Pedro Lopes, and Andrea Bianchi. 2019. Aero-plane: A Handheld Force-Feedback Device that Renders Weight Motion Illusion on a Virtual 2D Plane. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. ACM, 763–775.
[31]
Seungwoo Je, Hyunseung Lim, Kongpyung Moon, Shan-Yuan Teng, Jas Brooks, Pedro Lopes, and Andrea Bianchi. 2021. Elevate: A walkable pin-array for large shape-changing terrains. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, 1–11.
[32]
Asif Khan. 2014. MegaFaces. http://www.asif-khan.com/project/sochi-winter-olympics-2014/
[33]
Robert Kovacs, Eyal Ofek, Mar Gonzalez Franco, Alexa Fay Siu, Sebastian Marwecki, Christian Holz, and Mike Sinclair. 2020. Haptic PIVOT: On-demand handhelds in VR. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 1046–1059.
[34]
Jaeyeon Lee, Mike Sinclair, Mar Gonzalez-Franco, Eyal Ofek, and Christian Holz. 2019. TORC: A virtual reality controller for in-hand high-dexterity finger interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 1–13.
[35]
William A McNeely. 1993. Robotic graphics: a new approach to force feedback for virtual reality. In Proceedings of IEEE Virtual Reality Annual International Symposium. IEEE, 336–341.
[36]
Luis Andres Mendez, Ho Yin Ng, and Ping-Hsuan Han. 2022. MovableBag: Exploring Asymmetric Interaction for Multi-user Exergame in Extended Reality. In Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers. ACM, 515–519.
[37]
Victor Rodrigo Mercado, Maud Marchal, and Anatole Lecuyer. 2021. ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props. IEEE Transactions on Visualization and Computer Graphics 27, 3 (2021), 2237–2243.
[38]
M Morillo, C Dell’Era, and R Verganti. 2013. Radical Innovative Scenarios Enabled By New Technologies: Exploring The Role Of Outsider Partners. In 14th International Continuous Innovation Network (CINet) Conference ‘Business Development and Co-Creation’. 1–22.
[39]
Ken Nakagaki, Daniel Fitzgerald, Zhiyao Ma, Luke Vink, Daniel Levine, and Hiroshi Ishii. 2019. inFORCE: Bi-directional ‘Force’ Shape Display for Haptic Interaction. In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 615–623.
[40]
Ken Nakagaki, Joanne Leong, Jordan L Tappa, Joao Wilbert, and Hiroshi Ishii. 2020. HERMITS: Dynamically Reconfiguring the Interactivity of Self-Propelled TUIs with Mechanical Shell Add-ons. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 882–896.
[41]
Yuki Onishi, Kazuki Takashima, Shoi Higashiyama, Kazuyuki Fujita, and Yoshifumi Kitamura. 2022. WaddleWalls: Room-scale Interactive Partitioning System using a Swarm of Robotic Partitions. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. ACM, 1–15.
[42]
Jifei Ou, Mélina Skouras, Nikolaos Vlavianos, Felix Heibeck, Chin-Yi Cheng, Jannik Peters, and Hiroshi Ishii. 2016. aeroMorph-heat-sealing inflatable shape-change materials for interaction design. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 121–132.
[43]
Jifei Ou, Lining Yao, Daniel Tauber, Jürgen Steimle, Ryuma Niiyama, and Hiroshi Ishii. 2014. jamSheets: thin interfaces with tunable stiffness enabled by layer jamming. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction. ACM, 65–72.
[44]
Majken K Rasmussen, Esben W Pedersen, Marianne G Petersen, and Kasper Hornbæk. 2012. Shape-changing interfaces: a review of the design space and open research questions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 735–744.
[45]
Sharif Razzaque, Zachariah Kohn, and Mary C Whitton. 2005. Redirected walking. Citeseer.
[46]
Nexus Robot. [n.d.]. 4WD 100 mm Mecanum Wheel Mobile Arduino Robotics Car 10011. https://www.nexusrobot.com/product/4wd-mecanum-wheel-mobile-arduino-robotics-car-10011.html
[47]
Harpreet Sareen, Udayan Umapathi, Patrick Shin, Yasuaki Kakehi, Jifei Ou, Hiroshi Ishii, and Pattie Maes. 2017. Printflatables: printing human-scale, functional and dynamic inflatable objects. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 3669–3680.
[48]
Hiroki Sato, Young Ah Seong, Ryosuke Yamamura, Hiromasa Hayashi, Katsuhiro Hata, Hisato Ogata, Ryuma Niiyama, and Yoshihiro Kawahara. 2020. Soft yet strong inflatable structures for a foldable and portable mobility. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 1–4.
[49]
Dominik Schmidt, Rob Kovacs, Vikram Mehta, Udayan Umapathi, Sven Köhler, Lung-Pan Cheng, and Patrick Baudisch. 2015. Level-ups: Motorized stilts that simulate stair steps in virtual reality. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2157–2160.
[50]
Adalberto L Simeone. 2015. Substitutional reality: Towards a research agenda. In 2015 IEEE 1st Workshop on Everyday Virtual Reality (WEVR). IEEE, 19–22.
[51]
David Sirkin, Brian Mok, Stephen Yang, and Wendy Ju. 2015. Mechanical ottoman: how robotic furniture offers and withdraws support. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, 11–18.
[52]
Alexa F Siu, Eric J Gonzalez, Shenli Yuan, Jason B Ginsberg, and Sean Follmer. 2018. Shapeshift: 2D spatial manipulation and self-actuation of tabletop shape displays for tangible and haptic interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 291.
[53]
Anthony Steed, Eyal Ofek, Mike Sinclair, and Mar Gonzalez-Franco. 2021. A Mechatronic Shape Display based on Auxetic Materials. Nature Communications (2021).
[54]
Ryo Suzuki, Hooman Hedayati, Clement Zheng, James L Bohn, Daniel Szafir, Ellen Yi-Luen Do, Mark D Gross, and Daniel Leithinger. 2020. RoomShift: Room-scale Dynamic Haptics for VR with Furniture-moving Swarm Robots. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 1–11.
[55]
Ryo Suzuki, Ryosuke Nakayama, Dan Liu, Yasuaki Kakehi, Mark D. Gross, and Daniel Leithinger. 2020. LiftTiles: Constructive Building Blocks for Prototyping Room-scale Shape-changing Interfaces. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 143–151.
[56]
Ryo Suzuki, Eyal Ofek, Mike Sinclair, Daniel Leithinger, and Mar Gonzalez-Franco. 2021. Hapticbots: Distributed encountered-type haptics for vr with multiple shape-changing mobile robots. In The 34th Annual ACM Symposium on User Interface Software and Technology. ACM, 1269–1281.
[57]
Ryo Suzuki, Clement Zheng, Yasuaki Kakehi, Tom Yeh, Ellen Yi-Luen Do, Mark D Gross, and Daniel Leithinger. 2019. ShapeBots: Shape-changing Swarm Robots. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. ACM, 493–505.
[58]
Saiganesh Swaminathan, Michael Rivera, Runchang Kang, Zheng Luo, Kadri Bugra Ozutemiz, and Scott E Hudson. 2019. Input, Output and Construction Methods for Custom Fabrication of Room-Scale Deployable Pneumatic Structures. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 2 (2019), 62.
[59]
Kazuki Takashima, Naohiro Aida, Hitomi Yokoyama, and Yoshifumi Kitamura. 2013. TransformTable: A Self-Actuated Shape-Changing Digital Table. In Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces. ACM, 179–188.
[60]
Kazuki Takashima, Takafumi Oyama, Yusuke Asari, Ehud Sharlin, Saul Greenberg, and Yoshifumi Kitamura. 2016. Study and design of a shape-shifting wall display. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, 796–806.
[61]
Wen Yen Tang, Sheng Kai Tang, and Yuzn Zone Lee. 2011. Tangible Pixels: Interactive Architectural Modules for Discovering Adaptive Human Swarm Interaction. In Proceedings of 30th eCAADe. 301–307.
[62]
Shan-Yuan Teng, Tzu-Sheng Kuo, Chi Wang, Chi-huan Chiang, Da-Yuan Huang, Liwei Chan, and Bing-Yu Chen. 2018. Pupop: Pop-up prop on palm for virtual reality. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology. ACM, 5–17.
[63]
Shan-Yuan Teng, Cheng-Lung Lin, Chi-huan Chiang, Tzu-Sheng Kuo, Liwei Chan, Da-Yuan Huang, and Bing-Yu Chen. 2019. TilePoP: Tile-type Pop-up Prop for Virtual Reality. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. ACM, 639–649.
[64]
Ching-Yi Tsai, I-Lun Tsai, Chao-Jung Lai, Derrek Chow, Lauren Wei, Lung-Pan Cheng, and Mike Y Chen. 2022. Airracket: Perceptual design of ungrounded, directional force feedback to improve virtual racket sports experiences. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ACM, 1–15.
[65]
J. van den Berg, Ming Lin, and D. Manocha. 2008. Reciprocal Velocity Obstacles for real-time multi-agent navigation. In 2008 IEEE International Conference on Robotics and Automation. 1928–1935. https://doi.org/10.1109/ROBOT.2008.4543489
[66]
Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann. 2017. VRRobot: Robot actuated props in an infinite virtual environment. In 2017 IEEE Virtual Reality (VR). IEEE, 74–83.
[67]
Li-Yang Wang, Ping-Hsuan Han, and Liwei Chan. 2022. Push-Ups: Enhancing Kinesthetic Experience with Shape-Forming Devices on the Feet Soles. In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 1–8.
[68]
Yuntao Wang, Zichao Chen, Hanchuan Li, Zhengyi Cao, Huiyi Luo, Tengxiang Zhang, Ke Ou, John Raiti, Chun Yu, Shwetak Patel, 2020. MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 1–12.
[69]
Yu-Wei Wang, Yu-Hsin Lin, Pin-Sung Ku, Yōko Miyatake, Yi-Hsuan Mao, Po Yu Chen, Chun-Miao Tseng, and Mike Y Chen. 2021. JetController: High-speed ungrounded 3-DoF force feedback controllers using air propulsion jets. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, 1–12.
[70]
Lining Yao, Ryuma Niiyama, Jifei Ou, Sean Follmer, Clark Della Silva, and Hiroshi Ishii. 2013. PneUI: pneumatically actuated soft composite materials for shape changing interfaces. In Proceedings of the 26th annual ACM symposium on User interface software and Technology. ACM, 13–22.
[71]
Yan Yixian, Kazuki Takashima, Anthony Tang, Takayuki Tanno, Kazuyuki Fujita, and Yoshifumi Kitamura. 2020. ZoomWalls: Dynamic Walls that Simulate Haptic Infrastructure for Room-scale VR Worlds. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 223–235.
[72]
Bowen Zhang and Misha Sra. 2021. Pneumod: A modular haptic device with localized pressure and thermal feedback. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology. ACM, 1–7.
