Abstract
Most current devices are passive with respect to their location: they are either integrated into the environment or have to be carried by the user in mobile scenarios. In this paper we present a novel type of self-actuated device that can be placed on vertical surfaces such as whiteboards or walls. This enables vertical tangible interaction and allows the device to interact with the user through self-actuated movements. We explore the application space for such devices by aggregating user-defined application ideas gathered in focus groups. Moreover, we implement and evaluate four interaction scenarios, discuss their usability, and identify promising future use cases and improvements.
1 Introduction
The variety of input and output devices that can be used to interact with computing systems is steadily increasing. Traditionally, interaction devices can be divided into two groups.
The first group covers stationary devices, including desktop computers, TVs, and public displays. These are not mobile while in use; in many cases they are installed and become part of the environment. Notebook computers, even though they are often carried and used in different settings, fall into this group too, as they are stationary while in use. The second group describes mobile devices, including smartphones, tablets, and interactive glasses, that are carried or worn by the user. Interaction with these devices takes place while the user is mobile. These two groups of devices are well explored and their design space is well understood (e.g., for input devices [2]). In recent years a third group of devices has been emerging: devices which can move themselves and act autonomously, called self-actuated devices.
Interactive self-actuated devices combine the advantages of stationary devices – as the user does not have to carry them – with the advantages of mobile devices – as the device can always be with the user. Prominent examples of this device category are known from robotics. Domestic robots, such as Wakamaru [26], which provides companionship to elderly and disabled people, can autonomously serve the user. At recent conferences, attendees participated remotely through robotic devices, e.g., using the Beam remote presence system (Footnote 1). Besides systems that are designed for a specific application domain, recent work in HCI proposed interactive self-actuated general purpose devices (e.g., [24, 25, 29]). These works introduce devices that move freely while providing rich input and output possibilities similar to those of state-of-the-art mobile devices, but without restricting the application purpose.
Much like current stationary and mobile devices, interactive self-actuated devices can conceptually facilitate a large range of applications by combining different device behaviors and fitting several user roles. In this paper we explore the application space of interactive self-actuated displays from a human-centered perspective. Based on the idea that devices can move on any vertical surface, we implemented a prototype that can move freely on ferromagnetic planes. With the size and abilities of a standard tablet computer, it can easily be carried and moved by the user while also having the abilities of a self-actuated device. Thus, the device combines the advantages of mobile, tangible, and self-actuated devices, which allows it to support a broad set of use cases (see Fig. 1). Steerable projector systems [22] or display walls can provide visual output across large spaces; a free-moving device, however, enriches the Everywhere Display with a tangible dimension. In contrast to flying devices like Midair Displays [24], it consumes less power during operation. Furthermore, unlike self-actuated devices for vertical surfaces, flying and floor-based devices share their movement space with users and thus may get in the user's way.
Using the interactive self-actuated prototype as a stimulus, we conducted a series of focus groups to explore the space of promising applications. Participants were asked to envision and discuss potential use cases. They proposed a broad range of ideas, which we grouped into four categories: role, context, application, and device behavior. Using these categories, we identified four promising application scenarios that were implemented as cinematic showcases. Through a survey we further investigated the usability and emotional impact of the presented scenarios.
After reviewing the related work on interactive self-actuated devices, we present the concept and implementation of the self-actuated display device. We then describe the use cases and application scenarios that we explored in a series of focus groups. Afterwards, we present four exemplary scenarios, their implementation, and the results of their evaluation, which lead to a promising conclusion about the potential of self-actuated displays. The contribution of this paper is as follows:
- The concept and implementation of a novel self-actuated display device.
- An application space for self-actuated displays for vertical surfaces.
- An evaluation of promising application scenarios for these devices.
2 Related Work
Before self-actuated user interfaces were proposed, user-actuated interfaces were used to manipulate digital information by manually moving physical representations of virtual information. This concept of tangible user interfaces (TUIs) was first explored with passive physical user interfaces and later advanced towards self-actuated physical user interfaces. Nowadays, a wide range of autonomous or semi-autonomous moving user interfaces has been proposed and built, including self-actuated TUIs, devices, and robots.
2.1 Physical User-Actuated Interfaces
Even before coining the notion of Tangible User Interfaces, Fitzmaurice, Ishii, and Buxton introduced Graspable Interfaces [4], which allow direct control and manipulation of digital objects through moving physical wooden bricks. Ishii and Ullmer later introduced Tangible Bits [10], a vision of using the whole real world as a medium for manipulating the virtual world. One of these prototypes was transBOARD, a digitally-enhanced whiteboard system that monitors the activity of physical objects on its vertical surface and is capable of storing pen strokes. Another example was Urp [30], a TUI for collaborative urban planning using physical models of buildings on a tabletop system. Video projection and electromagnetically tagged wireless mice were used as pucks on the Sensetable [21], while the music interface reacTable [11] works with optical markers placed underneath the tangibles, which are moved on a tabletop system to play music. Geckos [17], Magnetic Appcessories [1], and GaussBits [18] use magnets to attach passive tangible elements to vertical surfaces and thus demonstrate that interaction with TUIs is not limited to horizontal planes. In addition to magnetic solutions, vacuum adhesion forces for sticking tangible objects on vertical surfaces were used in Vertibles [9].
2.2 Physical Self-Actuated Interfaces
Technologies proposed for actuating tangibles include, for instance, arrays of electromagnetic coils embedded in a tabletop system [20, 31], the six-legged Hexbug™ [27], and vibrating bristles [19]. Moreover, several tabletop systems use robots instead of passive TUIs (Touch and Toys [7], RoboTable [14], RoboTable2 [28], RemoteBunnies [6], and TabletopCars [3]). PhyBots [12] introduced a prototyping toolkit for adding floor-based locomotion to everyday objects. Curlybot [5] is a driving educational toy robot that can be equipped with a pen extension. The PMD system uses tracked physical objects, whereby physical elements are moved by both users and computers [23]. Self-actuated devices have also been developed for vertical surfaces: WallBots [15] are magnetic, self-actuated, autonomous wall-crawling robots equipped with a tri-colored LED and used in street art. Interactive self-actuated objects can also move in three-dimensional space. ZeroN [16], a magnetically controlled volume with a levitating tangible element tracked by a Kinect, demonstrates a physical computer-actuated 3D interface. The Midair Display [24], a display mounted on a quad-copter, is conceptually a spatially unlimited levitating and moving interface. Similarly, Seifert et al. developed Hover Pad, a tablet that is attached to a static crane construction and can thereby move freely within a 3D space [25].
2.3 Summary
Previous work on self-actuated interactive devices mainly focused on the technical aspects of realizing novel types of devices, and mostly the device was built for a single application to demonstrate the technical concept. In contrast, our aim is to explore the application space of self-actuated interactive displays for vertical surfaces. Considering self-actuated devices as a new class of devices, we explore potential applications of such devices from a human-centered perspective.
3 Self-Actuated Displays for Vertical Surfaces
In this section, we introduce the concept of self-actuated displays for vertical surfaces. Additionally, we describe our prototypical implementation, which realizes the main aspects of the concept using currently available technology.
3.1 Concept
In this work, we explore the possibilities of self-actuated displays that are able to move on horizontal as well as vertical surfaces. We present a device that can be grabbed and placed on surfaces (similar to tangible user interfaces [10]) and freely moved on these surfaces. This enables the usage of vertical surfaces such as walls, whiteboards, or ceilings. In contrast to most prior work, we thereby focus on devices that are actuated by the user as well as self-actuated depending on context and task.
To cover a broad range of interaction possibilities, we envision several input and output modalities. As input modalities we mainly focus on the touch screen, camera, and further sensors such as an accelerometer. The primary output modalities are visual and auditory. Since we envision a self-actuated device, we also take the movement of the display itself into account. Moreover, the device can be equipped with traditional tools; for example, a pen can be attached to it, serving as an additional output means.
The sensors embedded in the device not only enable user tracking but also facilitate coordination with other devices: we envision active communication between multiple devices so that they can interact with each other and/or create a unified display space. Thus, the maximum display size is only limited by the number of devices used.
Whereas most current self-actuated interfaces are limited to horizontal surfaces, like interactive tables, our approach focuses on vertical surfaces. The device can therefore be attached, for example, to walls and whiteboards.
3.2 Prototype Implementation
We transferred the concept of our self-actuated display into two working prototypes (see Fig. 3).
Both are based on the commercially available 3pi robot platform by Pololu (Footnote 2) with the 3pi expansion kit without cutouts (Footnote 3) attached. We added support for using and charging an external LiPo battery to compensate for the increased battery drain caused by vertical movement. For wireless communication with external and attached devices, a Bluetooth-enabled microcontroller was used.
We enabled the prototype to move on ferromagnetic vertical surfaces by attaching a 3D-printed frame (Footnote 4) to its bottom that holds up to 22 neodymium magnets.
To allow upwards movement on vertical surfaces, the motors have to generate enough torque to overcome both gravity F_G and rolling resistance F_F, as depicted in Fig. 2. Thus, we replaced the default 30:1 gear motors with 298:1 gear motors. Furthermore, wheel slippage has to be prevented by generating enough stiction F_S. In our case this is done by increasing the magnetic force F_M and contact pressure by adding magnets. However, increasing contact pressure also increases rolling resistance and reduces acceleration and maximum speed. We empirically determined the number and locations of the magnets needed to enable stable operation and ended up using 15 magnets with some bias towards the ball caster.
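To make the trade-off between magnets, stiction, and rolling resistance explicit, the conditions above can be sketched as follows, assuming a wheel radius r, a rolling-resistance coefficient c_rr, and a static friction coefficient μ between wheels and surface (none of these values are reported here):

```latex
% On a vertical surface, gravity acts parallel to the plane and the magnets
% provide (most of) the normal force: F_N \approx F_M.
\begin{align*}
  \frac{\tau_{\mathrm{total}}}{r} &\ge F_G + F_F, \qquad F_F = c_{rr}\, F_N \approx c_{rr}\, F_M
    && \text{(enough torque to climb)}\\
  F_S = \mu\, F_N \approx \mu\, F_M &\ge F_G + F_F
    && \text{(enough stiction, no wheel slip)}
\end{align*}
```

The first relation motivates the higher 298:1 gear ratio; the second shows why adding magnets raises stiction but, through F_F, also raises the torque that the motors have to deliver.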
To expand the robot's input and output modalities, we attached a 3D-printed frame that encloses a Google Nexus 7 tablet (Footnote 5). Besides its display, the tablet also provides the robot with additional peripherals such as cameras, inertial sensors, Wi-Fi, and speakers. The tablet frame can also be extended with additional tools. As examples of such tools (see Fig. 3), we built servo-actuated pen and eraser holders to draw on whiteboards. Each prototype is approximately 205 × 117 × 49 mm in size and weighs 495 g.
4 Creating Potential Use-Cases
To increase our understanding of the application space and to explore potential use cases for the device we conducted a series of focus groups [13] and evaluated the results. We first presented the developed prototype to the focus groups to create a common understanding of the possibilities and interaction modalities. In the following, we first describe the design of the focus groups and afterwards, we present the results and a discussion.
4.1 Study Method
Three focus groups with 19 participants (15 male, 4 female) aged 22 to 41 (M = 26.9 years, SD = 4.3) were conducted – six to seven participants took part in each of them. We recruited participants through our mailing lists and from our peer group. We strove for a broad cultural background and, thus, we invited participants originating from five different countries, namely the U.S., Germany, Egypt, Belgium and Argentina. Each participant was compensated with 15 €.
After welcoming a group of participants and providing them with basic information about the procedure, we asked them to fill in a consent form and to answer a brief demographic questionnaire. After an introduction round, we explained the main goal and procedure of a focus group to the participants. This was followed by a demonstration of the prototype and its capabilities on a whiteboard (see Fig. 4) as a stimulus for the participants. We also highlighted the ability to control a pen and an eraser to show its potential for further extension. Directly after the presentation, we asked participants to write down their initial reactions (Result R1). We then asked them to discuss these reactions, and we took notes during the discussion (R2).
After the discussion of the participants’ initial reactions, we asked them to write down potential use cases on large post-it notes (R3). This was followed by a discussion about the most promising and the most controversial cases, which were recorded in a written protocol for post hoc analysis (R4). During this discussion participants could write down additional ideas on post-its (R3). After the discussion, we closed the respective session.
4.2 Results
Participants' first impressions (R1) and their discussion (R2) were mainly positive. Answers can be grouped into three main categories. (1) Participants were impressed by the overall idea. They, for example, stated that "it looks impressive" (P5), "opens a new space" (P6) and is a "pretty interesting technology" (P16). (2) Participants also imagined applications for similar devices. They stated that it would be "useful if you have hands full" (P18) and could be used "in the kitchen" (P17). Finally, (3) two participants expressed concerns about the presented technology. One asked "For what?" (P13) and another participant wondered whether it "must be able to move?" (P19).
In total, participants created 137 potential use cases (R3) for the presented technology. Using a bottom-up analysis and open coding, we identified 49 groups of ideas, which could in turn be grouped into four main categories. One emerging group, for example, contains 31 ideas that propose to use the device in home environments; another group of 25 ideas proposes that the device follows the user's position. The groups were categorized by their Role, Context, Application, and Device Behavior (see Table 1). Role can be further divided into ownership, audience, and controlling subject. The device can, for example, be autonomous, be part of the infrastructure, or have a single person as its audience. Context was mostly provided in the form of a description of a location, such as an office or a classroom, but also through specific situations such as emergencies. Participants' ideas cover diverse applications.
Exemplary applications include navigation and route guidance, sending messages, providing alarms and notifications, or using the device as a smart companion. The fourth category describes device behaviors. For example, arranging multiple devices in a grid generates a large display. Another example proposes that the device follows a predefined path and expects the user to follow, delivering location- and situation-based information to the user.
To finally improve category consistency, we revised all ideas by going through the individual post-it notes and categorized them using the four identified idea categories. This procedure ensured that all ideas are covered by the four categories. During that process, we determined how often particular ideas appear in each group.
4.3 Discussion
The three focus groups identified a wide range of use cases for self-actuated displays on vertical surfaces (see Table 1). Using a bottom-up analysis, we identified 76 groups to structure the ideas, which can further be fused into the four categories Role, Context, Application, and Device Behavior. Moreover, these categories can be used to generate new application scenarios which were not explicitly envisioned by any of our participants. Cells of Table 1, for example, can be combined into the following scenario: multiple devices that are part of an office's infrastructure (Role, Context) can build a large display (Device Behavior) to form a window by enabling the user to see through walls (Application). Whereas this particular scenario was not envisioned by any of our participants, it could be derived by combining idea groups across the four categories. However, the results of the focus groups can neither be generalized, nor should the number of design ideas per group be over-interpreted. That means that the ideas mentioned most often are not necessarily the most interesting ones. Yet we were able to cover a broad range of potential application scenarios.
5 Implementation of Example Scenarios
In the previous section, we developed application ideas for self-actuated displays. In this section, we select four scenarios that cover a broad range of application possibilities for self-actuated displays. For a later survey evaluation, we extended the previously described device prototype and prepared video prototypes, which are described in the following.
5.1 Selection of Scenarios
We designed four scenarios (kitchen, classroom, museum, and office) by combining idea groups from each of the four idea categories presented in Table 1. This allowed us to cover a broad variety of different application types. During that process, we aimed to cover diverse scenarios that are well represented by the ideas generated in the focus groups.
As shown by the color coding in Table 1, the scenarios cover each sub-category of the role dimension: ownership, audience, and controlling subject. Moreover, we consider diverse context types as well as combinations of them: home (kitchen), classroom and teaching, public space (museum), as well as office and meeting. Afterwards, we selected applications that suit the selected contexts while also representing a good coverage of the idea groups: guidance (task), teaching and presentation/data visualization, guidance (navigation) and podcast, as well as communication/telepresence. Finally, we chose for each scenario the device behavior which best fitted the combination of role, context, and application.
5.2 Video Prototype
To evaluate the scenarios, we created videos of the prototypes being used in each scenario and presented these videos to participants in an online survey. For this purpose, four storyboards, one for each scenario, were designed which describe how a user interacts with the self-actuated display in the respective scenario (see Fig. 5). The storyboards describe the context the scenario takes place in (e.g., museum or kitchen), the interaction sequence when a user uses the device in a specific application, and the content that is displayed on the device's screen during the interaction.
Scenario 1 – Kitchen. A person is in an unknown environment; for instance, she has just started to work in a new office. In the kitchen of this office there is a commonly used coffee machine. She would like to have a coffee, but she neither knows how the machine works nor where the cups and ingredients are. This video shows how a self-actuated display guides the person to the cups and ingredients and shows her how to use the coffee machine.
Scenario 2 – Classroom. The teacher is teaching trigonometry and explains a new formula. The self-actuated display assists by serving as an interactive display on which the teacher can write any formula. The self-actuated display then draws the formula on the whiteboard to present it to the class.
Scenario 3 – Museum. Visitors may want some kind of additional information about the exhibits, but some of them want to follow their own path and not be guided in a tour. A self-actuated – and at the same time tangible – display can follow the visitor through an exhibition and provide further information about exhibits of interest. Furthermore, placing the device manually (like a TUI) enables the user to gain further background information about an exhibit of interest.
Scenario 4 – Office. In this scenario the display can increase its size if necessary, for example when additional people join an ongoing video conference meeting. For communicating with a single person, the display is sufficiently large to show that person. However, when another person joins the conversation, the display may be too small and, thus, one cannot see all conversation parties at once. In the video prototype we show how self-actuated displays can change their size by automatically assembling, similar to puzzle pieces, to extend the display area.
The content displayed on the device during the video (text, images, and video) was pre-produced and presented as a remotely controlled slideshow while the video prototypes were recorded. The device was remotely controlled in scenarios 1, 3, and 4 using a Microsoft Xbox 360 game controller. For scenario 2, the predefined curve was drawn autonomously using the control scheme described below. In summary, we developed four video prototypes with an average duration of 69 s.
5.3 Device Prototype
To realize the video prototypes we implemented two control schemes – remote control and autonomous behavior. For remote control we used a commercially available game pad connected to a laptop. Motor speeds were calculated according to the direction of the analogue sticks and sent to the prototype via Bluetooth.
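A minimal sketch of such a stick-to-motor-speed mapping for a differential drive is shown below; the concrete mapping, scaling, and value ranges are not given in the paper, so the following is an assumption:

```python
def stick_to_motor_speeds(x: float, y: float, max_speed: int = 255) -> tuple[int, int]:
    """Map analogue stick deflection (x: left/right, y: forward/back, both in
    [-1, 1]) to (left, right) wheel speeds of a differential drive.
    Illustrative only; scaling and ranges are assumptions."""
    def clamp(v: float) -> int:
        return int(max(-max_speed, min(max_speed, v)))
    # The forward component drives both wheels, the lateral component steers.
    return clamp((y + x) * max_speed), clamp((y - x) * max_speed)

# The resulting pair would then be serialized and sent to the robot via Bluetooth.
```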
The whiteboard task also requires the display to move autonomously, so its position and orientation need to be detected. As the prototype has a tablet attached, we exploited the tablet's gravity sensor to determine the orientation. This orientation data is sent to a laptop via Wi-Fi. To obtain the position of the device, we used an external Asus Xtion depth camera (Footnote 6), which was oriented perpendicularly towards the prototype. The device position is obtained by segmenting depth values within a short distance of the surface (2 to 7 cm). After filtering out small segments, we determined the exact position on the surface by calculating the center of mass of each segment in screen space. We chose this method for its simplicity. In future versions the position could also be obtained by the prototype without an external sensor, for example using the built-in camera and feature tracking.
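The position estimation could be sketched roughly as follows, assuming a depth image that has already been converted to per-pixel distance from the (calibrated) surface plane; the function name, the OpenCV-based segmentation, and the area threshold are illustrative assumptions, only the 2–7 cm band is taken from the text:

```python
import cv2
import numpy as np

def locate_device(dist_to_surface_mm: np.ndarray, min_area_px: int = 200):
    """Return the screen-space (x, y) centre of the device, or None.

    dist_to_surface_mm: per-pixel distance between the measured depth and the
    surface plane, in millimetres. Sketch only: the paper merely states that
    segments 2-7 cm in front of the surface are kept, small segments are
    discarded, and the centre of mass of a segment yields the position.
    """
    mask = ((dist_to_surface_mm > 20) & (dist_to_surface_mm < 70)).astype(np.uint8)
    num, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    best, best_area = None, min_area_px
    for i in range(1, num):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= best_area:  # keep the largest sufficiently big segment
            best, best_area = (float(centroids[i][0]), float(centroids[i][1])), area
    return best
```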
Based on the position and orientation data, we implemented a control scheme using a proportional-integral-derivative (PID) controller, which provides two simple movement commands (a sketch follows the list):
- look at: rotates the robot around its center until it is heading towards a target point.
- move to: reaches a specific point by following a straight line to the target.
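The two commands could be realized on top of a PID heading controller roughly as follows; the Robot interface (x, y, heading, set_motor_speeds, sleep), gains, and tolerances are illustrative assumptions, not the published implementation:

```python
import math

def wrap_angle(a: float) -> float:
    """Normalize an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

class PID:
    """Minimal PID controller; gains are placeholders, not the paper's values."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def look_at(robot, target, pid: PID, dt=0.05, tol=math.radians(3)):
    """Rotate in place until the robot heads towards the target point."""
    while True:
        error = wrap_angle(math.atan2(target[1] - robot.y, target[0] - robot.x)
                           - robot.heading)
        if abs(error) < tol:
            robot.set_motor_speeds(0, 0)
            return
        turn = pid.update(error, dt)
        robot.set_motor_speeds(-turn, turn)  # opposite wheel speeds -> rotation in place
        robot.sleep(dt)

def move_to(robot, target, pid: PID, speed=0.3, dt=0.05, tol_px=10.0):
    """Drive towards the target point along an approximately straight line."""
    look_at(robot, target, pid)
    while math.hypot(target[0] - robot.x, target[1] - robot.y) > tol_px:
        error = wrap_angle(math.atan2(target[1] - robot.y, target[0] - robot.x)
                           - robot.heading)
        turn = pid.update(error, dt)
        robot.set_motor_speeds(speed - turn, speed + turn)  # correct heading while driving
        robot.sleep(dt)
    robot.set_motor_speeds(0, 0)
```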
We used these simple commands to build scripts for drawing axes and plotting simple mathematical functions, such as a sine curve, by linear approximation. This autonomous behavior is used, for instance, in the classroom scenario, in which the self-actuated display draws mathematical functions to assist the teacher.
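Such a plotting script could look like the following sketch, which approximates one period of a sine curve with a polyline of move to targets (coordinates, scaling, and step count are illustrative assumptions):

```python
import math

def plot_sine(robot, pid, origin=(200.0, 300.0), length_px=400.0,
              amplitude_px=80.0, steps=20):
    """Approximate y = sin(x) over one period by piecewise linear segments.

    Coordinates are in the screen space of the tracking camera; with a pen
    attached and lowered, each move_to segment leaves a straight stroke on
    the whiteboard.
    """
    for i in range(steps + 1):
        t = i / steps
        x = origin[0] + t * length_px
        y = origin[1] - amplitude_px * math.sin(2 * math.pi * t)
        move_to(robot, (x, y), pid)
```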
6 Scenario Evaluation
6.1 Method
To evaluate the four scenarios, we conducted an online survey, which we distributed via mailing lists and social networks. The survey started with an introduction about its purpose and a questionnaire that recorded the age and gender of the participants. Then the scenarios were presented in randomized order. For each scenario, a brief introduction was given, then the video was shown, and afterwards we asked (using a 5-point Likert scale) how much participants liked the presented scenario. Moreover, we used the AttrakDiff mini questionnaire [8] to collect opinions about the scenarios. Finally, in two open questions the participants were asked to report positive (e.g., strengths or possibilities) and negative (e.g., weaknesses or risks) aspects of the scenario.
6.2 Results
In total, 57 participants (13 female, 44 male) aged 20 to 58 years (M = 33.7, SD = 9.6) completed our online survey. Thus, we collected 57 completed AttrakDiff questionnaires for each scenario as quantitative results and 269 (64.9 %) out of 456 possible qualitative answers to open questions regarding positive or negative aspects.
Quantitative Results.
The user perception of the emotional impact was evaluated according to the AttrakDiff scheme (based on a 1–7 Likert scale). Table 2 compares the mean values of the scores for each presented scenario. The hedonic quality (HQ) consists of HQ-Identity (HQ-I) and HQ-Stimulation (HQ-S). In terms of pragmatic quality, Scenario 1 performs slightly better than the others. Regarding attractiveness, Scenario 4 performs best. Also in terms of hedonic quality, Scenario 4 achieves the highest scores. The small and overlapping confidence intervals indicate that the participants generally assessed the presented scenarios similarly.
We further analyzed the data using a Friedman analysis of variance. While the Friedman ANOVA yielded no significant differences for the AttrakDiff scales of hedonic quality (HQ-I: χ²(3) = 0.587, p = .899; HQ-S: χ²(3) = 3.322, p = .345) or for attractiveness (ATT: χ²(3) = 5.818, p = .121), we found a statistically significant difference in the pragmatic qualities (PQ: χ²(3) = 8.051, p = .045). Post hoc analysis with Wilcoxon signed-rank tests was conducted with a Bonferroni correction applied, resulting in a significance level of .008. However, we did not find any statistically significant differences in the perceived pragmatic qualities between the four scenarios (p > .008).
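For reference, an analysis of this kind could be reproduced with SciPy roughly as follows (a sketch over hypothetical per-participant scores; the data structure and function name are assumptions, only the choice of tests and the Bonferroni level follow the text):

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

def compare_scenarios(scores: dict[str, list[float]]) -> None:
    """scores maps a scenario name to one PQ value per participant,
    with the same participant order in every list (hypothetical input)."""
    stat, p = friedmanchisquare(*scores.values())
    print(f"Friedman: chi2(3) = {stat:.3f}, p = {p:.3f}")
    pairs = list(combinations(scores, 2))
    alpha = 0.05 / len(pairs)  # Bonferroni: 6 pairs for 4 scenarios -> ~.008
    for a, b in pairs:
        w, p = wilcoxon(scores[a], scores[b])
        print(f"{a} vs {b}: W = {w:.1f}, p = {p:.3f}, significant: {p < alpha}")
```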
Qualitative Results.
We analyzed the 269 qualitative answers of the questionnaire by manual assessment. Analysis was done in two iterations: derivation of a categorization and answer reassignment afterwards.
In the first iteration, two researchers independently derived categories for each scenario based on the answers and counted their occurrences. Both categorizations were then discussed and merged into a unified categorization scheme. The analysis revealed seven scenario-independent categories, which we then separated from the scenario-specific ones.
In the second iteration, we went through the answers once again, reassigned them to the previously derived categories, and counted the occurrences of each category. Furthermore, occurrences of the scenario-independent categories were summed over the scenarios.
Concept. We assigned 24 answers to the concept category, which deals with the device being attached to walls. This property was mostly commented on in the kitchen and office scenarios. One answer stated that the "contents of the [kitchen] drawer may be changed without having to tell someone" (P96), another wrote "it's nice to have additional screen space as needed and helpful that the screen aligns itself with the one already there." (P86), and one simply found the system "cool and universally applicable" (P97). Others, however, mentioned that "the device seems to be limited to a 2d-region. If the procedure extends beyond this region (e.g., throwing the coffee filter into the wastebin at the door), the display is not usable adequately" (P40), and asked "is it able to move around corners or does it have to be detached?" (P97).
Usefulness. In total, 82 answers directly addressed the usefulness of the device. It "fulfills its tasks" (P59), gives a "simple and lucid explanation, useful for complex tasks" (P90), and "might help some physically limited people" (P73). Likely due to the simplistic scenarios, some users asked "what is its purpose?" (P76) and regarded the device as a "gimmick" (P48).
Attention. Another set of 35 answers addressed the attention of users. The system was described as having a "good entertainment value" (P72) and it "attracts attention to itself" (P90). Some, however, noted that the "show-off & wow effect (works probably just once)" (P73). In the museum scenario there were concerns that "the device may attract negative attention from the viewer or disturbs him from viewing artworks since movement of the device automatically attracts attention to itself" (P100), as stated by one answer.
Alternatives. Sixty-eight answers compared the prototype with various alternative products, including mobile phone applications ("better [realized] with indoor-navigation app" (P105)), camera projection systems ("[I] think projector + software are clearly better suited for this [kitchen scenario]." (P14)), and head-mounted displays ("I'd rather consider this [museum] a scenario for Google Glass or CastAR." (P16)). These approaches, however, have additional drawbacks that need to be taken into account as well (e.g., a stationary setup for projection, challenges in augmented reality). Others mentioned advantages, e.g., ""pointing" [at something] is easier than using static displays." (P79). This was especially the case in the museum scenario, where there is "no need for one's own tablet/smartphone" (P36) and the device "replaces guide, [and has] individuality" (P42).
Complexity. In total, 39 answers dealt with the influence of task complexity in the scenarios. The devices are a "very nice possibility to maintain eye contact with many participants at video conferences" (P10), "It is quite easy for students, they can write the formula and the machine draws for them" (P98), and "managing new situations is definitively simplified" (P87). We chose simple tasks that were easy to implement, like drawing simple functions. For such tasks there is a risk that the device "complexes a simple process" (P16). Also, when preparing coffee, one participant found that it "looks way too inconvenient for something such simple to me" (P17).
Multiple users. Multi-user support was identified as another category, mentioned in 30 answers. In the office scenario, "the number of participants involved in the conversation may be easily varied" (P87) using multiple devices. In public spaces like a museum, where many people act in a quite limited space, questions regarding the use of multiple displays arose that need to be addressed, for example: "What happens when multiple people are looking at the same piece of art or passing each other?" (P86) and "how should probably dozens of displays be controlled and be distinguished from each other?" (P45).
Technical characteristics. We categorized 102 comments as technical characteristics, which describe particular aspects of the prototype. Our prototype is driven by two gear motors with relatively high gear transmission ratios, so "the device is slow and noisy" (P3). One participant in particular said "it is too noisy to hear that this machine follows you. I need quiet and silence to visit the art exhibition." (P98) In contrast, in the office the device was perceived as "quiet-running" (P59).
In the kitchen one answer remarked “it is good that I don’t need to ask anyone or spend more time to find out” (P98) how to prepare coffee, whereas “the teacher probably would have manually drawn the curve faster” (P87) in the classroom.
Aesthetics were also seen as subject to improvement in future versions; especially in the museum scenario, the "metal driving plates around the exhibits are little aesthetic" (P35).
Scenario-specific categories. Besides the scenario-independent categories described above, we identified some categories that applied to specific scenarios only:
- kitchen: Nine answers dealt with the social aspect of preparing coffee. As with other technology introduced before (e.g., smartphones), some participants saw the risk that "communication between colleagues is lost" (P64).
- classroom: We identified two more categories, with sixteen participants commenting on the precision and seven on the didactic meaning. The prototype used for the video clearly lacks precision and is "quite scrawly, [and] surely problematic for more complex functions" (P14), but future versions may well produce a "potentially better/more exact drawing of functions than quickly sketched hand drawings" (P57). Accordingly, some answers were skeptical about "what is the didactic meaning and learning success?" (P65).
- museum: Five answers saw the device as a "personalized guide" (P52) that is "more personal than fixed video installations" (P14). Additionally, three answers emphasized the self-actuated aspect, for example "further multimedia information [is available] at any time without having me to carry something with me" (P61).
- office: Dynamically scaling the display with multiple devices was mentioned in ten answers. One participant found "it's nice to have additional screen space as needed and helpful that the screen aligns itself with the one already there" (P86), and another wrote "having multiple displays merge into a single large one is fantastic" (P3). Six participants were not sure "where did the second display come from?" (P14), since the scenario looked somewhat constructed with only a whiteboard and two devices.
6.3 Discussion
We evaluated four scenarios that were derived from the application ideas resulting from the focus groups. For each scenario, we collected qualitative comments as well as ratings of the scenarios' pragmatic and hedonic qualities. Participants' quantitative assessment was similar for the four scenarios. Despite the limitations of the concrete prototype used, the overall reaction is positive in terms of hedonic and pragmatic qualities, with a tendency towards being desired.
Comments regarding the scenarios' usefulness were mainly positive and described specific use cases. Some participants, however, also wondered about the additional value the device could provide. Accordingly, participants compared the device with existing devices that can support similar tasks. In this sense, the self-actuated display fills a position complementary to existing devices, much like tablets fill a position between large static displays and small mobile smartphones. Participants also highlighted unique aspects of self-actuated displays and noted that in certain situations they could replace static as well as mobile displays.
Participants widely addressed the technical characteristics of the concrete prototype. Comments suggest that self-actuated displays must be fast enough to draw or to follow a walking user. Furthermore, participants criticized the noise level. These technical limitations can be tackled by using more powerful gear motors with reduced noise emission. Concerns about the width of the display's border could be addressed by further developments in display technologies that reduce the frame thickness; this would allow seamless display connections across several devices.
Challenges might emerge if a large number of self-actuated devices are used at the same time. On the one hand, devices might interfere with each other; on the other hand, it might become difficult for a user to identify his or her devices.
Participants appreciated the general concept of a device that is attached to and can move on walls. They envisioned a general purpose device that can provide additional screen space. A limitation of the current prototype is its restriction to a single 2D surface. To be more general purpose, the device must be able to change surfaces on its own.
It was appreciated that the device attracts attention. Participants partly attributed this to the device's novelty but also to its ability to move into the users' field of view. While this can be seen as an advantage, it can also distract users from other tasks or content.
Participants discussed the potential benefit of the developed scenarios. They agreed that using the device could reduce the complexity of new tasks by providing support, for instance in-situ information (assistance when acting in an unknown environment, e.g., in the kitchen or museum scenario).
7 Conclusion
In this paper, we explored the space of applications for self-actuated displays. Assuming that self-actuated devices are a third class of devices that fill a space between mobile and static devices, we developed the concept for a novel type of self-actuated display device. We implemented this concept through a prototype that is able to autonomously move on vertical ferromagnetic surfaces. Based on the results of a series of focus groups, we derived a categorization for applications of self-actuated displays. To further explore this space, we derived four application scenarios and implemented them as video prototypes showing interactive self-actuated displays in four application domains. Evaluating the video prototypes revealed that participants see advantages but also limitations of self-actuated displays. In particular, it is important that self-actuated devices are quiet and sufficiently fast to follow or guide a moving user. If this is the case, the device's physical position and movement provide a way to attract users' attention and can also encode information.
In this work, we used a particular self-actuated display to explore use cases and to investigate them further through concrete scenarios. We are therefore interested in extending the work through the use of other self-actuated devices [24, 25, 29]. On a technical level, we are interested in approaches that extend the mobility of self-actuated displays for vertical surfaces. In particular, ferromagnetic wall paint could be used to make existing surfaces accessible to the current prototype. Further options are adding movable suction cups or adhesive pads that could either be used to get across non-ferromagnetic spaces or to enable free movement on arbitrary surfaces.
Notes
- 1.
- 2.
- 3.
- 4. 3D-models to reproduce the robot are available at: https://github.com/patrigg/WallDisplay.
- 5.
- 6.
References
Bianchi, A., Oakley, I.: Designing tangible magnetic appcessories. In: Proceedings of TEI 2013, pp. 255–258 (2013)
Card, S.K., Mackinlay, J.D., Robertson, G.G.: A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9(2), 99–122 (1991)
Dang, C.T., André, E.: TabletopCars: Interaction with active tangible remote controlled cars. In: Proceedings of TEI 2013, pp. 33–40 (2013)
Fitzmaurice, G.W., Ishii, H., Buxton, W.A.S.: Bricks: Laying the foundations for graspable user interfaces. In: Proceedings of CHI 1995, pp. 442–449 (1995)
Frei, P., Su, V., Mikhak, B., Ishii, H.: Curlybot: Designing a new class of computational toys. In: Proceedings of CHI 2000, pp. 129–136 (2000)
Guerra, P.: RemoteBunnies: Multi-agent phenomena mapping between physical environments. In: Proceedings of TEI 2013, pp. 347–348 (2013)
Guo, C., Young, J.E., Sharlin, E.: Touch and toys: New techniques for interaction with a remote group of robots. In: Proceedings of CHI 2009, pp. 491–500 (2009)
Hassenzahl, M., Monk, A.: The inference of perceived usability from beauty. Hum. Comput. Interact. 25(3), 235–260 (2010)
Hennecke, F., Wimmer, R., Vodicka, E., Butz, A.: Vertibles: Using vacuum self-adhesion to create a tangible user interface for arbitrary interactive surfaces. In: Proceedings of TEI 2012, pp. 303–306 (2012)
Ishii, H., Ullmer, B.: Tangible bits: Towards seamless interfaces between people, bits and atoms. In: Proceedings of CHI 1997, pp. 234–241 (1997)
Jordà, S., Geiger, G., Alonso, M., Kaltenbrunner, M.: The reacTable: Exploring the synergy between live music performance and tabletop tangible interfaces. In: Proceedings of TEI 2007, pp. 139–146 (2007)
Kato, J., Sakamoto, D., Igarashi, T.: PhyBots: A toolkit for making robotic things. In: Proceedings of DIS 2012, pp. 248–257 (2012)
Kitzinger, J.: The methodology of focus groups: the importance of interaction between research participants. Sociol. Health Illn. 16(1), 103–121 (1994)
Krzywinski, A., Mi, H., Chen, W., Sugimoto, M.: RoboTable: A tabletop framework for tangible interaction with robots in a mixed reality. In: Proceedings of ACE 2009, pp. 107–114 (2009)
Kuznetsov, S., Paulos, E., Gross, M.D.: WallBots: Interactive wall-crawling robots in the hands of public artists and political activists. In: Proceedings of DIS 2010, pp. 208–217 (2010)
Lee, J., Post, R., Ishii, H.: ZeroN: Mid-air tangible interaction enabled by computer controlled magnetic levitation. In: Proceedings of UIST 2011, pp. 327–336 (2011)
Leitner, J., Haller, M.: Geckos: Combining magnets and pressure images to enable new tangible-object design and interaction. In: Proceedings of CHI 2011, pp. 2985–2994 (2011)
Liang, R.H., Cheng, K.Y., Chan, L., Peng, C.X., Chen, M.Y., Liang, R.H., Yang, D.N., Chen, B.Y.: GaussBits: Magnetic tangible bits for portable and occlusion-free near-surface interactions. In: CHI EA 2013, pp. 2837–2838 (2013)
Nowacka, D., Ladha, K., Hammerla, N.Y., Jackson, D., Ladha, C., Rukzio, E., Olivier, P.: Touchbugs: Actuated tangibles on multi-touch tables. In: Proceedings of CHI 2013, pp. 759–762 (2013)
Pangaro, G., Maynes-Aminzade, D., Ishii, H.: The actuated workbench: Computer-controlled actuation in tabletop tangible interfaces. In: Proceedings of UIST 2002, pp. 181–190 (2002)
Patten, J., Ishii, H., Hines, J., Pangaro, G.: Sensetable: A wireless object tracking platform for tangible user interfaces. In: Proceedings of CHI 2001, pp. 253–260 (2001)
Pinhanez, C.: The everywhere displays projector: a device to create ubiquitous graphical interfaces. In: Abowd, G.D., Brumitt, B., Shafer, S. (eds.) UbiComp 2001. LNCS, vol. 2201, pp. 315–331. Springer, Heidelberg (2001)
Rosenfeld, D., Zawadzki, M., Sudol, J., Perlin, K.: Physical objects as bidirectional user interface elements. IEEE Comput. Graph. Appl. 24(1), 44–49 (2004)
Schneegass, S., Alt, F., Scheible, J., Schmidt, A.: Midair displays: Concept and first experiences with free-floating pervasive displays. In: Proceedings of PerDis 2014, pp. 27:27–27:31 (2014)
Seifert, J., Boring, S., Winkler, C., et al.: Hover pad: Interacting with autonomous and self-actuated displays in space. In: Proceedings of UIST 2014, pp. 139–147 (2014)
Shiotani, S., Tomonaka, T., Kemmotsu, K., Asano, S., Oonishi, K., Hiura, R.: World's first full-fledged communication robot "Wakamaru" capable of living with family and supporting persons. Mitsubishi Juko Giho 43(1), 44–45 (2006)
Somanath, S., Sharlin, E., Sousa, M.: Integrating a robot in a tabletop reservoir engineering application. In: Proceedings of HRI 2013, pp. 229–230, March 2013
Sugimoto, M., Fujita, T., Mi, H., Krzywinski, A.: RoboTable2: A novel programming environment using physical robots on a tabletop platform. In: Proceedings of ACE 2011, pp. 10:1–10:8 (2011)
Tominaga, J., Kawauchi, K., Rekimoto, J.: Around me: A system with an escort robot providing a sports player’s self-images. In: Proceedings of AH 2014, pp. 43:1–43:8 (2014)
Underkoffler, J., Ishii, H.: Urp: A luminous-tangible workbench for urban planning and design. In: Proceedings of CHI 1999, pp. 386–393 (1999)
Weiss, M., Schwarz, F., Jakubowski, S., Borchers, J.: Madgets: Actuating widgets on interactive tabletops. In: Proceedings of UIST 2010, pp. 293–302 (2010)
Acknowledgements
This work was supported by the graduate program Digital Media of the Universities of Stuttgart and Tübingen, and the Stuttgart Media University.