Open Access. Published by De Gruyter, November 13, 2020. Licensed under CC BY 4.0.

How context and design shape human-robot trust and attributions

Hannah Biermann, Philipp Brauner, and Martina Ziefle

Abstract

In increasingly digitized working and living environments, human-robot collaboration is growing fast, with human trust toward robotic collaboration as a key factor for the innovative teamwork to succeed. This article explores the impact of the design of the robotic interface (anthropomorphic vs functional) and the usage context (production vs care) on human-robot trust and attributions. The results of a scenario-based survey with N = 228 participants showed a higher willingness to collaborate with production robots than with care robots. Context and design influenced the trust attributed to the robots: robots with a functional (technical) appearance in production were trusted more than anthropomorphic robots or robots in the care context. The evaluation of attributions by means of a semantic differential showed that differences between the robot designs were less pronounced in the production context than in the care context; in the latter, anthropomorphic robots were associated with positive attributes. The results contribute to a better understanding of the complex nature of trust in automation and can be used to identify and shape use case-specific risk perceptions as well as perceived opportunities of interacting with collaborative robots. The findings of this study are pertinent to research (e.g., experts in human-robot interaction) and industry, with special regard to technical development and design.

1 Introduction

Responding to global trends, human-robot collaboration (HRC) is on the rise in many areas [1], especially those affected by both demographic change and digitization, such as health care and production. In an aging society with fewer working citizens and more people in need of care [2], and in which innovative advances in automated technology become a tangible experience [3], autonomous robots provide relieving assistance for future working and living environments [4].

Today, collaborative robots are already an integral and essential part of manufacturing industries where, for example, routine tasks in assembly lines are automated using robots under human supervision and control [5]. Next to gains in efficiency and effectiveness, robot roles and responsibilities include the execution of strenuous, risky, and dangerous tasks, such as handling heavy, toxic, or dangerous objects, to increase safety and enhance human labor [6]. Considering advanced HRC, humans and robots interact in close physical proximity by sharing their workspace [6]. This particularly applies to health care where robots can be used for daily nursing services (e.g., cleaning, feeding, and bathing), physical and cognitive coaching (e.g., physiotherapy or memory training), as well as psychological and social caregiving (e.g., communication and conversation to prevent loneliness in old age) [7].

Automation, however, may trigger risk perceptions among users for several reasons, such as lack of experience [8], technology misuse [9], media priming [10], or fear of innovation [11]. Moreover, especially in fragile usage contexts such as health care, the idea of being cared for by autonomous robots can, under some circumstances, be incompatible with the requirements of humane care, which is rooted in a deep bond between caregiver and patient [12]. In this context, trust is a key factor for the development of users’ acceptance of automated technology and crucial for users’ decisions to accept and use or to reject a technology [12,13].

Previous studies on the factors that build and moderate trust are manifold (see, for example, [8,14,15,16,17]), including the development of measurement scales to assess trust in automation. Yagoda and Gillan [18] provided a tool specially tailored to the many facets of human-robot interaction (HRI) relevant to trust (e.g., context, task, and system), particularly suitable for empirical-experimental research on trust in human interaction with an automated robotic system. However, trust is often measured using Likert scales alone, leaving out other valuable instruments for a deep understanding and measurement of individual perceptions and mental models, such as semantic differentials, which we have therefore included in this study.

Robot design and appearance have emerged as decisive factors for trust across different research settings, usage contexts, and user needs. Special research focus has been given to anthropomorphism, meaning that the design of a robot, its physical form, but also its behavior and the way it interacts, mimics humans [19,20]. To examine the influence of anthropomorphism on people’s attitudes and trust toward robots, embodiment was manipulated and evaluated with regard to acceptance, including humanlike (anthropomorphic), zoomorphic, caricatured, and functional designs [21]. In general, robot appearance (from anthropomorphic to functional) influenced likeability and empathy: less human-looking robots were liked less, and users’ empathy was stronger toward anthropomorphic robots [22,23]. Further studies revealed that anthropomorphic design features (e.g., facial expressions, gestures, communication) increased humans’ trust in and intention to use robots [24,25].

Particularly in social use cases, robot appearance seems to be important. Regarding care, Stuck and Rogers [26] found that older adults consider robot materials and clothing as trust indicators, presumably due to increased body contact during nursing. They further identified context-specific trust dimensions and features, such as easy handling and gentleness as preferred robot characteristics [26]. Interestingly, using a robot in health care is not a clear-cut “yes” or “no” decision [12]. In a study in which participants were asked for which specific care situations they would prefer human vs robotic care, situations were identified in which humans were preferred (e.g., giving medication, spoon-feeding), but also care situations in which the care robot was selected as the care authority (e.g., putting someone to bed, helping someone use the toilet). Apparently, it is not only the context per se which shapes the acceptance of robotic support but also the specific function for which the robotic care is applied [12]. To realize personalized robotic care, Portugal et al. [27] developed a mobile service robot with flexible functions adaptive to (changing) user needs. The robot design considered size, shape, color, and acoustics to enable multimodal services for advanced HRI, including facial expressions for robotic emotional interaction [27]. The robot was perceived as more machine-like than humanlike, which did not limit its acceptance but prompted demand for additional design elements (e.g., arms) [27]. Altogether, this emphasizes the relevance of the functional and, at the same time, visual robot design.

In other contextual settings (organizational group work, for example), trust depended on environmental and situational factors (e.g., team collaboration and task type) [15]. These findings indicate a strong impact of the concrete application and usage context on users’ perception and evaluation of HRC. However, most studies have focused on single applications and usage contexts. Research that compares trust perceptions toward robots across different contexts and domains to identify key predictors is still rare and so far limited to other application fields, such as smart home and autonomous driving [28]. The present research addresses this gap.

2 Questions addressed and logic of the empirical approach

The aim of this study was to derive scientifically based conclusions, with regard to robot design and behavior, for the development of use case-specific robots, in order to increase people’s trust in and willingness to use robots in different contexts.

Our research addresses the user-centered evaluation of autonomous robots depending on contextual and design factors. We study people’s trust, use intention, and differences in the attribution of autonomous robots in a care vs a production scenario – two application fields made particularly relevant by global challenges – taking into account design factors that have already been identified as decisive for trust.

Given the limited knowledge of the influence of contextual factors on human-robot trust, we used an explanatory, structure-discovering mix of methods with a two-step empirical approach for developing and validating hypotheses. Based on qualitative data from preceding focus group discussions, we developed a quantitative online questionnaire to answer the following research questions:

  1. How is the use of robots perceived and evaluated in the care vs production context?

  2. What impact do contextual and design factors have on human-robot trust?

  3. Which attributes do users associate with HRC?

3 Method

Data and questionnaire items (Table 1) provided in the following sections have been translated from German. If not stated otherwise, the scales in the survey are based on self-developed items resulting from the qualitative preliminary study.

Table 1: Items used in the survey

Care Experience
  - I regularly care for other people
  - I have cared for another person before
  - I have only witnessed care but never provided care myself
  - I have no care experience

Production Experience
  - I work in the field of production
  - I have worked in production before
  - I have insight into production but have never worked there
  - I have no production experience

Robot Experience
  - I have contact with robots at work
  - I have contact with robots both privately and professionally
  - I have contact with robots in my free time
  - I have no experience with robots

Attitude Towards Robots [30]
  - Robots are good for society because they help people
  - Robots destroy jobs
  - Robots are necessary because they can do jobs that are too heavy or too dangerous for humans
  - Robots are a technology that requires careful handling
  - The widespread use of robots can promote employment opportunities
  - The use of robots reduces communication between people

Interpersonal Trust Disposition [31]
  - You can’t rely on anyone these days
  - In general, people can be trusted
  - I am convinced that most people have good intentions

Trust Disposition Towards Robots (adapted from [31])
  - You can’t rely on robots
  - In general, robots can be trusted
  - I am convinced that most robots have good intentions

Affinity Towards Technology (shortened from [29])
  - I like to deal more closely with technical systems
  - I like to try out the functions of new technical systems
  - I enjoy spending time getting to know a new technical system
  - I try to understand exactly how a technical system works

Perception (Belief/Judgment)
  - There will be more autonomous robots
  - Humans remain important in the future
  - Robots will increasingly replace human labor
  - Robots will increasingly assume responsible tasks
  - Robots will relieve humans but not replace them
  - Humans and robots will collaborate well

Use Intention (cf. [32])
  - I welcome the use of robots
  - I would like to collaborate with robots
  - I consider robots to be useful

Trust in Automation (shortened from [33])
  - I am suspicious of the system’s actions
  - The system’s actions will have a harmful or injurious outcome
  - The system is reliable
  - The system is dependable
  - I can trust the system

3.1 Empirical research design

The questionnaire consisted of three parts: first, we surveyed the participants’ demographics (e.g., age, gender, and educational background) as well as explanatory factors, such as the affinity toward technology [29], attitude toward robots [30], interpersonal trust [31], and the disposition to trust robots (adapted from Beierlein et al.’s scale [31]).

Second, we introduced care and production as two usage contexts for robots and measured the participants’ intention to interact with the robots [32]. We also asked participants to estimate the role robots will play in these areas in the future (context as within-subject factor). Here we wanted to know which developments regarding HRC the participants expect and whether they personally are critical or optimistic toward these developments.

Third, we presented two different robotic designs in two different usage scenarios to the participants and asked for an evaluation of the robots. Here we used a 2 × 2 mixed design with context (production vs care) as between-subject and design (anthropomorphic vs functional) as within-subject factor. For each combination of the factors, we created a scenario with a textual description and an image illustrating the scenario. For both the texts and the images, we ensured that they were similar with regard to factors not under investigation (for example, same perspective, same image style, and same colors) and differed only in the experimental factors (that is, context and design). Figure 1 shows the robots presented in the survey for both designs and contexts. We used two levels for each of the two factors to keep the required sample size and the duration of the survey reasonable.

Figure 1: Images for illustrating the scenarios with two different designs and in two different contexts.

We measured the effect of both factors by asking the participants whether they trusted the robot and for their perception of the robots. For measuring trust, we used the five items from Jian et al.’s scale [33] with the highest item-total correlation. For measuring the perception, we used a semantic differential that we developed through brainstorming sessions and focus groups. It consists of 22 items and captures various functional and nonfunctional aspects of the robots as social agents (e.g., reliable–unreliable, pretty–ugly, warmhearted–coldhearted; see Table 2).
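As a minimal illustration of this item-selection step, the following Python sketch computes corrected item-total correlations for a set of scale items and keeps the five highest. The file and column names are hypothetical placeholders, not the authors’ materials, and the corrected variant (each item against the sum of the remaining items) is one common convention.

```python
import pandas as pd

# Hypothetical sketch: select the five items of a trust scale with the
# highest corrected item-total correlation. File and column names are
# illustrative placeholders, not the authors' actual materials.
responses = pd.read_csv("trust_scale_items.csv")  # one column per item, one row per participant

def corrected_item_total(df: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items."""
    total = df.sum(axis=1)
    return pd.Series({col: df[col].corr(total - df[col]) for col in df.columns})

ritc = corrected_item_total(responses)
print("Retained items:\n", ritc.sort_values(ascending=False).head(5))
```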

Table 2: Items and respective means of the semantic differential (0–5 points) for evaluating the robotic designs and scenarios. Items marked (r) were reversed in the survey. Lower numbers represent more positive evaluations.

| Item pair | Production, anthropomorphic | Production, functional | Care, anthropomorphic | Care, functional |
| Curious – Uninterested | 2.36 | 2.78 | 1.93 | 2.61 |
| Open – Reserved (r) | 2.53 | 2.86 | 2.11 | 2.76 |
| Conscientious – Sloppy | 1.30 | 1.09 | 1.59 | 1.48 |
| Sociable – Critical (r) | 1.96 | 1.84 | 1.65 | 2.29 |
| Balanced – Irritable | 1.53 | 1.40 | 1.57 | 1.51 |
| Intelligent – Foolish | 1.65 | 1.89 | 1.69 | 1.77 |
| Supportive – Restricting | 1.17 | 0.90 | 1.33 | 1.32 |
| Cautious – Adventurous (r) | 1.68 | 1.75 | 1.77 | 1.73 |
| Good – Bad (r) | 1.68 | 1.68 | 1.32 | 1.94 |
| Attractive – Disgusting (r) | 2.33 | 2.46 | 1.98 | 2.78 |
| Appealing – Disagreeable | 2.16 | 2.67 | 1.72 | 2.92 |
| Warmhearted – Coldhearted | 3.12 | 3.49 | 2.02 | 3.48 |
| Pretty – Ugly | 2.42 | 2.97 | 2.11 | 3.18 |
| Creative – Unimaginative | 2.97 | 3.32 | 2.44 | 3.31 |
| Independent – Dependent | 1.93 | 2.17 | 1.91 | 1.90 |
| Strong – Weak (r) | 1.63 | 1.32 | 2.46 | 1.31 |
| Self-confident – Shy | 1.93 | 2.02 | 2.05 | 1.84 |
| Predictable – Unpredictable (r) | 1.93 | 1.57 | 1.84 | 1.98 |
| Usable – Inoperable | 1.64 | 1.00 | 0.97 | 1.64 |
| Trustful – Doubtful | 1.89 | 1.83 | 1.94 | 2.20 |
| Reliable – Unreliable | 1.35 | 1.27 | 1.48 | 1.62 |
| Social – Unsocial (r) | 2.63 | 3.51 | 2.06 | 3.14 |
| Average | 1.99 | 2.08 | 1.82 | 2.21 |

We used 4-point Likert items (from 0, “I don’t believe that”/“I don’t appreciate that”, to 3, “I believe that”/“I appreciate that”), 6-point Likert items (from 0, “totally disagree”, to 5, “totally agree”), and 6-point semantic differentials (pairs of two opposing adjectives) to collect the responses. The order of the pairs was randomized in the survey but recoded to the canonical order for further analyses (lower numbers indicating more positive attributions). Responses were voluntary (no forced choice).
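A minimal sketch of this recoding step, assuming the responses sit in a pandas DataFrame with one column per adjective pair; the item names and the reversed subset follow the (r) marks in Table 2 but are otherwise illustrative:

```python
import pandas as pd

# Toy data: 6-point semantic differential (0-5); some items were presented
# with reversed polarity in the survey (marked "(r)" in Table 2).
data = pd.DataFrame({
    "good_bad": [1, 4, 5],                 # reversed in the survey
    "warmhearted_coldhearted": [2, 3, 0],  # already canonical
})

REVERSED_ITEMS = ["good_bad"]  # illustrative subset
SCALE_MAX = 5

# Map reversed items back to the canonical orientation so that lower
# values always mean a more positive attribution.
data[REVERSED_ITEMS] = SCALE_MAX - data[REVERSED_ITEMS]
print(data)
```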

Sample acquisition took place in Germany. We distributed the link to the online questionnaire through our personal social networks via direct contact, instant messaging, and e-mail, as well as Facebook and other computer-mediated social networks, without incentives. As we did not seek a specific target group, we were interested in collecting answers from a broad audience in order to capture a general public perception of HRC scenarios. The median completion time of the survey was 16 min. Figure 2 illustrates the experimental design.

Figure 2: Flow of the survey, starting with explanatory user factors, measurement of usage intention and the future role of robots (for care and production), and assessment of trust and robot perception for the two contexts and two designs.

3.2 Description of the sample

In total, 388 participants started the web-based survey. We had to exclude 160 participants, most of whom had not completed the survey (e.g., opened it without participating) or had rushed through it. The rate of rejected responses is typical for this sampling technique [34].

The final sample (N = 228) consisted of 125 women (54.8%) and 99 men (43.4%); 4 participants provided no gender information (1.8%). Age ranged between 15 and 80 years (M = 34.6; SD = 14.4; n = 227). With 80 high school graduates (35.1%) and 71 university graduates (31.1%), the educational level was comparatively high [35]. The participants were slightly trustful toward other people (M = 2.86; SD = 1.14; range 0–5) and had an affinity for technology (M = 3.18; SD = 1.47; range 0–5).

Asked about prior care experience, 12.7% (n = 29) of the participants cared for other people regularly, 19.3% (n = 44) had cared for another person in the past, and 30.3% (n = 69) had only witnessed care; 37.7% (n = 86) had no care experience. Regarding production experience, 6.1% (n = 14) of the participants worked in production contexts at the time of questioning, 18.4% (n = 42) had worked there before, and 24.1% (n = 55) had insight into production without having worked there. The majority of the sample (51.3%, n = 117) reported no prior experience in production.

Regarding the use of toy, household, or industrial robots, 7.0% of the participants (n = 16) collaborated with robots at work, 11.8% (n = 27) used robots both privately and professionally, and 35.1% (n = 80) experienced robots in their free time. Nearly half of the participants (46.1%, n = 105) had no experience with robots.

The participants’ disposition to trust robots was only slightly above the scale’s midpoint (M = 2.53; SD = 1.57; range 0–5). Their highest agreement was with the statement that one cannot rely on robots (M = 3.08; SD = 1.47). Thus, the participants had only limited trust in robots (see Figure 3, upper part).

Figure 3: Attitude toward robots and trust in robots (mean values with 95% confidence intervals).

Regarding the general attitude toward robots (before any introduction of the concrete usage scenarios), the participants found that robots are necessary to relieve humans from heavy or dangerous work (M = 3.97; SD = 1.35) but also that robots require careful handling (M = 3.86; SD = 1.36; see Figure 3, lower part; range 0–5). The least agreed item of the scale was whether robots will lead to more employment opportunities (M = 2.39; SD = 1.54); on the contrary, there is some fear that robots might destroy jobs (M = 3.00; SD = 1.56). The question whether robots are good for society was rated just slightly above the center of the scale (M = 3.13; SD = 1.39). In summary, and as depicted in Figure 3, the participants saw certain advantages of robots on the one hand and disadvantages and risks on the other, which justifies a deeper investigation of this phenomenon.

4 Results

Descriptive and inferential statistics were used for data analysis. The level of significance (α) was set at 5%. For effect sizes, partial eta-squared (η²) and Cohen’s d are reported.
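For reference, a small sketch of both effect-size measures under standard textbook formulas (the article does not spell out the exact variants used; the reported d values are consistent with the paired d_z below):

```python
import numpy as np

def cohens_d_paired(x, y):
    """Cohen's d_z for a within-subject contrast: mean of the pairwise
    differences divided by the standard deviation of the differences."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.mean() / diff.std(ddof=1)

def partial_eta_squared(ss_effect, ss_error):
    """Partial eta-squared from ANOVA sums of squares."""
    return ss_effect / (ss_effect + ss_error)
```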

The results section is structured as follows: first, we analyze the expectations and evaluations regarding the future role of robots and use intentions in care vs production. Then we describe the influence of contextual and design factors on human–robot trust. Subsequently, to better understand the cross-domain trust dimensions, we compare how context and design shape properties attributed to the robots using the semantic differential.

4.1 Expectations and evaluations of future HRI

First, we analyzed the expectations of robots in care and production on two different scales (range 0–3): the question was whether the participants believed that the same scenarios for robot use (presented as items, such as “there will be more autonomous robots”) apply to the contexts of care and production (scale 1: “Do you believe that?”) and, further, whether they considered them good or bad (scale 2: “Do you appreciate that?”).

According to the first scale, expectations about future HRC differed significantly between the usage contexts (Figure 4a). Particularly regarding the rise of robots, item ratings showed strong differences, t = 11.04, p < 0.001, d = 0.731, with robot use assumed to be more likely in the production scenario (M = 2.63; SD = 0.72) than in the care scenario (M = 1.70; SD = 1.19). Opinions also differed significantly regarding “robots will increasingly replace human labor”, t = 10.33, p < 0.001, d = 0.684, and “robots will increasingly assume responsible tasks”, t = 8.10, p < 0.001, d = 0.536: in both cases, agreement was higher for the production scenario. In contrast, the assumption that humans remain important in the future was particularly expected for care, t = 8.21, p < 0.001, d = 0.543.

Figure 4: Effect of usage context on expectations (left) and evaluations (right) of future HRC. (a) Expectations about future HRC in both contexts (“Do you believe that […]?”). (b) Evaluations of future HRC in both contexts (“Do you appreciate that […]?”) (mean values with 95% confidence intervals).

According to the second scale, evaluations of future HRC differed significantly between the two contexts only in part (Figure 4b). This included “there will be more autonomous robots”, t = 3.93, p < 0.001, d = 0.260, which was more appreciated in production (M = 1.87; SD = 1.06) than in care (M = 1.55; SD = 1.15), although in both contexts the level of agreement was rather low and only just above the scale center. Evaluations regarding “robots will increasingly assume responsible tasks” also differed significantly, t = 4.51, p < 0.001, d = 0.298, with little appreciation in either scenario, particularly in care. This was different for “humans remain important in the future”: evaluations differed significantly, t = 4.56, p < 0.001, d = 0.302, but were favorable in both contexts. The participants appreciated good teamwork of humans and robots, t = 5.83, p < 0.001, d = 0.386, particularly in production, though.

These findings point to an ambivalence between what the participants believed and what they considered good: they expected an increased use of robots in the future, particularly in production, whereas the overall appreciation was rather restrained, especially in care, which was considered to remain human. However, the participants also expected and welcomed relief through the use of robots in working environments and were generally positive about future HRC.

In a next step, we analyzed the use intention for HRC (range 0–5), which, on average, differed significantly between the contexts, t = 7.69, p < 0.001, d = 0.509, and was generally more affirmative in production (M = 3.53; SD = 1.36) than in care (M = 2.72; SD = 1.66); a computational sketch of this comparison follows Figure 5. Figure 5 shows the item ratings in detail. In particular, the perceived usefulness of robots was significantly higher, t = 8.51, p < 0.001, d = 0.563, in production (M = 4.00; SD = 1.41) compared to care (M = 2.96; SD = 1.79). Furthermore, the participants welcomed the use of robots in production (M = 3.70; SD = 1.55) significantly more, t = 7.18, p < 0.001, d = 0.475, than in care (M = 2.83; SD = 1.79). They were also more willing to collaborate with robots in production (M = 2.88; SD = 1.79) than in care (M = 2.37; SD = 1.81), where the willingness to use was significantly lower, t = 4.04, p < 0.001, d = 0.267.

Figure 5: Intention to use robots in production vs care (mean values with 95% confidence intervals).
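The paired comparison reported above can be sketched as follows. The arrays are simulated stand-ins that only roughly match the reported means and SDs and ignore the correlation present in real within-subject data, so the printed statistics will not reproduce the paper’s values exactly:

```python
import numpy as np
from scipy import stats

# Simulated stand-in ratings for the within-subject context comparison
# of use intention (0-5 scale); not the study's actual data.
rng = np.random.default_rng(42)
ui_production = rng.normal(3.53, 1.36, 228).clip(0, 5)
ui_care = rng.normal(2.72, 1.66, 228).clip(0, 5)

t, p = stats.ttest_rel(ui_production, ui_care)
d = t / np.sqrt(len(ui_production))  # paired d_z, consistent with the reported d values
print(f"t = {t:.2f}, p = {p:.4g}, d = {d:.3f}")
```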

In addition, correlation analyses revealed significant relations between user factors and the intention to use robots in care and production. In particular, the participants’ disposition to trust robots, attitude toward robots, technology affinity, and highest educational attainment correlated positively with the use intention in both contexts (see Table 3).

Table 3: Spearman’s rho correlation coefficients for user factors and use intention (UI) of care and production robots (*p < 0.05, **p < 0.01, ***p < 0.001)

| User factor | UI (care) | UI (production) |
| Attitude toward robots | 0.472*** | 0.513*** |
| Trust disposition toward robots | 0.350*** | 0.419*** |
| Technology affinity | 0.226** | 0.323*** |
| Education | 0.182** | 0.169* |
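Correlations like those in Table 3 can be computed with a standard rank-correlation routine; a sketch with hypothetical column names and toy values:

```python
import pandas as pd
from scipy.stats import spearmanr

# Toy data, one row per participant; column names are illustrative.
df = pd.DataFrame({
    "attitude_toward_robots": [3.2, 4.1, 2.5, 3.8, 1.9, 4.5],
    "ui_production":          [3.5, 4.5, 2.0, 4.0, 1.5, 5.0],
})

rho, p = spearmanr(df["attitude_toward_robots"], df["ui_production"])
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
```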

4.2 Impact of context and design on human-robot trust

Next, we analyzed whether the two presented usage contexts (production and care) and the two robotic designs (anthropomorphic and functional) influenced the trust people attributed to the robots. To this end, we calculated a repeated-measures (RM) analysis of variance (ANOVA) with context as the between-subject variable, design as the within-subject variable, and perceived trust as the dependent variable (Section 3.1).

Surprisingly, neither the usage context, F(1, 226) = 2.453, p = 0.119, η² = 0.011, nor the design, F(1, 226) = 1.962, p = 0.163, η² = 0.009, of the robot had a significant main effect on the reported trust in the robot. However, there was a small but significant interaction effect between context and design, F(1, 226) = 5.606, p = 0.019, η² = 0.024: although we could not identify a direct effect of either factor, the combination of context and design affected trust in automation. As Figure 6 shows, the participants trusted robots with a functional appearance in production environments significantly more than anthropomorphic robots or robots in the care context. The effect of context and design (measured on the trust-in-robots scale) is small. We assume that this is due to the lack of an actual interaction with the robotic system and the missing frames of reference in the scenario-based survey. Nonetheless, we consider this difference a starting point for further investigating how design and context affect the perception of robots.
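A sketch of such a 2 × 2 mixed ANOVA using the pingouin package, assuming long-format data with one row per participant and design condition; the toy values and variable names are illustrative, not the authors’ code or data:

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: context is between-subject, design within-subject.
long_df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "context": ["production"] * 6 + ["care"] * 6,
    "design": ["anthropomorphic", "functional"] * 6,
    "trust": [3.1, 3.6, 2.9, 3.4, 3.0, 3.5, 3.3, 2.5, 3.2, 2.6, 3.4, 2.4],
})

aov = pg.mixed_anova(data=long_df, dv="trust", within="design",
                     subject="participant", between="context")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta-squared
```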

Figure 6: Influence of design and context on trust in the robot (mean values with 95% confidence intervals).

4.3 Perception of robots

As trust was influenced by the combination of context and design, we next analyzed whether design and context also influenced the participants’ attributions toward the robots. To this end, we calculated an RM-ANOVA with context as the between-subject variable, design as the within-subject variable, and the average attribution from the 22-item semantic differential as the dependent variable. The difference in perception between the functional and anthropomorphic designs was less pronounced for the production scenario than for the care scenario. While the usage context as the between-subject factor had no effect on the perception of the robots, F(1, 226) = 0.053, p = 0.818, η² < 0.001, the design of the robots, F(1, 226) = 35.161, p < 0.001, η² = 0.158, as well as the interaction of both factors (context × design), F(1, 226) = 13.877, p < 0.001, η² = 0.058, significantly influenced the participants’ perception of the robots. According to Table 2, in the production context, the participants’ overall evaluation of the functional design (M = 2.08; SD = 0.66) was similar to that of the anthropomorphic design (M = 1.99; SD = 0.74). In contrast, the two designs were evaluated differently in the care context: here, the anthropomorphic design was perceived rather positively (M = 1.82; SD = 0.76), whereas the functional design was perceived rather negatively (M = 2.21; SD = 0.75).
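The cell means of Table 2 follow from averaging the recoded items into one attribution score per participant and design and then aggregating per condition; a sketch with two of the 22 items and illustrative names:

```python
import pandas as pd

# Toy long-format data: one row per participant x design condition;
# only two of the 22 semantic-differential items are shown.
long_df = pd.DataFrame({
    "context": ["production", "production", "care", "care"],
    "design": ["anthropomorphic", "functional"] * 2,
    "curious_uninterested": [2.0, 3.0, 2.0, 3.0],
    "warmhearted_coldhearted": [3.0, 4.0, 2.0, 3.0],
})
ITEM_COLUMNS = ["curious_uninterested", "warmhearted_coldhearted"]

long_df["attribution"] = long_df[ITEM_COLUMNS].mean(axis=1)
print(long_df.groupby(["context", "design"])["attribution"].mean())
```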

As Figure 7a shows, in the production scenario the functional design was perceived as more usable, more predictable, and stronger than the anthropomorphic design. In contrast, the anthropomorphic design was seen as much more warmhearted, curious, appealing, pretty, and social (see Figure 7a).

Figure 7: Perception of anthropomorphic and functional designs (within-subject factor) in care and production usage scenarios (between-subject factor), measured on a semantic differential. (a) Production scenario, (b) care scenario.

Figure 7b illustrates that the differences between the functional and the anthropomorphic robot design were much larger in the care context: here, the functional design was perceived as much stronger, but it was also associated much more with negative aspects than the anthropomorphic design. Specifically, the participants rated the functional design as rather bad, disgusting, and ugly. They believed that the robot is uninterested, unimaginative, and critical, while also finding it inoperable. Finally, they rated the functional design as disagreeable, reserved, unsocial, and coldhearted.

In summary, while there were some differences between the anthropomorphic and functional designs in the production context, the differences were greater in the care domain. Regarding care robots, the participants had rather negative views of the functional design, whereas the anthropomorphic design received more positive attributions. Table 2 lists the 22 items and the respective arithmetic means of the semantic differential for evaluating the robotic designs and scenarios.

5 Discussion

This study provides deeper insights into HRI by comparing two rising application fields: production and health care. Key findings indicate that collaborative robots are perceived differently not only across the investigated contexts but also depending on their physical shape and appearance. This demonstrates the importance of application-specific designs and evaluations, especially for research on and engineering of human-robot trust. In the following, we discuss the research findings with respect to their contribution to the state of the art and their implications for academia and industry.

The willingness to collaborate with robots was higher in the production than in the care context. Expectations for the future use of robots also differed considerably between the contexts: the participants expected a large increase in robotic coworkers in production, unlike in care. In line with previous research [36], it is not particularly surprising that expectations of robots in care – a highly intimate, socially interactive space – are ambivalent: despite perceived benefits, roles and responsibilities were strongly considered to remain human. Regarding HRC in production, there seems to be a discrepancy between the participants’ expectations of robots and their willingness to use them in practice: interestingly, participants indicated that they would like to collaborate with robots in production, yet showed only restrained support for actual robot use in this context (increasing robot use was commonly presumed but less appreciated). A possible explanation can be derived from our qualitative focus groups, which were run prior to the questionnaire study: here, perceptions of production robots were closely tied to a functional perspective on technical workflows or objects, as in the manufacturing industry, suggesting that the reduction in human workload is a strong usage motive, while at the same time the fear of losing one’s job became apparent. This is supported by Lotz et al. [37], who identified job loss as a key anxiety for HRC in industry. It is therefore advisable to look into this issue in more detail, including trade-offs between perceived barriers and benefits but also potential usage incentives.

Concerning human-robot trust, we find that context and design are influencing factors. However, neither the area of use nor the robot’s appearance was decisive on its own; taken together, they determined users’ trust perceptions. Apparently, there is no one-size-fits-all design based on the context or on a more or less anthropomorphic appearance. Rather, the specific combination of application context, the robot’s role, and its appearance seems to shape human trust toward robotic assistance. Robots with a functional design in production gained the highest level of trust. In care, anthropomorphic robots were perceived as most trustworthy, reflecting the participants’ attitude that care is human. This also becomes apparent in the perceptual differences in the scenario evaluations: in production, anthropomorphic design features were not an obstacle, but in the care context, they were essential. For robot development, it is of great interest to identify in detail the design factors that are (and are not) accepted and to better understand users’ (dis)trust in automation contexts. Looking more closely at the findings of the semantic differential, some dimensions can already be derived as initial input relating to the robot’s visual appearance and aesthetics but also (supporting Stuck and Rogers [26]) to social attitudes, soft skills, and personality traits. Therefore, it is important for future work not only to look at robot design (what exactly is perceived as (un)attractive?) but also to consider which behavioral patterns (e.g., sociability) and characteristics (openness, creativity, maliciousness, etc.) might exert a positive or negative influence on the perception and evaluation of HRC in specific use cases. These findings could then be integrated into research on the behavior design of robots for social interaction [38] and on the cognitive capabilities of robots in their role as social agents, so that they can recognize the needs and moods of users and interact with them more intuitively [39,40], toward an improved and accepted HRC. Besides, such findings could also serve as a caution against excessive humanization in robot design, since it has been shown that robots that appear too human are perceived as off-putting (the uncanny valley effect [41]).

With regard to the influence of individual differences on trust attribution patterns, trust dispositions and attitudes toward robots in particular were positively related to the use intention in the two presented contexts. This confirms previous studies on the relation between trust and use intention [13], demonstrating the importance of trust-exploring research in rising automation contexts. When it comes to the use of innovative robot technology, future research should investigate not only (dis)trust-related factors but also the extent to which these may be intrinsically motivated and related to people’s interpersonal trust perceptions, as a basis for psychometric analyses (cf. [42,43]). Regarding attitudes toward robots, the results obtained (Figure 3) are similar to a study of the European Commission [30], demonstrating overall positive and benefit-oriented (e.g., in terms of (work) relief) assessments but also concerns, particularly regarding the handling of robots, which may be due to a lack of experience. Follow-up studies are needed to gain deeper insights into attitude-related dimensions and to identify potential predictors of the use intention. In this regard, implicit attitudes (with important implications for trust in automation [44]) should also be considered, possibly by taking into account the pairs of attributes set up in the semantic differential.

From a methodological perspective, the semantic differential as a preliminary scale for assessing robots’ “personalities” represents a further contribution of this study. The scale, with its adjective pairs and dimensions, was developed from the findings of our focus group discussions and responds sensitively in certain areas depending on robot design and usage context (Table 2). Of course, the scale requires further refinement, but it is already a good starting point for other researchers in this field. Measuring trust in and perceptions of robots is hard, and offering more than Likert-type scales is mandatory to increase the participants’ motivation and to capture broader aspects of HRI. Therefore, empirical approaches are necessary that make it possible to measure users’ trust and acceptance in close interaction with robots, such as experimental trials in which robot materials, expressions, and gestures can be experienced in real-life environments.

6 Limitations

Of course, this study is not without limitations. First, the sample is not representative of the general population. Nevertheless, and despite its homogeneity, we were able to identify differences in the perception of robots in the two different contexts and between the two designs. We therefore assume that the identified effects would be even more pronounced in a larger and more representative sample; this is, of course, subject to a future study. As a starting point, it will be important that user factors are precisely addressed and balanced in the sample construction. This study’s sample was rather young, and we could not investigate age-related differences in robot perceptions. Consequently, further studies should investigate how age shapes the perception of robots and their attributions. However, novel HRC in production and care is only slowly leaving academia and entering real-world environments; the studied sample may therefore represent not the current but the future workforce of actual HRI scenarios. In this context, other user factors, such as openness to innovation, risk behavior, technical self-confidence, or even anxiety, might be relevant personality constructs that interact with the willingness to accept robotic assistance. In addition, care experience, gender, age, and aging concepts could significantly impact HRC acceptance [45,46]. Furthermore, a comparison of perspectives could reveal exciting insights, for example in care, by considering not only patients as future users but also caregivers as future coworkers [47]. This, of course, requires further studies focused on the specific task, context, and target audience, whereas this work studied the broader influence of context and design on perception and attribution.

Second, we used a scenario-based survey method instead of confronting participants with actual robots to measure robot trust and perception. Although studying real interactions would have been desirable, this approach enabled us to study a significantly larger sample. Furthermore, the visual illustrations and textual descriptions of the robot usage scenarios enabled the participants to imagine their interaction with the robots and to form their judgments accordingly. In addition, the participants evaluated only two of the four scenarios (context as a between-subject factor), which limits the statistical procedures and may underestimate potential effects. On the other hand, this shortened the questionnaire, and the participants were probably more motivated to respond conscientiously; in sum, this should have increased the quality of the quantitative data on which the study builds. Further studies should nevertheless validate the findings with tangible and interactive robots.

Third, in this study we investigated only two design alternatives and two usage scenarios. Next to design variations (e.g., zoomorphic, caricatured [21]), the context should be considered in a broader sense. On the one hand, this refers to the integration of further automation contexts (e.g., smart living environments [48]) and, on the other, to the investigation of cultural and country-specific differences, as the acceptance of and trust in robot technology may be motivated both personally and socially [22,49]. Since the participants of this survey were recruited in Germany, it would be of great interest to examine to what extent the key findings differ for test persons with diverse cultural backgrounds, experience in dealing with robots, and attitudes toward them, especially as there are already indications of cultural differences in users’ liking of and trust toward robots between Germany and Asian cultures, for example [22].

Acknowledgments

The authors thank all participants for their patience and openness to share opinions on future human-robot interaction. Special thanks to Semih Yildirim and Imke Haverkämper for research assistance. This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2023 Internet of Production – 390621612.

References

[1] B. Chandrasekaran and J. M. Conrad, “Human-robot collaboration: a survey,” in Proceedings of the IEEE SoutheastCon, Fort Lauderdale, Florida, USA, 2015, pp. 1–8. DOI: 10.1109/SECON.2015.7132964.

[2] United Nations, “World Population Prospects 2019. Highlights,” Tech. Rep., Department of Economic and Social Affairs, New York, 2019.

[3] European Commission, “Digitalisation research and innovation – transforming European industry and services,” Tech. Rep., European Commission, 2017.

[4] J. Schmidtler, V. C. Knott, C. Hoelzel, and K. Bengler, “Human centered assistance applications for the working environment of the future,” Occup. Ergonomics, vol. 12, no. 3, pp. 83–95, 2015. DOI: 10.3233/OER-150226.

[5] T. B. Sheridan, “Human-robot interaction: status and challenges,” Hum. Factors, vol. 58, no. 4, pp. 525–532, 2016. DOI: 10.1177/0018720816644364.

[6] A. Vysocky and P. Novak, “Human-robot collaboration in industry,” MM Sci. J., vol. 9, no. 2, pp. 903–906, 2016. DOI: 10.17973/MMSJ.2016_06_201611.

[7] A. Vercelli, I. Rainero, L. Ciferri, M. Boido, and F. Pirri, “Robots in elderly care,” Sci. J. Digital Cult., vol. 2, no. 2, pp. 37–50, 2017.

[8] K. A. Hoff and M. Bashir, “Trust in automation: integrating empirical evidence on factors that influence trust,” Hum. Factors, vol. 57, no. 3, pp. 407–434, 2015. DOI: 10.1177/0018720814547570.

[9] R. Parasuraman and V. Riley, “Humans and automation: use, misuse, disuse, abuse,” Hum. Factors, vol. 39, no. 2, pp. 230–253, 1997. DOI: 10.1518/001872097778543886.

[10] P. A. Hancock, D. R. Billings, and K. E. Schaefer, “Can you trust your robot?,” Ergonomics Des., vol. 19, no. 3, pp. 24–29, 2011. DOI: 10.1177/1064804611415045.

[11] M. König and L. Neumayr, “Users’ resistance towards radical innovations: the case of the self-driving car,” Transportation Res. Part F, vol. 44, pp. 42–52, 2017. DOI: 10.1016/j.trf.2016.10.013.

[12] M. Ziefle and A. C. Valdez, “Domestic robots for homecare: a technology acceptance perspective,” in Proceedings of the International Conference on Human Aspects of IT for the Aged Population, Vancouver, Canada, 2017, pp. 57–74. DOI: 10.1007/978-3-319-58530-7_5.

[13] T. Sanders, A. Kaplan, R. Koch, M. Schwartz, and P. A. Hancock, “The relationship between trust and use choice in human-robot interaction,” Hum. Factors, vol. 61, no. 4, pp. 614–626, 2019. DOI: 10.1177/0018720818816838.

[14] J. Lee and K. See, “Trust in automation: designing for appropriate reliance,” Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004. DOI: 10.1518/hfes.46.1.50_30392.

[15] K. E. Schaefer, J. Y. Chen, J. L. Szalma, and P. A. Hancock, “A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems,” Hum. Factors, vol. 58, no. 3, pp. 377–400, 2016. DOI: 10.1177/0018720816634228.

[16] S. M. Merritt and D. R. Ilgen, “Not all trust is created equal: dispositional and history-based trust in human-automation interactions,” Hum. Factors, vol. 50, no. 2, pp. 194–210, 2008. DOI: 10.1518/001872008X288574.

[17] P. A. Hancock, D. R. Billings, K. E. Schaefer, J. Y. Chen, E. J. De Visser, and R. Parasuraman, “A meta-analysis of factors affecting trust in human-robot interaction,” Hum. Factors, vol. 53, no. 5, pp. 517–527, 2011. DOI: 10.1177/0018720811417254.

[18] R. E. Yagoda and D. J. Gillan, “You want me to trust a ROBOT? The development of a human-robot interaction trust scale,” Int. J. Soc. Robot., vol. 4, no. 3, pp. 235–248, 2012. DOI: 10.1007/s12369-012-0144-0.

[19] J. Fink, “Anthropomorphism and human likeness in the design of robots and human-robot interaction,” in Proceedings of the International Conference on Social Robotics, Chengdu, China, 2012, pp. 199–208. DOI: 10.1007/978-3-642-34103-8_20.

[20] R. de Kervenoael, R. Hasan, A. Schwob, and E. Goh, “Leveraging human-robot interaction in hospitality services: incorporating the role of perceived value, empathy, and information sharing into visitors’ intentions to use social robots,” Tour. Manag., vol. 78, art. 104042, 2020. DOI: 10.1016/j.tourman.2019.104042.

[21] T. Fong, I. Nourbakhsh, and K. Dautenhahn, “A survey of socially interactive robots: concepts, design, and applications,” Robot. Autonomous Syst., vol. 42, no. 3–4, pp. 143–166, 2003. DOI: 10.1016/S0921-8890(02)00372-X.

[22] D. Li, P. L. Rau, and Y. Li, “A cross-cultural study: effect of robot appearance and task,” Int. J. Soc. Robot., vol. 2, pp. 175–186, 2010. DOI: 10.1007/s12369-010-0056-9.

[23] L. D. Riek, T. C. Rabinowitch, B. Chakrabarti, and P. Robinson, “How anthropomorphism affects empathy toward robots,” in Proceedings of the International Conference on Human-Robot Interaction, 2009, pp. 245–246. DOI: 10.1145/1514095.1514158.

[24] K. Liaw, D. Simon, and M. R. Fraune, “Robot sociality in human-robot team interactions,” in Proceedings of the International Conference on Human-Computer Interaction, 2019, pp. 434–440. DOI: 10.1007/978-3-030-30712-7_53.

[25] M. M. van Pinxteren, R. W. Wetzels, J. Rüger, M. Pluymaekers, and M. Wetzels, “Trust in humanoid robots: implications for services marketing,” J. Serv. Mark., vol. 33, no. 4, pp. 507–518, 2019. DOI: 10.1108/JSM-01-2018-0045.

[26] R. E. Stuck and W. A. Rogers, “Older adults’ perceptions of supporting factors of trust in a robot care provider,” J. Robot., vol. 2018, art. 6519713, 2018. DOI: 10.1155/2018/6519713.

[27] D. Portugal, P. Alvito, E. Christodoulou, G. Samaras, and J. Dias, “A study on the deployment of a service robot in an elderly care center,” Int. J. Soc. Robot., vol. 11, no. 2, pp. 317–341, 2019. DOI: 10.1007/s12369-018-0492-5.

[28] T. Brell, H. Biermann, R. Philipsen, and M. Ziefle, “Trust in autonomous technologies. A contextual comparison of influencing user factors,” in Proceedings of the HCI for Cybersecurity, Privacy and Trust, Orlando, Florida, USA, 2019, pp. 371–384. DOI: 10.1007/978-3-030-22351-9_25.

[29] T. Franke, C. Attig, and D. Wessel, “A personal resource for technology interaction: development and validation of the affinity for technology interaction (ATI) scale,” Int. J. Hum.-Comput. Interact., vol. 35, no. 6, pp. 456–467, 2019. DOI: 10.1080/10447318.2018.1456150.

[30] European Commission, “Einstellungen der Öffentlichkeit zu Robotern [Public attitudes towards robots],” Tech. Rep., European Commission, 2012.

[31] C. Beierlein, C. Kemper, A. Kovaleva, and B. Rammstedt, “Kurzskala zur Messung des zwischenmenschlichen Vertrauens: Die Kurzskala Interpersonales Vertrauen (KUSIV3) [Short scale for measuring interpersonal trust: the short scale interpersonal trust (KUSIV3)],” Tech. Rep., GESIS – Leibniz-Institut für Sozialwissenschaften, Mannheim, 2012.

[32] F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of information technology,” MIS Q., vol. 13, no. 3, pp. 319–340, 1989. DOI: 10.2307/249008.

[33] J.-Y. Jian, A. M. Bisantz, and C. G. Drury, “Foundations for an empirically determined scale of trust in automated system,” Int. J. Cognit. Ergonomics, vol. 4, no. 1, pp. 53–71, 2000. DOI: 10.21236/ADA388787.

[34] M. Galesic, “Dropouts on the web: effects of interest and burden experienced during an online survey,” J. Off. Stat., vol. 22, no. 2, pp. 313–328, 2006.

[35] Destatis, “17% of the population with academic degree,” 2018.

[36] S. Frennert, H. Eftring, and B. Östlund, “What older people expect of robots: a mixed methods approach,” in Proceedings of the International Conference on Social Robotics, Bristol, UK, 2013, pp. 19–29. DOI: 10.1007/978-3-319-02675-6_3.

[37] V. Lotz, S. Himmel, and M. Ziefle, “You’re my mate – acceptance factors for human-robot collaboration in industry,” in Proceedings of the International Conference on Competitive Manufacturing, Stellenbosch, South Africa, 2019, pp. 405–411.

[38] P. Lanillos, J. F. Ferreira, and J. Dias, “Designing an artificial attention system for social robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 4171–4178. DOI: 10.1109/IROS.2015.7353967.

[39] E. Wiese, G. Metta, and A. Wykowska, “Robots as intentional agents: using neuroscientific methods to make robots appear more social,” Front. Psychol., vol. 8, art. 1663, 2017. DOI: 10.3389/fpsyg.2017.01663.

[40] S. M. Anzalone, S. Boucenna, S. Ivaldi, and M. Chetouani, “Evaluating the engagement with social robots,” Int. J. Soc. Robot., pp. 1–14, 2015. DOI: 10.1007/s12369-015-0298-7.

[41] M. Mori, “The uncanny valley,” Energy, vol. 7, no. 4, pp. 33–35, 1970. DOI: 10.5749/j.ctvtv937f.7.

[42] P. Madhavan and D. Wiegmann, “Similarities and differences between human-human and human-automation trust: an integrative review,” Theor. Issues Ergonomics Sci., vol. 8, no. 4, pp. 277–301, 2007. DOI: 10.1080/14639220500337708.

[43] D. R. Billings, K. E. Schaefer, J. Y. C. Chen, and P. A. Hancock, “Human-robot interaction: developing trust in robots,” in Proceedings of the International Conference on Human-Robot Interaction, Boston, Massachusetts, USA, 2012, pp. 109–110. DOI: 10.1145/2157689.2157709.

[44] S. M. Merritt, H. Heimbaugh, J. LaChapell, and D. Lee, “I trust it, but I don’t know why: effects of implicit attitudes toward automation on trust in an automated system,” Hum. Factors, vol. 55, no. 3, pp. 520–534, 2013. DOI: 10.1177/0018720812465081.

[45] H. Biermann, J. Offermann-van Heek, S. Himmel, and M. Ziefle, “Ambient assisted living as support for aging in place: quantitative users’ acceptance study on ultrasonic whistles,” JMIR Aging, vol. 1, no. 2, art. e11825, 2018. DOI: 10.2196/11825.

[46] J. Offermann-van Heek and M. Ziefle, “Nothing else matters! Trade-offs between perceived benefits and barriers of AAL technology usage,” Front. Public Health, vol. 7, art. 134, 2019. DOI: 10.3389/fpubh.2019.00134.

[47] S. Erebak and T. Turgut, “Caregivers’ attitudes toward potential robot coworkers in elder care,” Cogn. Tech. Work, vol. 21, no. 2, pp. 327–336, 2019. DOI: 10.1007/s10111-018-0512-0.

[48] M. M. de Graaf, S. Ben Allouch, and J. A. van Dijk, “Why would I use this in my home? A model of domestic social robot acceptance,” Hum.-Comput. Interact., vol. 34, no. 2, pp. 115–173, 2019. DOI: 10.1080/07370024.2017.1312406.

[49] T. Turja and A. Oksanen, “Robot acceptance at work: a multilevel analysis based on 27 EU countries,” Int. J. Soc. Robot., vol. 11, no. 4, pp. 679–689, 2019. DOI: 10.1007/s12369-019-00526-x.

Received: 2020-03-05
Revised: 2020-06-30
Accepted: 2020-09-10
Published Online: 2020-11-13

© 2021 Hannah Biermann et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
