Abstract
As social robots gain prominence and are expected to go beyond functional performance, they must also foster people's trust and confidence. Various factors contribute to making the behavior of social robots more trustworthy. This study investigated whether the listening behavior of a social robot can affect its perceived trustworthiness in human–robot interaction. We designed four listening behaviors for a social robot, namely nonactive listening, active listening, active empathic listening, and verbal-only empathic listening, and evaluated the impact of each behavior on participants' likelihood of trusting the robot using a between-subject design. Participants in the four conditions conversed with a robot that simulated one of the listening behaviors, and their general, cognitive, and affective trust toward the robot was measured. The results indicated that active empathic listening behavior gave participants the strongest impression of trustworthiness, specifically in affective trust. Both active listening and active empathic listening were rated higher than nonactive listening in general, affective, and cognitive trust. However, active empathic listening was differentiated from active listening only in terms of affective trust. Regarding the verbal and nonverbal dimensions of listening behaviors, we confirmed that nonverbal behaviors such as nodding, body movement, and eye gaze, along with verbal behaviors, had a significant effect in eliciting higher affective trust in human–robot interaction. Consequently, we concluded that designing social robots with active (empathic) listening behavior can enhance trust perception in human–robot interaction in fields such as education, healthcare, and business.
1 Introduction
Social robots designed to interact with people in a natural and interpersonal manner are becoming popular in several contexts, from domestic uses to public spaces [1]. Thus, the field of human-robot interaction (HRI) has attempted to design social robots that are useful, intuitive, and user-friendly in interacting and collaborating with humans [2]. For such human-robot collaborations to be successful and effective, the ability of social robots to foster people's trust and confidence is a predominant issue. Additionally, people's willingness to accept robot-generated information and the robot's suggestions is strongly associated with their perceived trust in the robot [3, 4]. Accordingly, concern for this issue is increasing with advancements in robot functionality [4].
Trust is a central topic in human communication (HC) [5] and the foundation of interpersonal cooperation [6]. Many interpersonal relationships in marriage, friendship, and management [5], as well as important personality traits and the survival of social groups, depend on the presence of trust [7]. Trust in HRI derives from HC and interpersonal trust. Muir [8] stated that individuals' trust in machines can be affected by factors similar to those in interpersonal trust; therefore, models of trust between humans can be useful and valuable to designers of HRI in general. However, other researchers believe that it is unclear whether findings related to trust in HC can be transferred and applied to HRI [9]. Thus, to design reliable robots with beneficial roles that contribute to human decisions, a deep understanding of the factors and human behaviors that produce trust is necessary, along with examining them in HRI applications across multiple dimensions (i.e., cognitive, affective, physical, and behavioral).
Several factors, from cognitive to emotional, physical, and behavioral, contribute to the building of interpersonal trust. Rempel [10] introduced three basic factors, predictability, dependability, and faith, that promote the growth of interpersonal trust. These factors have also been examined for human-machine trust [11]. Other researchers extended the factors to include affective responses, establishing that trust involves emotion [12] and empathy [13], or presented evidence of the effects of culture [14], gender [15, 16], previous experiences [7], nonverbal cues [17], physical similarities [3], and facial similarity [18,19,20] in interpersonal relations. Moreover, prior research has shown the influence of other nonverbal behaviors such as eye contact, body posture, and smiling on trust [21]. “Listening behavior” is another factor empowering interpersonal trust, and numerous studies have confirmed it. McGarvey [22] stated that trust is an important element of listening and an outcome of good active listening behavior. Ramsey and Sohi [23] found that effective listeners are perceived as more trustworthy. Among the different listening behaviors, active listening (AL) and active empathic listening (AEL) are most closely related to trust. AL behavior builds trust by showing attention and confirming the emotions and experiences of the speaker [24]. When AL is accompanied by empathy, it becomes AEL and acts even more powerfully than AL in enhancing trust, because it is equipped with empathic reactions. Studies in different fields have shown that AL and AEL build trust, for instance, between patients and their psychotherapists or in controlling pessimism in marital conflicts between couples (e.g., [25]).
Some of the mentioned factors that contribute to interpersonal trust have also been examined in human-robot trust (HRT). For instance, studies have been conducted on the effects of gender stereotypes on trust in humanoid robots [26, 27], the impact of robot body shape on the attribution of gender-stereotypical traits and on cognitive and affective trust in robots [28], the influence of facial similarity between humans and social robots on trust [29, 30], and the impact of robot appearance, physical presence [31], matched speech [32], empathetic language, and physical expression [33] in eliciting trust. In the case of listening behavior and trust, limited research has been conducted on simulating AL or AEL behavior in social robots and examining its influence on HRT. AL is emerging as a novel concept in HRI [34]. Some researchers have tried to simulate natural listening in robots [35,36,37], while others have emphasized more active listening behavior in robots [38]. Certain studies have focused only on the verbal part of AL behavior, using backchannels or fillers to conduct attentive listening and produce coherent dialogues during conversations [39, 40] or in large-scale projects such as SimSensei [41]. Others have examined both verbal and nonverbal components of AL to evaluate body movements and utterances simultaneously [42]. However, no research has thoroughly investigated the role of the AL or AEL behavior of social robots in establishing trust.
This study investigated the effect of different listening behaviors of social robots on humans' perceived trust. We conducted an experiment in which participants had a conversation with a robot that exhibited different listening behaviors, including AL and AEL. Different types of trust, including general, cognitive, and affective trust, were measured as variables to capture the participants' perceived trustworthiness of the robot. With a better understanding of the effect of the listening behavior of social robots on trust, we can design more trustworthy, cooperative, and friendly robots and effectively enhance HRI.
2 Hypotheses Development
2.1 AL and AEL Behavior and Trust
As mentioned in the introduction, listening behavior is a powerful factor in interpersonal trust. Several modes of listening have been studied in the literature (e.g., [43,44,45]), and we recognized two notable concepts, AL and AEL behavior, in relation to interpersonal trust. AL goes back to Thomas Gordon [25] and Carl Rogers [46]. AL is a form of carefully and attentively listening and responding to achieve a deeper understanding of the speaker's message and context [44, 47]. Although active listeners try to be attentive to verbal and nonverbal cues, they sometimes appear mechanical and fail to project emotions to the speaker [48]. Therefore, a listener sometimes acts with empathic tendencies to make the functional components of AL more emotional and insightful [49], resulting in AEL. AEL was originally defined in the context of product sales as a form of listening practiced by salespeople in which conventional AL is combined with empathy to achieve a superior and more effective form of listening ( [50], p. 162). It is common in the counseling, therapy, and marketing literature, helping practitioners better understand their clients and customers.
Regarding the relationship between AL behavior and trust, evidence shows the effectiveness of AL on trust in HC and its impact on being perceived as friendly [51] or socially attractive [52]. Nugent and Halvorson [53] stated that AL behavior builds trustworthy relationships with clients during therapy. Fassaert et al. [54] showed that good listeners are more liked and trusted, and Lasky [55] established AL behavior as an important first step in communication for developing trust. It is widely recognized that trust plays a vital role in seller-buyer relations; Ramsey and Sohi [23] showed that salespersons with better listening behavior are considered more trustworthy. Additionally, AEL behavior is positively related to trust in salespersons in marketing, and the findings support the notion that salespeople with higher levels of AEL behavior have higher quality relationships and are regarded as more trustworthy [56].
AL and AEL behavior for social robots are both new concepts in HRT, and their effect on trust has not been previously investigated and confirmed. Thus, for the first hypotheses, to determine whether these listening behaviors can contribute to enhancing HRT, we compared them with “nonactive, nonempathic listening behavior” (NAL) as a control condition, that is, “just listening” without showing attention or empathy to the speaker. We considered the differences of AL and AEL behavior in fostering HRT, respectively. It should also be emphasized that knowing the impact of AL behavior on trust individually is important, as sometimes we may rely only on AL behavior because of limitations in producing empathic responses between humans and robots. Therefore, we proposed the first hypotheses concerning the impact of AL and AEL behavior of social robots on trust.
H1a
AL behavior of social robots results in more (general) trust by the user toward the robot than NAL behavior.
H1b
AEL behavior of social robots results in more (general) trust by the user toward the robot than NAL behavior.
Although some authors have used the terms AL and AEL synonymously and interchangeably [53, 57], AEL is considered superior to AL in selling [48], as it is empowered with empathy. Comer and Drollinger [48] noted that, as empathy increases in a salesperson's behavior, the level of listening increases. Aggarwal et al. also established a strong positive correlation between a salesperson's empathy and listening behavior and trust and satisfaction. Therefore, based on the aforementioned arguments, the next hypothesis compares the AL and AEL behavior of social robots to measure how influential empathy in the listening behavior of social robots can be in producing higher trust perception.
H1c
AEL behavior of social robots results in more (general) trust by the user toward the robot than AL behavior.
2.2 Listening Behaviors and Types of Trust
There are different categories and types of trust in HC (e.g., [58,59,60,61]). Each type of trust relates to a belief about a specific configuration of trust-warranting properties [62]. One of the most frequently used categorizations in the literature is affective and cognitive trust [63, 64]. According to this classification, trust can be based on rational decision-making [10, 65] or on an emotional, affective foundation [10]. In cognitive trust, “we cognitively choose whom to trust in which respects and under which circumstances, and we base the choice on what we take to be ’good reasons’, constituting evidence of trustworthiness” ( [64], p. 970). McAllister [63] defined knowledge and good reasons as the basis for trust decisions. Cognitive trust is performance-based [66], results from accumulated knowledge [64, 67], and warrants trusting the trustee with a certain level of confidence [67]. Affective trust, which is complementary to its cognitive counterpart, consists of the emotional bonds between individuals in the relationship [64] through which they reciprocally express care and concern [63]. Affective trust relies on a partner's emotions, and it may go beyond the available knowledge [67]. Regarding the relation of cognitive and affective trust, cognitive trust is considered one of the antecedents of affective trust in some studies (e.g., [64, 67]), and emotions, on the other hand, influence perception and cognitive evaluations, even after the emotions themselves have dissipated [14, 68]. Therefore, cognitive and affective trust are intertwined and cannot be considered in isolation.
Considering the definitions of AL and AEL, it was mentioned that AEL behavior is characterized by the inclusion of an empathetic and emotional overlay. The empathic component of AEL constructs an emotional link between individuals; Floyd [69] argued that empathic listening operates as a form of indirect nonverbal affection and conveys a message of care, love, and tenderness for the partner. AL behavior, in contrast, attempts to deeply understand the sender's point of view by confirming, rationalizing, or seeking more details [44]. Thus, considering the relationship between affective trust and emotional cues, as well as the potential of AEL behavior to convey emotions and empathy, we proposed the next hypothesis.
H2a
Affective trust is significantly higher when social robots exhibit AEL behavior than AL behavior.
It was discussed that the relation between cognitive and affective trust is reciprocal and that they are highly interdependent. Additionally, according to modern psychology, affection involves several basic cognitive functions and appears to be necessary for normal conscious experience; it influences, modulates, and mediates basic cognitive processes [68]. Therefore, considering that AL and AEL behavior include similar cognitive behaviors, and given the substantial influence of affection on cognitive processes in humans, we expected that AEL behavior would be more influential in eliciting cognitive trust as well. Based on the above arguments, the next hypothesis was derived.
H2b
Cognitive trust is significantly higher when social robots exhibit AEL behavior than AL behavior.
2.3 Verbal and Nonverbal Dimensions of Listening Behaviors and Trust
Listening behaviors have two components, which provide verbal and nonverbal feedback to the speaker [48, 70]. Different aural techniques, such as paraphrasing, restating a version of the speaker's message, and asking clarifying questions, and nonverbal messages, such as nodding while the speaker talks, proper posture and body positioning, facial expressions, and eye contact, as well as summarizing, paying attention, or encouraging and balancing, are used to fully comprehend the speaker [44, 47, 48, 52, 71]. Table 1 summarizes some examples of verbal and nonverbal components of listening behaviors.
Some researchers have claimed that, regardless of the listener's specific verbal responses, showing concern and attention to a person produces positive perceptions [72]. Furthermore, findings have indicated that 55% of the total impact of a message is associated with nonverbal aspects and only 7% is due to verbal segments [73]. Specifically, many studies have confirmed the connection between nonverbal behaviors and effective empathic listening [74]. Weger et al. [52] stated that listeners convey more care and concern through nonverbal listening behaviors than through specific verbal behaviors such as paraphrasing, questioning, giving advice, or reflecting the emotional content of messages. Additionally, Floyd [69] argued that listeners who use nonverbal behaviors convey empathy and support more effectively than those who do not. In relation to nonverbal behavior and trust, it was also confirmed that, although the verbal messages of physicians influence patients' interpersonal trust, nonverbal behavior is likely to have a more crucial influence on trust [75]. Thus, owing to the empathic effect of nonverbal behaviors, their importance in enhancing trust is particularly dominant in AEL behavior.
Several studies have considered the influence of the nonverbal behaviors of social robots in different HRI fields [29]. Although the inclusion of nonverbal behaviors is confirmed as essential, it is not clear how effectively the nonverbal component of the AEL behavior of social robots contributes to HRT. The importance of this issue increases when considering the limitations of robots' nonverbal behaviors, as most robots lack facial expressions, an important source of conveying emotions. Therefore, for the third set of hypotheses, we compared “AEL behavior of social robots without nonverbal components” (AELVO = AEL-verbal only) with AEL behavior, which includes both verbal and nonverbal components, to assess the power of the nonverbal behaviors of the AEL behavior of social robots in enhancing HRT. Considering the abovementioned aspects, the following hypotheses were developed.
H3a
Trust is significantly higher when social robots exhibit AEL behavior than AELVO behavior.
H3b
Affective trust is significantly higher when social robots exhibit AEL behavior than AELVO behavior.
H3c
Cognitive trust is significantly higher when social robots exhibit AEL behavior than AELVO behavior.
3 Method
3.1 Overview
Based on the derived hypotheses, we defined four listening behaviors for the robot. For hypothesis sets 1 and 2, three listening behaviors were considered: (1) NAL behavior (nonactive, nonempathic listening behavior), (2) AL behavior (active, nonempathic listening behavior), and (3) AEL behavior (active, empathic listening behavior). For the third set of hypotheses, we considered (4) AELVO behavior (AEL behavior, verbal only), compared with AEL behavior (both verbal and nonverbal). Therefore, we developed four different listening behaviors for the robot. Table 2 lists the components of each listening behavior, and the design of each behavior is explained in more detail in Sect. 3.6. To test the hypotheses, a between-subject experiment with four conditions was designed. The four conditions were defined according to the listening behaviors of the robot.
NAL Condition: Participants talk with the robot acting with NAL behavior.
AL Condition: Participants talk with the robot acting with AL behavior.
AEL Condition: Participants talk with the robot acting with AEL behavior.
AELVO Condition: Participants talk with the robot acting with AELVO behavior.
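The four conditions differ only in which listening components are enabled. A minimal sketch of that mapping in Python; the flag names and component labels are our own illustration, not the authors' implementation:

```python
# Hypothetical encoding of the four conditions; the flag names and
# component labels are illustrative, not from the authors' system.
CONDITIONS = {
    "NAL":   {"active": False, "empathic": False, "nonverbal": True},
    "AL":    {"active": True,  "empathic": False, "nonverbal": True},
    "AEL":   {"active": True,  "empathic": True,  "nonverbal": True},
    "AELVO": {"active": True,  "empathic": True,  "nonverbal": False},
}

def components(condition):
    """List the listening components enabled for a condition."""
    flags = CONDITIONS[condition]
    parts = []
    if flags["active"]:
        parts += ["backchannels", "paraphrasing", "summarizing", "asking"]
    if flags["empathic"]:
        parts.append("empathic statements")
    if flags["nonverbal"]:
        # NAL shows averted gaze; active conditions show constant gaze,
        # nodding, and synchronized body movements.
        parts += (["constant gaze", "nodding", "body movements"]
                  if flags["active"] else ["averted gaze"])
    return parts
```

For example, `components("AELVO")` yields only the verbal techniques, matching the verbal-only manipulation described in Sect. 3.6.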
The interaction between the robot and participants consisted of a conversation and a questionnaire-based evaluation. The idea was to simulate a dyadic conversation between participants and the robot, in which the robot showed different listening behaviors. The “Wizard-of-Oz” (WOZ) methodology was employed in the experiment, through which participants believed that they were interacting directly with the robot via a natural language interface; in reality, however, they were communicating with an operator. Considering the complexity of natural dialogue, the WOZ technique gives subjects more freedom of expression or constrains them in more systematic ways [79].
3.2 Participants
A total of 120 international students, aged between 19 and 47 years (male: 54, female: 66; Mage = 25.71, SDage = 4.28), from Tokyo Institute of Technology in Japan participated in the experiment. The participants were randomly assigned to the four experimental conditions, with thirty participants allocated to each condition. We attempted to counterbalance gender between conditions as much as possible, although this was not fully achievable because participant availability could not be predicted in advance. Table 3 lists the number and gender distribution for each of the four conditions.
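The balanced random assignment described above can be sketched as follows; the function name and the shuffle-based scheme are our own illustration of one standard way to obtain equal group sizes, not the authors' actual procedure:

```python
import random

CONDITION_NAMES = ("NAL", "AL", "AEL", "AELVO")

def assign_conditions(n_participants=120, conditions=CONDITION_NAMES, seed=None):
    """Randomly assign participants to conditions with equal group sizes
    (30 per condition for 120 participants and 4 conditions)."""
    per_group = n_participants // len(conditions)
    slots = list(conditions) * per_group  # 30 copies of each condition
    random.Random(seed).shuffle(slots)    # random order, balanced counts
    return slots                          # slots[i] = condition of participant i
```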
Because the experiment was conducted in English, a high level of fluency was required; thus, the participants were asked about their level of English proficiency (native speaker: n = 5, non-native speaker: n = 115; fluent professional = 55.8%, working or studying in English = 44.2%). Thirty-five participants (29.2%) had seen the robot previously. Among the participants who knew the robot, only a few (n = 7, 5.8%) had prior interaction with it, such as talking with the robot at exhibitions or attending other experiments. All participants submitted a written consent form and were informed of the experimental procedures and ethical concerns prior to the experiment.
3.3 Equipment
In this study, we used the humanoid robot NAO, originally developed by Aldebaran Robotics and now by SoftBank Robotics. NAO is a small (58 cm tall), programmable robot that is popular in education and research. For this study, we required a robot that could talk and move; considering NAO's abilities and ease of programming, it was selected for the experiment.
Four listening behaviors consisting of verbal and nonverbal components were designed for NAO. During the conversation, NAO was controlled by an experimenter using the WOZ method through an interface developed in HTML. However, a short greeting and conversation were programmed to be executed autonomously prior to the main conversation between the robot and participants, so that participants would perceive the robot as intelligent and autonomous. NAO communicated verbally using its default text-to-speech settings. Body movements involving the head, arms, eyes, and other parts were designed and developed using Choregraphe 2.1.4, based on the behaviors needed for each listening behavior. These movements were then attached to specific texts for each listening behavior. NAO can be operated in either a sitting or a standing posture; the latter was chosen in this study owing to the size of the robot and the greater variety of behaviors possible while standing. NAO was placed on a table in front of the participant to create a nearly parallel viewpoint between the robot and the participant.
3.4 Common Experimental Setup
The experiment was performed in a room separated into three parts: (1) where the experimenter controlled NAO; (2) where participants answered questionnaires before and after the conversation; and (3) where NAO was placed and the conversation was held. Figure 1 shows the experimental setup. Sitting behind a partition, the wizard team could observe and hear the participants interacting with NAO and control it based on what participants asked or answered. The experiment was recorded using a camera placed at the side of the robot. NAO was positioned within the user's personal space (between 0.3 and 1 m), according to Rossi et al. [80].
3.5 Procedure
Before initiating the experimental session, participants were given documents explaining the experiment to review freely and to decide voluntarily whether they wished to participate. The experimenter then explained the procedure and highlighted important points, such as the video recording of the session, to ensure that everything was clear to the participant prior to signing the consent form. None of the participants opted out of the experiment. Each participant was randomly assigned to one of the four conditions corresponding to NAO's different listening behavior styles.
Participants were first asked to fill out their demographic information (i.e., gender, age, nationality, English proficiency, and prior interactions with the NAO robot) and a questionnaire about their attitudes toward robots (note: analyses considering participants' attitudes toward robots are outside the scope of this study). Participants learned about NAO through a picture in the explanation form. After completing the primary questionnaires, the experimenter introduced NAO to the participant, and NAO briefly greeted the participant autonomously to create a first impression before the main conversation. Before starting the main conversation with NAO, participants rated their trust in the robot on a 56-item pre-trust questionnaire, which included 40 items on robot-human trust, 9 items on cognitive trust, and 7 items on affective trust. Thereafter, they were asked to sit in front of NAO to begin the conversation. The conversation was about living experiences in Japan, a topic familiar to all international students in Japan. The topic was chosen from various candidates based on familiarity, popularity, and the security of information regarding potential participants.
Before starting the conversation, the experimenter asked NAO to begin, and NAO replied to the experimenter autonomously. Thereafter, NAO was left alone with the participant. The first short greeting and brief talk before the conversation were programmed to run autonomously to ensure that participants considered NAO an intelligent and autonomous robot. The conversation lasted approximately 8–12 min. After completing the conversation as planned, participants were asked to rate their trust in the robot on the post-trust questionnaires. The entire experiment took approximately 30–40 min on average per participant. Figure 2 illustrates the experimental procedure.
3.6 Manipulation of the Robot
As described in Sect. 3.1, we designed four listening behaviors for NAO. The listening behaviors included active or empathic behavior together with verbal and nonverbal components. Therefore, we needed to know how active and empathic listening behaviors differ in their verbal and nonverbal components. To this end, we first describe the types of verbal and nonverbal listening behaviors that were applicable to the robot. Table 4 summarizes the four experimental conditions in terms of listening behaviors.
3.6.1 Nonverbal Components of Listening Behaviors
Several nonverbal behaviors contribute to conveying messages and empathy in listening behaviors (refer to Table 1). However, owing to the limitations of NAO, we relied on certain nonverbal behaviors: eye gaze, head movements, and body movements. Although facial expression is a universal means of communicating and expressing emotions, NAO has an inelastic, rigid face, so displaying facial expressions was infeasible.
Eye Gaze
Eye contact is a powerful cue for effective listening behavior, through which a large part of the message is transferred. Different types of eye contact have been characterized depending on duration, direction, frequency, and timing. Too little or averted gaze is associated with ignoring, lying, and not listening, and evokes negative feelings in conversations [81]. Stanton and Stevens [3] found that situational gaze (the robot facing people and making eye contact to show disagreement) acted like averted gaze and was not trustworthy in HRI. Other research has shown that individuals hold a constant gaze on their partners, with few averted gazes, during active listening [81]. Furthermore, constant gaze was associated with increased positive valence in the positive and neutral conditions and with increased positive empathy ratings [81]. Therefore, averted and situational gaze are associated with NAL behavior, and constant gaze contributes to AL and AEL behavior. Accordingly, we differentiated the four experimental conditions in terms of eye contact as follows:
NAL condition:
Averted gaze = NAO did not look at the participant while he/she was talking, and turned its head left, right, up, or down.
Situational gaze = NAO gazed at the participant when expressing disagreement or a negative statement.
AL condition:
Constant gaze = NAO maintained constant, direct eye gaze with the participant during the conversation.
AEL condition:
Constant gaze = NAO maintained constant, direct eye gaze with the participant during the conversation.
AELVO condition:
Constant gaze = NAO maintained constant, direct eye gaze with the participant during the conversation.
Head Movements
Head movements, mainly nodding and shaking, which signal “yes” and “no” answers or show interest, confirmation, and attention, are a major mode of effective listening [78]. Turning the head in different directions to avert the eyes during conversation signals inattention [82]. Studies have found that head movements encode emotions during conversation: for example, a downward head tilt shows sadness, head shaking shows regret or sadness, and upward nodding shows happiness or surprise [78]. Hence, different head movements were designed for NAO according to McClave [82] and their relation to the listening behaviors. Head movements were also synchronized with suitable verbal expressions.
NAL condition:
NAO turned its head right and left, up and down, to show inattention.
AL condition:
NAO nodded when saying “yes,” “I see,” and similar backchannels, and while participants talked, to show attention. NAO shook its head while saying “no” or similar disagreement statements.
AEL condition:
In addition to the AL condition head movements, NAO turned its head down and shook its lowered head to show sadness, and nodded repeatedly with an upward tilt to show happiness.
AELVO condition: No specific head movements.
Body Movements
Apart from the head, we could move other parts of NAO, such as the arms, hands, fingers, and knees, to obtain an extensive range of movements. Body movements and postures can represent certain emotions, and they have been widely researched for human-like robots. We modeled the body movements of NAO on the works of McColl and Nejat [83] and TimmeschGill et al. [84], designing movements such as opening and closing the arms, crossed arms, opening palms, and pointing a finger. Some movements were synchronized with utterances, whereas others were free movements in reaction to participants, showing the activeness and emotional state of the robot.
NAL condition:
NAO pretended to look at a watch or played with its hands to show inattention.
AL condition:
Appropriate movements accompanied the verbal components: for example, NAO opened its arm and hand when asking a question and pointed its fingers toward itself when expressing its opinions.
AEL condition:
In addition to the AL condition movements, NAO showed additional body movements accompanying emotional statements: for example, NAO opened both arms or moved its arms up and down to show happiness and surprise.
AELVO condition: No specific body movements.
3.6.2 Verbal Components of Listening Behaviors
The verbal components of the listening behaviors in this study included two parts: (1) the main content of the conversation, including questions, answers, comments, and attentive and emotional statements, and (2) specific verbal techniques to show attention or emotions. The main verbal techniques used were as follows:
Backchannels
Active listeners give more feedback to speakers, which coordinates the speaker's narrative with what the listener needs to know. A common form of feedback in such settings is backchannels [77], which are short acknowledgment utterances, such as “uh-huh” or “yeah,” or nonverbal gestures, such as head nods or brief smiles [40]. Using backchannels encourages speakers to continue, but this technique should be followed by asking further questions [38]. We used short acknowledgment phrases such as “yeah,” “OK,” “I see,” “Really?”, “oh,” and “good” for the AL, AEL, and AELVO conditions.
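As a toy illustration of how an operator interface might surface these phrases, the selector below picks one at random; the phrase list comes from the study, but the function and its selection logic are our own sketch, since the wizard chose utterances manually during the conversation:

```python
import random

# Backchannel phrases used in the AL, AEL, and AELVO conditions (from
# the study); the random selection below is a hypothetical illustration.
BACKCHANNELS = ["yeah", "OK", "I see", "Really?", "oh", "good"]

def pick_backchannel(rng=None):
    """Return one backchannel phrase, optionally from a seeded RNG."""
    return (rng or random).choice(BACKCHANNELS)
```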
Paraphrasing
Paraphrasing shortens and clarifies the speaker's statements (both content and feelings) by restating the information received in another form [38, 44, 52, 85]. Paraphrasing was used for the AL, AEL, and AELVO conditions. An example from the AL condition is as follows:
NAO: What did you find most difficult in Japan?
Participant: Maybe Japanese Language ….
NAO: Yeah, Japanese language specifically kanjis are difficult.
Summarizing
Summarization is a concise overview of several statements, the content, or even the feelings of a conversation [85], bringing together important ideas and establishing a basis for further discussion [44]. Summarization was used for the AL, AEL, and AELVO conditions. An example from the AEL condition is as follows:
NAO: Have you travelled in Japan?
Participant: Yeah, I went to Kyoto last year. I wore traditional clothing of Japan and ate many foods.
NAO: So, it seems you enjoyed a lot.
Asking
Asking clarifying questions, which can be open or closed, is part of good listening [38]. Most active listening treatments suggest that the listener ask questions to encourage the speaker to elaborate on his or her beliefs or feelings [52]. We prepared questions for NAO, such as "What did you do then?" for the AL condition or "Did you like it?" for the AEL condition. Other verbal techniques, such as encouraging statements, demonstrating concern, and sharing similar emotions, were also used to design the verbal behaviors of NAO.
The conversation between NAO and the participants consisted of a greeting followed by eight questions: (1) "Why did you decide to come to Japan for studying?", (2) "Have you travelled in Japan?", (3) "Do you speak Japanese or would you like to learn?", (4) "What did you find most interesting about Japan?", (5) "Do you miss your family or friends here?", (6) "What did you find most difficult in Japan?", (7) "How did you find Japanese people and culture different from your country?", and (8) "Any plans for the near future in Japan?". For each question, appropriate verbal statements, along with the above-mentioned nonverbal behaviors, were designed according to the four experimental conditions.
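The condition-dependent pairing of verbal statements and nonverbal behaviors described above can be pictured as a lookup from (question, condition) to a scripted response. The sketch below is a simplified, hypothetical structure: the phrases, keys, and gesture names are illustrative, not the study's actual dialogue script.

```python
# Hypothetical sketch of a condition-dependent response script.
# Condition names follow the paper (NAL, AL, AEL, AELVO); the phrases
# and gesture names are illustrative, not the study's actual design.
SCRIPT = {
    "Have you travelled in Japan?": {
        "NAL":   {"verbal": "OK.", "nonverbal": "look_at_watch"},
        "AL":    {"verbal": "I see. Where did you go?", "nonverbal": "open_arm"},
        "AEL":   {"verbal": "So, it seems you enjoyed a lot!", "nonverbal": "arms_up"},
        "AELVO": {"verbal": "So, it seems you enjoyed a lot!", "nonverbal": None},
    },
}

def respond(question, condition):
    """Return the (verbal, nonverbal) response scripted for a condition."""
    entry = SCRIPT[question][condition]
    return entry["verbal"], entry["nonverbal"]

print(respond("Have you travelled in Japan?", "AEL"))
```

Keeping the verbal component identical between AEL and AELVO while setting the nonverbal slot to `None` mirrors how the study isolates the contribution of nonverbal behavior.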
3.7 Measures
3.7.1 Attitude Towards Robots
To measure the participants’ attitude toward robots, we used a 22-item questionnaire adopted from the multidimensional robot attitude scale [86]. The multidimensional robot attitude scale assesses people’s attitudes toward robots in a comprehensive way using 12 dimensions and provides a multifaceted understanding of attitudes toward this technology. Four sub-dimensions, familiarity (five items), interest (seven items), negative attitude (five items), and utility (five items), were adopted for this study. The participants were asked to respond on a 7-point Likert scale (1= “strongly disagree” to 7= “strongly agree”). The averages of the responses for the corresponding items per sub-dimension yielded the scores for each sub-dimension.
3.7.2 Trust
There is a scarcity of validated measures to evaluate HRT, and none of the existing measures covers all facets of HRT. The measurement tools currently available for HRT are heavily skewed toward performance trust and do not consider emotional or moral aspects. Schaefer [87] offered one of the more comprehensive trust measures, which is widely used to evaluate HRT. Therefore, to assess participants' general trust in the robot, this scale was adopted. Schaefer's human-robot trust scale, which consists of 40 items, measures a human's general trust in robots without identifying any specific type of trust. Participants rated their trust in the robot in the range of no trust (0%) to complete trust (100%) for each of the 40 items. The sum of the responses yielded the score on the human-robot trust scale. This score was used as perceived trust to test H1a, H1b, H1c, and H3a.
Schaefer's trust measure makes no reference to cognitive or affective trust, and no other measures evaluate the cognitive and affective dimensions of HRT. Thus, we adapted a measure from interpersonal trust research. The measure proposed by Johnson and Grayson [67] and McAllister [63] was modified to evaluate cognitive and affective trust; it has also been modified and used in other HRT studies [88]. The descriptions of the original question items, which referred to a person, were modified to refer to a robot. The final scale consisted of 16 items: nine for cognitive trust and seven for affective trust. Participants evaluated their feelings and perceptions of trust using a seven-point Likert scale (1 = "strongly disagree," 7 = "strongly agree"). The averages of the responses for the corresponding items provided the scores for cognitive and affective trust. Hypotheses H2a, H2b, H3b, and H3c were tested using these scores.
4 Results
4.1 Factor Structure of Trusts
To examine the influence of differences in the listening behaviors of social robots on perceived trust, we first conducted exploratory factor analysis to aggregate the 40 items of the general trust scale, the 9 items of the cognitive trust scale, and the 7 items of the affective trust scale into discrete dimensions. First, the factorability of the 40 items was examined. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.908, above the recommended value of 0.6, and Bartlett's test of sphericity was significant (χ2(780) = 5329.844, p < 0.001); therefore, factor analysis was considered suitable. The maximum likelihood method with promax rotation revealed an eight-factor structure. The cumulative contribution of these eight factors was 62.72%, and they were considered to represent the dimensions of people's general trust in robots. Four of the 40 items were eliminated because they did not meet the minimum criterion of a primary factor loading (how much a factor explains a variable) of 0.35 or above.
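For readers unfamiliar with the factorability check above, Bartlett's test of sphericity asks whether the item correlation matrix differs from an identity matrix (if it does not, there is no shared variance to factor). A minimal sketch of the test, computed from first principles on synthetic item responses (not the study's data), is:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi2 = -[(n-1) - (2p+5)/6] * ln|R|,
    where R is the p x p item correlation matrix and df = p(p-1)/2."""
    n, p = data.shape                       # observations, variables
    corr = np.corrcoef(data, rowvar=False)  # item correlation matrix
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    p_value = stats.chi2.sf(chi2, df)
    return chi2, df, p_value

# Synthetic correlated "items": one latent factor plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 6))
chi2, df, p = bartlett_sphericity(items)
print(f"chi2({df}) = {chi2:.2f}, p = {p:.4g}")
```

A significant result, as reported in the paper (χ2(780) = 5329.844, p < 0.001 for the 40-item scale), licenses proceeding with the factor extraction.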
Variables with high loadings on the first factor contained responses to items such as “act as part of the team,” “work best with a team,” “be considered part of the team,” “be a good teammate,” “protect people,” and “be reliable.” These items represented people’s expectations for robots to work in collaboration with people and work as a team; thus, they were labeled as “team working.”
The second factor contained items such as "be lifelike," "possess adequate decision-making capability," "be conscious," "know the difference between friend and foe," "perform a task better than a novice human user" and "provide feedback," as well as "be autonomous." The variables of this factor are mostly related to how much consciousness the robot has and whether it can act intelligently. Thus, the factor was labeled as "Intelligence."
The third factor included responses to items such as “malfunction (reversed),” “have errors (reversed),” “require frequent maintenance (reversed),” “be unresponsive (reversed),” “be incompetent (reversed),” and “be led astray by unexpected changes in the environment.” This factor represented people’s expectations that robots should work free of troubles; thus, it was labeled as “Trouble-free.”
Items loaded for the fourth factor included “be pleasant,” “be friendly,” and “be supportive,” and it represented the measure of positive feelings and feedback the users received from the robot. Thus, the factor was labeled as “Likeability.”
Items loaded for the fifth factor included “act consistently,” “openly communicate,” “clearly communicate,” “function successfully,” and “communicate with people.” These items represent people’s expectations regarding the robot’s appropriate function and communication; thus, it was labeled as “Function.”
The sixth factor contained items such as “tell the truth,” “warn people of potential risks in the environment,” “keep classified information secure,” and “be responsible.” These items represent the extent to which the robot is safe in maintaining information and exhibits reliable behavior. Thus, it was labeled as “Reliability.”
Variables that had high loadings on the seventh factor included “perform exactly as instructed,” “follow directions,” and “be predictable”, which were about instructing the robot. Thus, this factor was labeled as “Control.”
The eighth factor contained two items, “make sensible decisions” and “possess adequate decision-making capability”, which represented people’s expectations of robots making proper decisions. Thus, the factor was labeled as “Decision making.”
The internal reliability of the eight factors was tested by calculating the Cronbach’s alpha indices. Alpha values for the eight factors were 0.91 (team working), 0.79 (intelligence), 0.85 (trouble-free), 0.82 (likeability), 0.81 (function), 0.67 (reliability), 0.83 (control), and 0.82 (decision-making), which indicated high reliability.
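The internal-reliability index used above can be computed directly from an item-response matrix. The sketch below implements the standard Cronbach's alpha formula on synthetic data (the scores are illustrative, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Items driven by one latent trait with small noise should yield alpha near 1
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
consistent = np.repeat(latent, 5, axis=1) + 0.1 * rng.normal(size=(100, 5))
print(round(cronbach_alpha(consistent), 2))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the benchmark the factor-level alphas reported above are judged against.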
In addition, factor analysis was conducted for responses on the cognitive trust scale using the maximum likelihood method with promax rotation. A two-factor structure was revealed, with a cumulative contribution of 51.99%. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.810, above the recommended value of 0.6, and Bartlett’s test of sphericity was significant (χ2(36) = 591.17, p < 0.001).
These two factors represented the dimensions of people's cognitive trust in social robots. Item 16, "If people knew more about this robot, they would be more concerned and monitor its performance more closely," was eliminated because it did not meet the minimum criterion of a primary factor loading of 0.35 or above. No items had a cross-loading of 0.35 or above.
The first factor contained responses to items of “other people who must interact with NAO consider it to be trustworthy,” “most people, including those who are not familiar with NAO, trust and respect NAO,” “I can rely on NAO to undertake a thorough analysis of the situation before advising me,” “this robot approaches its duty with professionalism and dedication,” and “when interacting with NAO, I have no reservation about acting on its advice.” This factor represented people’s understanding and perception of the robot. Thus, this factor was labeled as “Social reputation.”
The second factor contained responses to items such as “I have to be cautious about acting on the advice of NAO because its opinions are questionable (reversed),” “I cannot confidently depend on NAO because it may complicate my affairs by careless behavior (reversed),” and “when interacting with NAO, I have good reason to doubt its competence (reversed).” This factor represented personal trust in the robot and was labeled as “Personal credit.” Cronbach’s alpha values for these two factors were 0.87 (social reputation) and 0.80 (personal credit), which indicated good internal reliability.
Finally, factor analysis of the responses to the affective trust scale with the maximum likelihood method and promax rotation revealed a single-factor structure in which all seven items met the minimum criterion of a primary factor loading of 0.35 or above; thus, no item was eliminated. The single factor's cumulative contribution was 61.79%, and Cronbach's alpha for the seven items was 0.89.
Hence, for the remaining analyses, the eight variables resulting from the factor analysis, together with the sum of the 40 items as an overall general trust score, were used to evaluate participants' general trust in the robot. The two variables resulting from the factor analysis of the cognitive trust scale, together with the average of the responses to the nine items (overall cognitive trust), were used to measure cognitive trust toward the robot. The average of the responses to the seven items was used to measure affective trust toward the robot.
4.2 Effect of the Robot’s Listening Behaviors on Trust
To investigate the effect of the robot's different listening behaviors on participants' perceived trust, changes in the trust scores before and after the conversation were calculated and compared across the four experimental conditions. Table 5 shows the means and standard deviations (SDs) of these changes in all thirteen trust scores for the four experimental conditions. The AEL condition had the highest scores among all conditions on all trust measures. The NAL condition had the lowest scores, and the mean score of the AL condition was higher than that of the NAL condition and moderately lower than that of the AELVO condition on most trust measures. Therefore, AEL behavior was the behavior that most strongly led participants to judge the robot as trustworthy.
To test the hypotheses, we conducted a one-way multivariate analysis of variance (MANOVA) with the four experimental conditions (NAL, AL, AEL, AELVO) as the independent variable and the thirteen trust scores derived from the factor analysis as dependent variables: teamworking, intelligence, trouble-free, likeability, function, reliability, control, decision-making, and overall general trust; social reputation, personal credit, and overall cognitive trust; and affective trust. The results indicated a significant difference in trust scores among the robot's listening behaviors [F(39, 305) = 2.60, p < 0.001, Wilks' Λ = 0.42, η2 = 0.246]. The multivariate effect size was estimated at 0.246, which implies that almost 25% of the variance in the canonically derived dependent variable was accounted for by listening behaviors. The next step was to determine the source of the differences.
The MANOVA was followed by an analysis of variance (ANOVA) on each of the thirteen dependent variables; individual mean differences across conditions were compared with Bonferroni correction, and effect sizes were measured with Cohen's d. As shown in Table 6, all of the ANOVA results, except for decision-making, were statistically significant, with effect sizes (partial η2) ranging from 0.09 (personal credit), a medium effect, to 0.33 (overall general trust) and 0.30 (affective trust), both large effects.
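The post-hoc step used throughout the next sections, pairwise comparisons with a Bonferroni-adjusted p-value and Cohen's d as the effect size, can be sketched as follows. The group means, SDs, and sample size below are illustrative placeholders, not the study's data:

```python
import itertools
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Illustrative trust-change scores for the four conditions (synthetic data)
rng = np.random.default_rng(2)
groups = {
    "NAL":   rng.normal(-43, 60, 20),
    "AL":    rng.normal( 17, 42, 20),
    "AEL":   rng.normal( 49, 45, 20),
    "AELVO": rng.normal( 26, 38, 20),
}

pairs = list(itertools.combinations(groups, 2))
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni: multiply p by number of comparisons
    d = cohens_d(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: p_adj = {p_adj:.3f}, d = {d:.2f}")
```

Bonferroni correction simply multiplies each raw p-value by the number of comparisons (capped at 1.0), which is why several of the reported post-hoc p-values in Tables 7 and 8 appear as exactly 1.00.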
4.3 Effect of NAL, AL and AEL Behavior on General Trust
The first set of hypotheses explored the differences among NAL, AL, and AEL behavior in general trust. Table 7 indicates where the significant group differences for the general trust scores reside. Bonferroni post-hoc test results revealed that the overall general trust score differed significantly between NAL (M = -43.10, SD = 64.67) and AL behavior (M = 17.33, SD = 42.31, p < 0.001), and between NAL and AEL behavior (M = 48.80, SD = 45.08, p < 0.001). However, the difference in overall general trust between AEL and AL behavior was not significant (p = 0.108). Therefore, H1a and H1b were fully supported, and H1c was not supported. Very large effect sizes were observed for both the AL–NAL and AEL–NAL comparisons. Figure 3, panel A, shows the means and SDs of the changes in the overall general trust score before and after the conversation in the NAL, AL, and AEL conditions.
Additionally, Bonferroni post-hoc analysis revealed significant differences between the robot's NAL and AL behavior for six general trust sub-dimensions, with medium to very large effect sizes: teamworking (p = 0.006), intelligence (p = 0.009), trouble-free (p < 0.001), likeability (p < 0.001), function (p < 0.001), and control (p = 0.003). Although participants evaluated AL behavior higher than NAL behavior on the reliability and decision-making factors, the differences were not statistically significant for reliability (p = 0.172) or decision-making (p = 1.00).
Regarding the differences in the general trust factors between NAL and AEL behavior, the Bonferroni post-hoc test indicated significant differences in seven sub-dimensions, with very large effect sizes for most factors: teamworking (p < 0.001), intelligence (p < 0.001), trouble-free (p < 0.001), likeability (p < 0.001), function (p < 0.001), reliability (p < 0.001), and control (p = 0.001). However, the results did not show a significant difference for decision-making (p = 0.088) between NAL and AEL behavior. Thus, participants perceived the robot's AEL behavior as more acceptable and effective than NAL behavior on most general trust factors.
Finally, the Bonferroni post-hoc evaluation revealed only a marginally significant difference for reliability (p = 0.052) between AEL and AL behavior. The differences in the other sub-dimensions of general trust, including teamworking (p = 0.087), intelligence (p = 0.163), trouble-free (p = 1.00), likeability (p = 0.602), function (p = 1.00), control (p = 1.00), and decision-making (p = 1.00), were not significant, despite higher means for AEL behavior. Figure 3, panels B–I, shows the means and SDs of the changes in the factor scores of the eight sub-dimensions of general trust before and after the conversation in the NAL, AL, and AEL conditions.
To summarize, the robot with AEL or AL behavior was perceived as more trustworthy than the robot with NAL behavior in almost all trust dimensions, demonstrating the potency of AEL and AL behavior in HRT. However, the empathic behaviors of the AEL condition were not influential enough to enhance general trust beyond the AL condition.
4.4 Effect of AL and AEL Behavior on Cognitive and Affective Trust
The second set of hypotheses explored the effect of the robot's AL and AEL behavior on cognitive and affective trust. As shown in Table 7, the Bonferroni post-hoc test indicated a significant difference (p < 0.05) in the affective trust score between AL and AEL behavior, with AEL behavior (M = 1.16, SD = 1.15) scoring discernibly higher than AL behavior (M = 0.63, SD = 1.02, p = 0.011). Therefore, H2a was fully supported. Cohen's d indicated a large effect size for affective trust between the AL and AEL conditions. Figure 4 shows the mean and SD of affective trust for the AL and AEL conditions.
For overall cognitive trust, although the robot's AEL behavior resulted in a higher mean score than AL behavior, post-hoc analysis indicated no significant difference between AEL behavior (M = 0.48, SD = 0.62) and AL behavior (M = 0.12, SD = 0.75, p = 0.383). Factor analysis elicited two factors for cognitive trust, namely social reputation and personal credit. Participants scored AEL behavior (M = 0.42, SD = 0.84) higher than AL behavior (M = 0.14, SD = 1.00) in social reputation, and similarly scored AEL behavior (M = 0.70, SD = 0.81) higher than AL behavior (M = 0.31, SD = 0.79) in personal credit. However, there was no significant difference for social reputation (p = 0.577) or personal credit (p = 1.00) between the AEL and AL conditions. Thus, H2b was not supported. Table 7 shows the cognitive trust results for the AEL and AL conditions. In summary, the finding that the AEL and AL conditions differed only in affective trust revealed that the robot's empathic behaviors were effective for affective trust, but not for general or cognitive trust.
4.5 Effect of AEL and AELVO Behavior on Trust
According to H3a, AEL behavior, which consisted of both verbal and nonverbal components, was expected to result in higher trust than AELVO behavior, which included only verbal components. As shown in Table 8, Bonferroni post-hoc test results indicated no significant difference in the overall general trust score between AEL behavior (M = 48.80, SD = 45.08) and AELVO behavior (M = 25.90, SD = 38.32, p = 0.523), even though AEL behavior scored considerably higher than AELVO behavior. Therefore, H3a was not supported, and nonverbal behaviors did not produce a large difference in participants' general trust perception of the robot. Furthermore, no significant difference was found between AEL and AELVO behavior when the sub-dimensions of general trust were considered individually.
H3b predicted that affective trust would be significantly higher when the robot showed AEL behavior than AELVO behavior. The post-hoc analysis, as shown in Table 8, indicated that participants assessed the robot with AEL behavior (M = 1.63, SD = 1.15) as having higher affective trust than with AELVO behavior (M = 0.69, SD = 1.18, p = 0.020). Thus, the difference between AEL and AELVO behavior in affective trust was significant (p < 0.05), and the nonverbal behaviors in the AEL condition elicited higher affective trust. Figure 5 shows the mean and SD of affective trust for the AEL and AELVO conditions. H3b was therefore verified.
For overall cognitive trust, however, no significant difference was detected between AEL behavior (M = 0.48, SD = 0.62) and AELVO behavior (M = 0.31, SD = 0.55, p = 1.00). Similarly, there was no significant difference in the cognitive trust factors (social reputation and personal credit) between AEL and AELVO behavior. Therefore, the evidence did not statistically support H3c. Table 8 shows the cognitive trust scores for the comparison between the AEL and AELVO conditions.
Further analyses compared the AELVO condition with the NAL and AL conditions to verify the effectiveness of nonverbal behaviors on trust perception in HRI. According to the results shown in Table 8, AELVO behavior differed significantly from NAL behavior in all trust scores, with large effect sizes, except for decision-making. However, there was no significant difference in trust scores between AELVO and AL behavior, even for affective trust. Thus, these results more confidently confirm the impact of the nonverbal behaviors included in AEL behavior on affective trust.
5 Discussion
Designing trustworthy social robots using listening behavioral strategies has not been widely discussed in HRI, and there is insufficient empirical evidence on how social robots' listening behaviors can shape their trustworthiness. Therefore, we investigated the effect of social robots' listening behaviors on the perception of trust. We expected that the AEL behavior of a social robot would elicit higher trust than other types of listening behaviors, and that AL behavior would be more effective in evoking trust than NAL behavior. Moreover, we expected that a social robot combining verbal and nonverbal listening behavior would be perceived as more trustworthy than a robot exhibiting only the verbal components of the same behavior.
5.1 Human-robot Trust and Listening Behaviors
As expected, the AL and AEL behaviors of the social robot were both successful in fostering trust across the domains of general, affective, and cognitive trust. AL behavior was found to raise evaluations of general trust during interaction with the robot compared with NAL behavior (H1a). The change in participants' trust perception before and after the interaction favored the robot with AL behavior over the robot with NAL behavior. This result is consistent with psychological evidence on interpersonal trust [24, 53, 55] and assures us that, to make a robot more trustworthy, we can equip it with AL behavior. Trust construction is based on showing competency to the other party [89], and because AL behavior signals care and attention to the speaker, it leads to trustworthiness. The results confirmed that users act toward the robot as they behave with their fellow humans. AL behavior was also found to affect all general trust factors except decision-making and reliability. For decision-making, the mean scores were in line with our predictions, but the differences were not statistically significant; indeed, decision-making did not differ significantly in any of the comparisons. This could be because decision-making is the process of selecting among alternatives to be acted upon in the future to attain certain goals, and it is always related to place, situation, and time [90]. It requires a problem or goal, with decisions made by weighing evidence against values. The interaction between the robot and the user in the current study was limited to a short conversation that created no circumstances for making critical decisions, which is a possible reason for the ineffectiveness of listening behaviors on the decision-making factor. The reliability factor in this study referred to questions about keeping information safe and warning people of risks.
No secure or personal information was exchanged between the participants and the robot in this study, which explains why no significant difference was found for this factor. Function and trouble-free showed the highest effect sizes in the comparison between AL and NAL behavior (Table 7). As mentioned in Sect. 3.7, the general trust scale developed by Schaefer is mostly performance-based, and these two factors also refer to the more functional attributes of the social robot, which could explain their strong effects on trust perception.
The results also confirmed that the robot's AEL behavior was considerably more trust-evoking during interaction than NAL behavior (H1b). In the field of HC, effective listeners who show empathy and friendship generally project more positive impressions, as they are perceived to be more trustworthy, friendly, or attractive [57]. Emotional sharing and responsiveness, both verbal and nonverbal, advance the formation of interpersonal trust and can even rebuild damaged trust, because emotions guide people's behavioral propensities [91]. AEL behavior, which is characterized by emotional expressions, has been found to improve interpersonal trust, and our results followed the same pattern. Although participants knew that the robot was not alive and was programmed to appear intelligent, they accepted its emotional expressions and believed in its behaviors and reactions. This provides an opportunity for further research on developing trustworthy robots equipped with AEL behavior or other affective behaviors. Furthermore, the robot's AEL behavior was rated higher on most general trust factors except decision-making, as explained above. This result suggests that AEL behavior can be a powerful and successful behavior for social robots in establishing trust, covering the attributes users require for trustworthiness.
However, the comparison of AL and AEL behavior in HRT revealed thought-provoking results. Contrary to our hypothesis, the differences in general trust and cognitive trust between AL and AEL behavior were not significant (H1c), although the results revealed a significant difference with a large effect size for affective trust. The difference between AL and AEL behavior in this study lay in the robot's emotional statements accompanied by nonverbal behaviors. The results suggest that AEL behavior is highly influential in fostering affective trust, whereas general trust, which is mostly performance-based in HRT, was less associated with the robot's emotional behaviors in the AEL condition. In other words, the empathic behaviors of the robot in the AEL condition could not improve general trust beyond AL behavior. This seems logical, as participants associated the robot's empathic behaviors with affective trust rather than general trust. The findings suggest that people distinguish robots' affective behaviors and associate them with emotional assessments rather than performance-based evaluations. Furthermore, there was no statistically significant difference in the general trust factors between AL and AEL behavior, although the mean scores of all these factors were higher for AEL behavior. Emotional experiences enrich the quality of communication, and if AEL behavior is conceived as an expression of affection [69], it can improve diverse relationships considerably. Thus, speakers with more emotional expressions are perceived as more trustworthy than speakers who care for and support the other party only rationally [48, 56, 92]. It is evident that robots are still far from human ways of conveying emotions, which could be the reason why AEL behavior did not surpass AL behavior in all respects.
5.2 Correlation of Listening Behaviors, and Affective, and Cognitive Trust in HRI
For affective trust, as expected in H2a, the robot's AEL behavior received a higher score and was perceived as affectively more trustworthy than the active listener robot. Affective trust is constructed on emotional experiences and feelings between partners, and as emotional connections deepen, trust goes beyond available knowledge and rational judgment [67]. AEL behavior creates an affection exchange between the listener and speaker, as it conveys a message of love, kindness, and care [69]; therefore, it evokes and is positively correlated with affective trust in HC. This study showed that this conviction extends to the HRI field: social robots are able to elicit affective trust if their listening behavior engages in emotional and affectionate behaviors, and users appreciate a robot that demonstrates empathy in a relationship, because empathy is a strong driver of, and directly associated with, trust [13]. Several designers and researchers have endowed robots with anthropomorphic characteristics, such as sociability, passionability, and intelligence [2], to present robots as living entities to users. AEL behavior, as an anthropomorphic behavior, appears effective and believable in social robots; it can help them create more realistic interactions with users and enrich affective trust. More interestingly, the difference between AEL and AL behavior in affective trust was noticeably larger than that in general trust. This finding indicates that the empathic behaviors of social robots are powerful in conveying affective messages and building affective trust.
On the other hand, the study revealed that AEL behavior was not significantly different from AL behavior regarding cognitive trust (H2b). There is a narrow border between emotion and logic, and they are closely associated with each other: although some emotions are generated by rationalization, unconscious thoughts also give rise to emotions [64, 91], and emotions often outweigh logic. Therefore, we supposed that the robot's affective behaviors could result in higher cognitive trust as well. However, the findings did not support this hypothesis; the robot's AEL behavior led only to higher affective trust. The robot's AEL behavior involved the same rational statements as its AL behavior plus affectionate messages and emotional body language, which could explain why the evaluations were similar for cognitive trust. Furthermore, the mean score difference between AEL and AL behavior in affective trust was greater than the difference in cognitive trust. This indicates that AEL and AL behavior were perceived as more similar on the cognitive dimension, whereas users' emotional understanding was discernibly higher for AEL behavior, suggesting that the emotional manipulation of robots can be successful and consequential. As shown in Table 7, AEL and AL behavior were also scored significantly higher than NAL behavior in both cognitive and affective trust, confirming their superiority over NAL behavior.
5.3 Impact of Verbal and Nonverbal Components of Listening Behaviors in HRT
This study also examined the effect of the verbal and nonverbal aspects of social robots' listening behaviors on trust. The results above indicated that the robot's AEL behavior was outstanding in provoking trust. Because AEL behavior consisted of both verbal and nonverbal communication, we sought to determine their respective impacts on trust. Although participants expressed more general trust in the robot whose AEL behavior combined nonverbal reactions with utterances, the difference from verbal-only AEL behavior was not significant (H3a). Similar to the findings for H1a and H1b, as shown in Table 8, AELVO behavior also did not differ from AL behavior in general trust (p = 1.00), but it was evaluated higher than NAL behavior (p < 0.001). In HC, nonverbal behaviors are claimed to be effective means of indicating attention, interest, understanding, satisfaction, and much other social information (see [76]). Nonverbal communication strategies are rich and compelling, and in listening behavior the nonverbal aspects convey a considerable part of the message; many scholars confirm that effective empathic listening incorporates nonverbal immediacy behaviors (e.g., [69]). However, the current results could not confirm the effectiveness of social robots' nonverbal behaviors in HRT, and more research is needed. NAO provided limited nonverbal behaviors, such as head and arm movements, which were insufficient for conveying emotional reactions. Furthermore, some participants became uncomfortable and distressed at the robot's movements, owing to its mechanical sounds or the possibility of it falling over. Thus, the participants did not generally differentiate AELVO behavior from AEL behavior in trustworthiness.
Regarding affective and cognitive trust, participants rated the robot's AEL behavior higher in affective trust than the AELVO condition (H3b). People indicate their emotions through myriad verbal expressions and nonverbal communication in their relationships. However, nonverbal immediacy has an advantage over utterance, as it is associated with messages of positive feelings, intimacy, and affection [69]. Emotions unconsciously change our voice, body movements, and facial expressions; for instance, smiling communicates happiness to others, and waving the arms signals excitement. Thus, when communication is supported by nonverbal immediacy, affectionate messages are transmitted more easily and quickly, which also fosters the growth of affective trust in HRI. Interestingly, however, AELVO behavior showed only a marginally significant difference (p = 0.055) from AL behavior in affective trust. Considering the results of Sect. 5.2, we can conclude that nonverbal behaviors are especially effective for affective trust: eliminating the nonverbal behaviors in the AELVO condition reduced its contribution to affective trust and made it more similar to AL behavior.
For cognitive trust, however, the results did not reach statistical significance, although the mean score was higher for AEL. This could be because both AEL and AELVO behaviors used identical verbal content, which mostly drives cognitive trust. Verbal communication is highly language-based and depends on understanding the meaning of words. Words have been found to be the stronger means of engagement, whereas body language affects social-emotional concepts [76]. Thus, a robot with AEL behavior, which combines verbal and nonverbal means of communication, can be the best option for designers seeking to develop a trustworthy robot with optimal outcomes.
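The pairwise p-values discussed above (e.g., p = 1.00, p = 0.055) are characteristic of post-hoc comparisons with a multiplicity correction, under which adjusted values are capped at 1.0. As an illustrative sketch only (this is not the authors' analysis code, and the raw p-values below are hypothetical), a Bonferroni adjustment over the six pairwise comparisons among the four listening-behavior conditions works as follows:

```python
# Illustrative sketch of Bonferroni adjustment for post-hoc pairwise
# comparisons among four conditions (NAL, AL, AEL, AELVO).
# The raw p-values below are hypothetical, not the study's data.

def bonferroni(raw_p):
    """Multiply each raw p-value by the number of comparisons, capped at 1.0."""
    m = len(raw_p)
    return {pair: min(p * m, 1.0) for pair, p in raw_p.items()}

# Six pairwise comparisons among four conditions (hypothetical raw p-values).
raw = {
    ("AEL", "NAL"):   0.0001,
    ("AEL", "AL"):    0.0050,
    ("AEL", "AELVO"): 0.0060,
    ("AL", "NAL"):    0.0002,
    ("AELVO", "NAL"): 0.0004,
    ("AELVO", "AL"):  0.0092,
}

adjusted = bonferroni(raw)
for pair, p in adjusted.items():
    verdict = "significant" if p < 0.05 else "n.s."
    print(pair, f"adjusted p = {p:.4f}", verdict)
```

Note how a raw p-value near the threshold (here 0.0092) becomes marginal (0.0552) after adjustment, mirroring how a corrected comparison can land just above the 0.05 cutoff.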
5.4 Limitations and Future Work
This study had certain limitations that should be considered. The first concerns the type of robot. Although NAO is popular in educational studies, it has inherent limitations, particularly for designing nonverbal behaviors, because it does not support facial expressions. Therefore, the nonverbal behaviors designed in this study lacked facial expressions such as smiling and eyebrow movements. Additionally, NAO's perceived gender varies across observers, and it may be perceived as a child rather than an adult. Considering the effect of gender on trust relations, future research should examine other types of robots whose shape or voice conveys gender differences. Second, some limitations stem from the nature of the participants: as international students, they belonged to different cultures, and English was not their mother tongue, which might have affected the flow of conversation with the robot. Notably, participants' personalities can also affect their interaction with the robot; for instance, some participants felt shy interacting with the robot, and some held negative attitudes. Further studies should consider gender, age, and cultural differences of participants as well as personality perspectives.
6 Conclusion
This study focused on building different listening behaviors for social robots and exploring their impact on users' trust perception of robots. The results indicated that AEL behavior was the most trust-eliciting among the listening behaviors, particularly in affective trust. Thus, by designing robots in this way, we hope to improve HRT in applications across healthcare, business, and education. However, we did not define any specific application scenario in our experiment, which is an opportunity for future studies. Furthermore, AL behavior was rated more trustworthy than NAL behavior in all aspects of trust, and less trustworthy than AEL behavior in affective trust. This indicates that adding emotional behaviors to social robots and supporting users with affectionate companionship improved the level of affective trustworthiness and made the robots more believable and benevolent for users. If a robot cannot be provided with emotional aspects because of technical limitations, AL behavior can be considered a second option for achieving trustworthiness in HRI. However, more research is needed on gender and cultural differences, because these factors are influential in interpersonal trust. Additionally, the results indicated the effectiveness of nonverbal behaviors over utterances in affective trust, confirming the power of nonverbal behaviors in conveying emotional messages. When the robot exhibited nonverbal communication such as head nodding, eye gaze, body movement, and gestures, users evaluated it as more reliable and affectively impressive. This suggests that robots should use appropriate nonverbal behaviors to obtain better outcomes in HRI, specifically in trust formation. However, we were not able to simulate nonverbal behaviors in all dimensions because of the robot's limitations in facial expressions and body movement, which provides another possibility for further investigation.
In general, this research on the relationship between social robots' listening behavior and trust narrows the gap toward developing trustworthy and successful robots.
Data Availability
Derived data supporting the findings of this study are available from the corresponding author on request.
Notes
To differentiate Schaefer's human-robot trust scale from the affective and cognitive trust scales, we used the term "general trust".
References
Breazeal C, Dautenhahn K, Kanda T (2016) Social Robotics. In: Siciliano B, Khatib O (eds) Springer Handbook of Robotics., 2nd ed. Springer International Publishing, pp 1935–1971
Beer JM, Liles KR, Wu X, Pakal S (2017) Affective Human–Robot Interaction. In: Emotions and affect in human factors and Human-Computer Interaction. Elsevier Inc., pp 359–381
Stanton CJ, Stevens CJ (2017) Don’t stare at me: the impact of a Humanoid Robot’s gaze upon Trust during a Cooperative Human–Robot Visual Task. Int J Soc Robot 9:745–753. https://doi.org/10.1007/s12369-017-0422-y
Hancock PA, Billings DR, Schaefer KE, et al (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53:517–527. https://doi.org/10.1177/0018720811417254
Robbins BG (2014) On the Origins of Trust. University of Washington
Cameron D, Loh EJ, Chua A, et al (2016) Robot-stated limitations but not intentions promote user assistance. AISB Annu Conv 2016, AISB 2016
Rotter JB (1967) A new scale for the measurement of interpersonal trust. J Pers 35:651–665
Muir BM (1987) Trust between humans and machines. Int J Man Mach Stud 27:327–339
Salem M, Lakatos G, Amirabdollahian F, Dautenhahn K (2015) Towards safe and trustworthy social robots: ethical challenges and practical issues. In: Tapus A, André E, Martin JC, Ferland F, Ammi M (eds) Social Robotics. ICSR 2015. Lecture Notes in Computer Science. Springer, Cham, pp 584–593
Rempel JK, Holmes JG, Zanna MP (1985) Trust in Close relationships. J Pers Soc Psychol 49:95–112. https://doi.org/10.1037/0022-3514.49.1.95
Madhavan P, Wiegmann DA (2007) Similarities and differences between human–human and human–automation trust: an integrative review. Theor Issues Ergon Sci 8:277–301. https://doi.org/10.1080/14639220500337708
Williams M (2001) In whom we trust: Group Membership as an affective context for Trust Development. Acad Manag Rev 26:377–396
Plank RE, Reid DE (2010) The interrelationships of empathy, trust and conflict and their impact on sales performance: an exploratory study. Mark Manag J 20:119–139
Schoorman FD, Mayer RC, Davis JH (2007) An integrative model of organizational trust: past, present, and future. Acad Manag Rev 32:344–354. https://doi.org/10.1023/B:JRNC.0000040887.00868.02
Verosky SC, Todorov A (2010) Differential neural responses to faces physically similar to the self as a function of their valence. Neuroimage 49:1690–1698. https://doi.org/10.1016/j.neuroimage.2009.10.017
Van Den Akker OR, van Assen MALM, Van Vugt M, Wicherts JM (2020) Sex differences in trust and trustworthiness: a meta-analysis of the trust game and the gift-exchange game. J Econ Psychol 81:102329. https://doi.org/10.1016/j.joep.2020.102329
DeSteno D, Breazeal C, Frank RH, et al (2012) Detecting the trustworthiness of Novel Partners in Economic Exchange. Psychol Sci 23:1549–1556. https://doi.org/10.1177/0956797612448793
DeBruine LM (2005) Trustworthy but not lust-worthy: context-specific effects of facial resemblance. Proc R Soc B Biol Sci 272:919–922. https://doi.org/10.1098/rspb.2004.3003
Todorov A (2008) Evaluating faces on trustworthiness: an extension of systems for recognition of emotions signaling approach/avoidance behaviors. Ann N Y Acad Sci 1124:208–224. https://doi.org/10.1196/annals.1440.012
Farmer H, McKay R, Tsakiris M (2014) Trust in Me: trustworthy others are seen as more physically similar to the self. Psychol Sci 25:290–292. https://doi.org/10.1177/0956797613494852
Hillen MA, De Haes HCJM, Van Tienhoven G, et al (2015) All eyes on the patient: the influence of oncologists’ nonverbal communication on breast cancer patients’ trust. Breast Cancer Res Treat 153:161–171. https://doi.org/10.1007/s10549-015-3486-0
Brunner BR (2008) Listening, communication & trust: practitioners’ perspectives of business/organizational relationships. Int J List 22:73–82. https://doi.org/10.1080/10904010701808482
Ramsey RP, Sohi RS (1997) Listening to your customers: the impact of perceived salesperson listening behavior on relationship outcomes. J Acad Mark Sci 25:127–137. https://doi.org/10.1007/BF02894348
Lester D (2002) Active listening. In: Lester D (ed) Crisis intervention and counseling by telephone, 2nd ed. Charles C Thomas, Springfield, pp 92–98
Gordon T (1975) Parent effectiveness training. New American Library, New York
Kraus M, Kraus J, Baumann M, Minker W (2018) Effects of gender stereotypes on trust and likability in spoken human-robot interaction. In: LREC 2018–11th International Conference on Language Resources and Evaluation. pp 112–118
Gallimore D, Lyons JB, Vo T, et al (2019) Trusting robocop: gender-based effects on trust of an autonomous robot. Front Psychol 10:1–9. https://doi.org/10.3389/fpsyg.2019.00482
Bernotat J, Eyssel F, Sachse J (2019) The (fe)male Robot: how Robot body shape Impacts First Impressions and Trust towards Robots. Int J Soc Robot. https://doi.org/10.1007/s12369-019-00562-7
Ghazali AS, Ham J, Barakova EI, Markopoulos P (2018) Effects of robot facial characteristics and gender in persuasive human-robot interaction. Front Robot AI 5:1–16. https://doi.org/10.3389/frobt.2018.00073
Kiesler S, Goetz J (2002) Mental Models and Cooperation with Robotic Assistants. In: CHI’02 extended abstracts on Human factors in computing systems. ACM, pp 576–577
Bainbridge WA, Hart J, Kim ES, Scassellati B (2008) The effect of presence on human-robot interaction. In: Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN. IEEE, pp 701–706
Nass C, Lee KM (2001) Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. J Exp Psychol Appl 7:171–181. https://doi.org/10.1037/1076-898X.7.3.171
Tapus A, Mataric MJ, Scassellati B (2007) Socially assistive robotics [Grand challenges of robotics]. IEEE Robot Autom Mag 14:35–42. https://doi.org/10.1109/MRA.2007.339605
Volpe G, Camurri A (2011) A system for embodied social active listening to sound and music content. ACM J Comput Cult Herit 4:1–23. https://doi.org/10.1145/2001416.2001418
Ogawa H, Watanabe T (2000) InterRobot: a speech driven embodied interaction robot. Proc - IEEE Int Work Robot Hum Interact Commun 322–327. https://doi.org/10.1109/ROMAN.2000.892517
Ogasawara Y, Okamoto M, Nakano YI, Nishida T (2005) Establishing natural communication environment between a human and a listener robot. Proc Symp Conversational Informatics Support Soc Intell Interact Situational Environ Inf Enforc Involv Conversat 42–51
Mohammad Y, Nishida T (2008) Towards natural listening on a humanoid robot. Proc - Int Conf Informatics Educ Res Knowledge-Circulating Soc ICKS 2008 153–156. https://doi.org/10.1109/ICKS.2008.4
Kobayashi Y, Yamamoto D, Koga T, et al (2010) Design targeting voice interface robot capable of active listening. 2010 5th ACM/IEEE int conf Human-Robot Interact 161–162. https://doi.org/10.1109/hri.2010.5453214
Lala D, Milhorat P, Inoue K, et al (2017) Attentive listening system with backchanneling, response generation and flexible turn-taking. SIGDIAL 2017–18th Annu Meet Spec Interes Gr discourse dialogue, Proc Conf 127–136. https://doi.org/10.18653/v1/w17-5516
Johansson M, Hori T, Skantze G, et al (2016) Making turn-taking decisions for an active listening Robot for Memory Training. ICSR 2016, LNAI 9979 940–949. https://doi.org/10.1007/978-3-319-47437-3
DeVault D, Artstein R, Benn G, et al (2014) SimSensei kiosk: A virtual human interviewer for healthcare decision support. 13th Int Conf Auton Agents Multiagent Syst AAMAS 2014 2:1061–1068
Kanda T, Kamasima M, Imai M, et al (2007) A humanoid robot that pretends to listen to route guidance from a human. Auton Robots 22:87–100. https://doi.org/10.1007/s10514-006-9007-6
Worthington DL, Bodie GD (2018) Defining listening: a historical, theoretical, and pragmatic Assessment. In: Worthington DL, Bodie GD (eds) The sourcebook of listening research: methodology and measures, 1st ed. John Wiley & Sons, pp 3–18
Bauer C, Figl K (2008) Active listening in written online communication - A case study in a course on Soft Skills for Computer Scientists. In: Proceedings - Frontiers in Education Conference, FIE
Weissglass J (1990) Constructivist listening for empowerment and change. Educ Forum 54:351–370. https://doi.org/10.1080/00131729009335561
Rogers C, Farson RE (1957) Active listening. Industrial Relations Center, University of Chicago, Chicago
Browning S, Waite R (2010) The gift of listening: JUST listening strategies. Nurs Forum 45:150–158. https://doi.org/10.1111/j.1744-6198.2010.00179.x
Comer LB, Drollinger T (1999) Active empathetic listening and selling success: a conceptual Framework. J Pers Sell Sales Manag 19:15–29
Gearhart CC, Bodie GD (2011) Active-empathic listening as a General Social Skill: evidence from Bivariate and Canonical Correlations. Commun Reports 24:86–98. https://doi.org/10.1080/08934215.2011.610731
Drollinger T, Comer LB, Warrington PT (2006) Development and validation of the active empathetic listening scale. Psychol Mark 23:161–180. https://doi.org/10.1002/mar.20105
Bodie GD, St. Cyr K, Pence M, et al (2012) Listening competence in initial interactions I: distinguishing between what listening is and what listeners do. Int J List 26:1–28. https://doi.org/10.1080/10904018.2012.639645
Weger H, Castle GR, Emmett MC (2010) Active listening in peer interviews: the influence of message paraphrasing on perceptions of listening skill. Int J List 24:34–49. https://doi.org/10.1080/10904010903466311
Nugent WR, Halvorson H (1995) Testing the Effects of active listening. Res Soc Work Pract 5:152–175. https://doi.org/10.1177/104973159500500202
Fassaert T, van Dulmen S, Schellevis F, Bensing J (2007) Active listening in medical consultations: development of the active listening Observation Scale (ALOS-global). Patient Educ Couns 68:258–264. https://doi.org/10.1016/j.pec.2007.06.011
Lasky S (2000) The cultural and emotional politics of teacher-parent interactions. Teach Teach Educ 16:843–860. https://doi.org/10.1016/S0742-051X(00)00030-5
Drollinger T, Comer LB (2013) Salesperson’s listening ability as an antecedent to relationship selling. J Bus Ind Mark 28:50–59. https://doi.org/10.1108/08858621311285714
Weger H, Castle Bell G, Minei EM, Robinson MC (2014) The relative effectiveness of active listening in initial interactions. Int J List 28:13–31. https://doi.org/10.1080/10904018.2013.813234
Johnson-George C, Swap WC (1982) Measurement of specific interpersonal trust: construction and validation of a scale to assess trust in a specific other. J Pers Soc Psychol 43:1306–1317. https://doi.org/10.1037/0022-3514.43.6.1306
Rousseau DM, Sitkin SB, Burt RS, Camerer C (1998) Not so different after all: a cross-discipline view of trust. Acad Manag Rev 23:393–404. https://doi.org/10.5465/AMR.1998.926617
Yamagishi T, Yamagishi M (1994) Trust and commitment in the United States and Japan. Motiv Emot 18:129–166
Koehn D (2003) The nature of and conditions for Online Trust. J Bus Ethics 43:3–19
Riegelsberger J, Sasse MA, McCarthy JD (2005) The mechanics of trust: a framework for research and design. Int J Hum Comput Stud 62:381–422. https://doi.org/10.1016/j.ijhcs.2005.01.001
Mcallister DJ (1995) Affect and cognition based trust as foundations for interpersonal cooperation in organizations. Acad Manag J 38:24–59
Lewis JD, Weigert A (1985) Trust as a social reality. Soc Forces 63:967–985
Costigan RD, Ilter SS, Berman JJ (1998) A multi-dimensional study of Trust in Organizations. J Manag Issues 10:303–317
Zur A, Leckie C, Webster CM (2012) Cognitive and affective trust between australian exporters and their overseas buyers. Australas Mark J 20:73–79. https://doi.org/10.1016/j.ausmj.2011.08.001
Johnson D, Grayson K (2005) Cognitive and affective trust in service relationships. J Bus Res 58:500–507. https://doi.org/10.1016/S0148-2963(03)00140-1
Duncan S, Barrett LF (2007) Affect is a form of cognition: a neurobiological analysis. Cogn Emot 21:1184–1211. https://doi.org/10.1080/02699930701437931
Floyd K (2014) Empathic listening as an expression of interpersonal affection. Int J List 28:1–12. https://doi.org/10.1080/10904018.2014.861293
Robertson K (2005) Active listening: more than just paying attention. Aust Fam Physician 34:1053–1055
Buschmeier H, Malisz Z, Skubisz J, et al (2014) ALICO: A multimodal corpus for the study of active listening. In: Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014. pp 3638–3643
Libow JA, Doty DW (1976) An evaluation of empathic listening in telephone counseling. J Couns Psychol 23:532–537. https://doi.org/10.1037//0022-0167.23.6.532
Mehrabian A (1971) Silent messages: Implicit Communication of Emotions and Attitudes. Wadsworth, California
Jones SM, Guerrero LK (2001) The Effects of Nonverbal Immediacy and Verbal Person Centeredness in the emotional support process. Hum Commun Res 27:567–596. https://doi.org/10.1111/j.1468-2958.2001.tb00793.x
Hillen MA, De haes HCJM, Stalpers LJA, et al (2014) How can communication by oncologists enhance patients’ trust? An experimental study. Ann Oncol 25:896–901. https://doi.org/10.1093/annonc/mdu027
Thepsoonthorn C, Ogawa KI, Miyake Y (2018) The relationship between Robot’s Nonverbal Behaviour and Human’s likability based on Human’s personality. Sci Rep 8:1–11. https://doi.org/10.1038/s41598-018-25314-x
Kraut RE, Lewis SH, Swezey LW (1982) Listener responsiveness and the coordination of conversation. J Pers Soc Psychol 43:718–731. https://doi.org/10.1037//0022-3514.43.4.718
Hadar U, Steiner TJ, Rose FC (1985) Head movement during listening turns in conversation. J Nonverbal Behav 9:
Dahlbäck N, Jönsson A, Ahrenberg L (1993) Wizard of Oz studies: why and how. Knowledge-Based Syst 6:258–266
Rossi S, Staffa M, Bove L, et al (2017) User’s personality and activity influence on HRI comfortable distances. In: Social robotics. Springer, Cham, pp 167–177
Ho S, Foulsham T, Kingstone A. (2015) Speaking and listening with the eyes: Gaze Signaling during Dyadic interactions. PLoS One 10(8):e0136905. https://doi.org/10.1371/journal.pone.0136905
McClave EZ (2000) Linguistic functions of head movements in the context of speech. J Pragmat 32:855–878. https://doi.org/10.1016/S0378-2166(99)00079-X
McColl D, Nejat G (2014) Recognizing emotional body Language displayed by a human-like Social Robot. Int J Soc Robot 6:261–280. https://doi.org/10.1007/s12369-013-0226-7
Thimmesch-Gill Z, Harder KA, Koutstaal W (2017) Perceiving emotions in robot body language: Acute stress heightens sensitivity to negativity while attenuating sensitivity to arousal. Comput Human Behav 76:59–67. https://doi.org/10.1016/j.chb.2017.06.036
Ivey AE, Daniels T (2016) Systematic interviewing Microskills and Neuroscience: developing Bridges between the Fields of Communication and Counseling psychology. Int J List 30:99–119. https://doi.org/10.1080/10904018.2016.1173815
Ninomiya T, Fujita A, Suzuki D, Umemuro H (2015) Development of the multi-dimensional robot attitude scale: constructs of people’s attitudes towards domestic robots. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics) 9388 LNCS:482–491. https://doi.org/10.1007/978-3-319-25554-5_48
Schaefer KE (2016) Measuring trust in human robot interactions: development of the "Trust Perception Scale-HRI". In: Mittu R, Sofge D, Wagner A, Lawless WF (eds) Robust Intelligence and Trust in Autonomous Systems. Springer, Boston, pp 191–218
Gompei T, Umemuro H (2018) Factors and development of cognitive and Affective Trust on Social Robots. In: et al. Social Robotics. ICSR 2018. vol 11357. Springer, Cham. https://doi.org/10.1007/978-3-030-05204-1_5
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of Organizational Trust. Acad Manag Rev 20:709–734
Slovic P, Lichtenstein S, Fischhoff B (2012) Decision making. Lightning Source
Ma F, Wylie BE, Luo X, et al (2018) Apologies repair children’s trust: the mediating role of emotions. J Exp Child Psychol 176:1–12. https://doi.org/10.1016/j.jecp.2018.05.008
Aggarwal P, Castleberry SB, Ridnour R, Shepherd CD (2005) Salesperson empathy and listening: impact on relationship outcomes. J Mark Theory Pract 13:16–31. https://doi.org/10.1080/10696679.2005.11658547
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Ethics approval
This research was approved by human subjects research ethics board of the Tokyo Institute of Technology. The participants provided their written informed consent to participate in this study.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Anzabi, N., Umemuro, H. Effect of Different Listening Behaviors of Social Robots on Perceived Trust in Human-robot Interactions. Int J of Soc Robotics 15, 931–951 (2023). https://doi.org/10.1007/s12369-023-01008-x