1 Introduction
A wide range of experimental work in
Human-Robot Interaction (
HRI) has shown that robots can influence the social behavior of humans. For example, prior work suggests that the mere presence of robots can facilitate honest human behavior [
40], that robot nonverbal behavior can induce changes in human nonverbal behavior as well [
49], and that robots can motivate people to engage in activities like exercising [
30,
73]. Furthermore, robot cheating behavior has been found to influence human engagement in HRIs [
88] and robot emotional expressions may influence human willingness to collaborate with these machines [
104]. In group settings, robots have further been shown to influence not only HRIs but also human-human interaction dynamics [
20,
27,
35,
75,
79,
91].
For 25 years, our understanding of robot social influence has been dominated by Reeves and Nass’s Media Equation [
74]. The fundamental premise of the Media Equation is that when predicting how people will act when interacting with a novel technology, one needs only to look at how people normally engage with each other. If wearing identical blue armbands makes two people act more like a team, then we predict a similar outcome when a person and a robot share the same armbands. This idea has had a foundational influence on both the fields of
Human-Computer Interaction (
HCI) and HRI. We explicitly design human-computer and human-robot interactions by following examples provided by human-human interactions, under the strong assumption that these will continue to hold true (e.g., [
5]).
There are times, though, when the Media Equation does not offer a prediction because there is no equivalent in human-human behavior. We can make no prediction about how a person will behave when asked to turn a robot off and then on again, as this is not something that could be done to a person. Perhaps more importantly, there is a growing collection of results showing where the predictions made by the Media Equation fall flat [
33]. People often feel comfortable making mistakes in front of robot tutors when they would not let these errors be seen by human tutors [
54]. We accept that robots can take on roles that we do not consider appropriate for people [
52]. We also feel pressure to conform to groups of humans, even when they give obviously incorrect answers to simple questions, but not to robots [
13].
Many researchers have tried to address the limitations of the Media Equation; however, these alternative models focus on predicting how people will perceive technology and do not predict the type and magnitude of the social influence exerted by the technology. For instance, several alternative models focus on computers [
85] or voice/virtual agents [
67]. They suggest that people use different communication strategies when interacting with technology, use different cognitive processes, and are less engaged. Several models define interaction with technology as a separate independent type of interaction that involves novel social scripts, triggering different types of control and confidence [
33,
67]. Specifically, in the context of robots, a few models were suggested to address the complexity of how people perceive robots. For example, Zlotowski et al. (2018) suggested that human perceptions of a robot depend on a dual evaluation process that involves both an implicit automatic perception and an explicit controlled evaluation of the robot [
105]. Ruijten et al. (2019) suggested an alternative that involves the ordering of human-like characteristics on a range of perceived human-likeness [
78]. Another recent explanation for robot perception was suggested by Clark and Fisher (2023). They argue that people do not perceive robots as social agents themselves but that they construe robots as interactive depictions [
18]. The authors suggest that robots are perceived from three different perspectives: (1) Base Scene—the robot’s physical aspects (appearance, material, design); (2) Depiction Proper—the features one associates with the base scene, such as what social agent the robot represents; (3) Scene Depicted—based on the previous two perspectives, the robot is perceived as a character, which allows for imagining probable scenes involving the robot. The authors also suggest that people can effortlessly take any of these perspectives and switch between them. Other theories have emphasized the human-likeness of the robot’s appearance, behavior, social skills, and level of personalization as factors that determine how people perceive robots [
26,
43]. While these models can account for the limitations of the Media Equation in explaining how people perceive robots, they do not predict differences in the magnitude of the impact on people’s social behavior. If people do not perceive robots in the same way they perceive other people, it is important to identify the similarities and differences in robots’ social influence.
In this article, we propose a model for predicting the magnitude of a
robot’s social influence through two independent factors:
violation of expectations and
belonging. Our model focuses on what we consider to be the two most important and critical factors, at times valuing simplicity over complete accounting of all possible factors. We believe that these two factors capture the unique social context that a robot affords as something that is at times an inanimate object (and not subject to the classical Media Equation) and, at times, a social agent (i.e., the interaction with it is characterized by human-like social effects, [
41]), creating a specific set of expectations around a particular interaction. Furthermore, the robot’s status as being in some ways similar to ourselves (as an in-group member) and at times very different from ourselves (as an out-group member who does not belong) shapes the influence that these artifacts have over us.
In the following sections, we first explain and ground our three factors: the dependent and predicted factor of social influence and the independent factors of violation of expectations and belonging. Then, we provide the first step of validating our model, which we call RoSI (Robot Social Influence), by applying the model to five previously published user studies and predicting their results. We visualize these predictions in use-case diagrams. Through those example predictions, we provide insight into how RoSI should be applied to HRI experimental designs. The goal of this work is to provide a theoretical model that can explain the outcome of an experiment by predicting differences in the robot’s social influence.
2 A Model of Robot Social Influence (RoSI)
Figure
1 describes our theoretical model of RoSI, which predicts the magnitude of a robot’s social influence on a person based on two factors: (1) the degree to which the robot exceeds or fails to meet the person’s expectations and (2) the degree to which the person considers themselves and the robot as belonging to the same group. In particular, the model predicts a larger magnitude of social influence when the robot exceeds expectations and a lower magnitude when it fails to meet them. It also predicts that both high and low belonging will involve high social influence, while a more neutral sense of belonging will involve a low influence.
We chose violations of expectations and belonging specifically as independent variables of our model because they are primary components and powerful predictors of robot social influence across a broad range of HRIs. In some cases, these two factors are predicted to involve different influence patterns for humans and robots (e.g., our expectations of robots can differ from our expectations of people). Previous studies have indicated other factors that can also shape the social influence of a robot, including robot anthropomorphism [
69,
103], robot status or power [
81], robot competence [
38], social pressure [
13], and perceived robot agency [
56,
88]. These other factors are commonly encapsulated in our chosen factors of expectations (e.g., power, competency, and agency) and group belonging (e.g., robot status and anthropomorphism).
The model makes several key assumptions: (1) social influence is presented from the perspective of a person who takes part in an interaction with a robot; (2) the model predicts the magnitude, but not the valence or type, of the social influence (which are highly context-dependent); (3) the model predicts the change in the magnitude of social influence across the interaction with the robot. While people have initial expectations and a sense of belonging at the beginning of an interaction, the robot’s behavior throughout the interaction can alter both, resulting in a different magnitude of social influence.
2.1 Predicted Factor: Robot Social Influence
Prior work in HRI has demonstrated that robots can change how people behave, how people perceive the world around them, and how people think and feel. In other words, robots can exert social influence on people. We define social influence as “a change in one’s beliefs, behavior, or attitudes due to external pressure that may be real or imagined” [
37]. Social influence can take many forms: a person may conform their choice to match that of a robot [
82,
83,
98], a person may comply with the request of a robot (even if it does not make sense) [
6,
76], or a person may follow a social norm introduced or reinforced by a robot [
68,
93].
2.1.1 Theoretical Background and Justification.
Rooted in humans’ sensitivity to social information in their environment [
61], people perceive robots’ actions as social and, as a result, are often influenced by them. It is argued that humans are social organisms [
94], with brain structures that specifically support awareness of others and communication skills that are fundamental to social coordination [
51,
61,
74]. The inherent propensity to perceive the world through a social lens [
101] is believed to lead to a strong tendency to anthropomorphize objects and to automatically associate autonomous actions with social intent [
2,
25,
29,
51,
90]. This social interpretation of interactions with autonomous objects is the basis for the various indications of social influence in HRIs.
At the same time, robots do not necessarily have the same social influence as humans do (e.g., [
13]). Different aspects of a robot’s design may shape the type and magnitude of its social influence. The robot’s appearance, capabilities, communication modalities, and role may all contribute to its social influence in a given interaction. We suggest that it is not enough to state that interactions with robots are interpreted as social experiences. Therefore, we present RoSI, a model that can be used to predict the unique social influence of a robot on the people with whom it interacts.
2.2 Model Independent Factor #1: Violation of a Person’s Expectations of the Robot
The first factor in RoSI is how much and in which direction a robot violates a person’s
social expectations. Social expectations are defined as cognitions concerning the behavior anticipated from others when interacting with them [
4,
16]. They guide people’s behavior by reducing uncertainty and directing their comprehension [
11,
14,
16,
84]. In the context of HRI, expectations become even more important due to the inherent uncertainty associated with an autonomous machine [
90]. A robot could meet the person’s expectations, exceed their expectations, or fail to meet the person’s expectations. A robot can also violate the person’s expectations in a way that surprises the person but does not exceed or fail to meet their expectations. We treat these cases as similar to the robot meeting the person’s expectations.
For example, consider a person’s expectations of a food-delivery robot. One way the robot might exceed expectations is by displaying empathy in addition to delivering food, for example, noticing that a person looks sad and saying “you look sad, is everything ok?” A food-delivery robot could fail to meet expectations if, due to a software failure, it is unable to speak its “here’s your delivery!” phrase as the user has come to expect. A robot might surprise a person by using a voice that the person did not expect, which neither exceeds expectations nor fails to meet them.
Robots that exceed a person’s expectations typically present complex social behaviors like vulnerability [
92], favoring one human over the other [
45], and touching the human [
19]. Exceeding expectations does not necessarily involve positive robotic behavior as some negative behaviors like cheating and excluding others [
28,
56] may be perceived as exceeding the robot’s anticipated behavior. Robots that fail to meet a person’s expectations often disappoint them since they exhibit lower capabilities than expected [
47]. Robots fail to meet expectations either due to humans’ unrealistic initial expectations or due to mistakes and errors related to the robot’s function (e.g., navigation errors [
80], perceptual and processing judgment errors [
47], and memory-related errors [
102]).
While some robot behaviors may cause expectation violations that consistently exceed or consistently fail to meet a person’s expectations, other behaviors may have distinct effects based on individual differences. For example, consider the possibility that a food-delivery robot receives a software update that has the robot express happiness after a successful delivery by spinning around in a circle. This additional expression of emotion could exceed one person’s expectations of the robot. However, another person may perceive the circle-spinning as a robot malfunction, resulting in a failure of the robot to meet that person’s expectations. It is also possible that the circle-spinning may not change a person’s perceptions of the robot, leaving their expectations unchanged (at “meets expectations”). As this example illustrates, the violation of expectations can be subjective.
2.2.1 Theoretical Background and Justification.
Violation of a person’s expectations of the robot was chosen as one of the model’s factors due to the centrality of expectations in interpersonal interactions [
14,
72,
90] and specifically how they shape human behavior in HRI [
48]. One of the primary concerns associated with interpersonal interactions is the uncertainty about others’ thoughts, attitudes, and behavior. It is suggested that in any given context, people unconsciously develop expectations that assist in predicting different aspects of the interaction [
4]. Such predictions have a persistent influence on the social sensitivity in the interaction [
84,
90].
Expectations may be even more central in HRIs [
72] because human norms cannot be fully applied to interactions with robots [
58], and people commonly have various and changing expectations of robots [
77]. At the same time, robots’ autonomous behavior typically positions them as independent social actors, triggering the set of expectations people apply to social contexts. Accordingly, expectations of robots are formed by various factors, ranging from relevant human norms in the social context to experience with real and fictional robots. Technical affinity and the way a robot is introduced in a specific interaction have also been shown to impact expectations of the robot [
36,
44,
70]. This complexity increases the need to understand and anticipate robots’ behavior during interactions, emphasizing the importance of deriving accurate expectations. When expectations are violated, one must reassess the situation, which leads to higher social sensitivity. The direction of the violation (exceeding or failing to meet expectations) and its magnitude determine the intensity of the social impact [
14,
15]. Various studies have already mapped people’s expectations of different robots (e.g., [
77]) and suggested methods for manipulating, structuring, and modifying expectations. These commonly involve a short explanation about the robot given before the interaction [
36,
70].
2.2.2 Prediction of Social Influence.
Expectations are constructed in RoSI as a relative factor predicting the intensity of the robot’s social influence. The model treats each person’s expectations violation as relative to their own initial expectations. Thus, the starting point for a person’s expectations of the robot is at the center of the horizontal axis of Figure
1, where the robot meets the expectations.
When a robot meets the person’s expectations, the model predicts no change in the robot’s social influence. No change is also predicted when the violation of expectations is merely surprising, since it neither exceeds nor fails to meet the person’s expectations. While surprise at a robot’s actions may influence the HRI in some way, we expect a significant change in the robot’s social influence only when the robot either exceeds or fails to meet the person’s expectations.
Robotic experiences that fail to meet expectations are predicted to decrease social influence (due to disappointment and a decrease in trust [
99]). On the other hand, robotic experiences that exceed a person’s expectations are predicted to increase social influence (due to the increase in the robot’s perceived capabilities). While the magnitude of the social influence is predicted to increase, the influence is not necessarily predicted to be positive. For instance, a robotic experience that exceeds expectations may lead to increased conformity and obedience.
The relative nature of this factor also suggests that the robot’s social influence will become more typical over repeated interactions, since expectations of the robot will be updated. If the robot presents consistent behaviors across several interactions, expectations of it will be adjusted accordingly. This, in turn, would form a relevant social context with a typical social influence.
As an illustrative example, consider a robot that provides product recommendations. Whether a person accepts the robot’s recommendation for a product to purchase is predicted to depend on the robot’s perceived group membership and on whether the robot meets the person’s initial expectations. If the robot fails to meet expectations (e.g., it is unable to provide information about the product or there are long delays in its responses to questions), the chances that its recommendation will be followed drop. On the other hand, if the robot provides a sense of caring by asking for the person’s name and preferences, it is likely to be perceived as exceeding expectations, and its recommendations are more likely to be accepted by the user.
2.3 Model Independent Factor #2: A Person’s Belonging to the Robot’s Group
The second input to RoSI is the person’s perception of their group membership relative to the robot: whether the person and the robot belong to the same group or different groups. Let us consider a card game where two human-robot teams play competitively against each other: participant A and robot Emys play against participant B and robot Glin, as explored by Correia et al. [
22]. Participant A would likely perceive a high degree of belonging to robot Emys’s group (their partner) and a low degree of belonging to robot Glin’s group (their competitor). We view high and low belonging to the robot’s group as a similar concept to intergroup membership (in-group/out-group) [
95]. High belonging to the robot’s group is analogous to the person viewing the robot as an in-group member. Low belonging to the group is analogous to the person viewing the robot as an out-group member. It is also possible for a person’s belonging to the robot’s group to exist somewhere in between low belonging and high belonging (e.g., at a neutral midpoint).
People’s perceptions of their and the robot’s group membership can be powerfully shaped by both the robot’s behavior and environmental factors. People show greater signs of shared group membership (e.g., favor, trust, and liking) with robots that express vulnerability [
60,
92], use team-related verbal expressions [
21,
81], display empathy [
53], and use social touch [
50,
87,
100]. Robots do not even need to have a humanlike appearance to foster greater group membership and belonging with people. Using expressive gestures, non-humanoid robots have effectively communicated responsiveness [
12] and security [
59] in one-on-one interactions with people. In addition to robot behavior, environmental factors, such as pre-assigned interaction roles [
21,
32,
91], can also influence people’s views of robots’ group membership. For example, when robots are part of the same team as a person, those robots are seen as in-group members; however, when robots are part of a team competing against the person, they are seen as out-group members [
32].
2.3.1 Theoretical Background and Justification.
The focus on belonging as one of the two factors determining the robot’s social influence is grounded in its profound impact on human behavior in interpersonal interactions [
10,
17]. The dramatic effect of belonging is attributed to a fundamental human drive to form meaningful relationships with others that involve positive and pleasant interactions [
1,
9,
10]. In order to satisfy the need to belong, people must establish meaningful interpersonal connections [
63] commonly supported by sharing group membership [
10,
42]. Belonging to a group leads to the rapid formation of strong group bonds, loyalty, and group identification ties [
42,
57,
86]. Sharing group membership encourages behaviors that enhance the chances of being included, such as showing favoritism to in-group members [
7,
42,
57,
86] and defending the integrity of the intergroup social bonds [
10]. Belonging to a group also influences the perceptions of out-groups [
10,
62] leading to negative attitudes and rejection of those with different group membership [
62]. This variety of group membership effects contributes to the centrality of belonging in shaping interactions.
While group membership effects are also observed in HRI, they are not always similar to those observed in human interactions and present more complex patterns of social influence that underscore their importance to the model. Robots are not naturally perceived as in-group members and can often be considered a part of a potentially competing out-group [
89]. People are more likely to consider themselves as members of a group with people who share more in common with them [
24]. This is supported by evidence that people demonstrate greater in-group favoritism to other humans as opposed to robots [
32], and to more human-like robots as opposed to more machine-like robots [
31]. In some cases, interactions with robots fail to show any belonging effects that are typically observed in human groups [
13]. However, robots’ behavior and various environmental factors may form a strong sense of mutual human-robot group membership. In such contexts, people have shown a preference for in-group robots over out-group humans [
32]. Given these findings, understanding how a person views their group membership relative to a robot is essential in determining the amount of social influence a robot can exert on the person.
2.3.2 Prediction of Social Influence.
The social influence people experience from a robot will depend on how they view the robot’s group membership. In explaining this relationship between social influence and belonging, we consider three categories of belonging: low belonging, neutral belonging, and high belonging. The belonging category may change across repeated interactions due to familiarity effects and the development of the relationship (which can be either positive or negative).
Our model predicts that a robot will have high social influence in both cases of high belonging and low belonging when compared with cases of neutral belonging. When a person views themselves and the robot as members of the same group (high belonging), we expect them to display in-group favoritism towards the robot, and thus an increased likelihood to be influenced by the robot. For example, if a robot viewed with high belonging makes a recommendation about which product to purchase, our model predicts that the person will be likely to follow that recommendation.
When a person views themselves and the robot as members of different groups (low belonging), the robot’s influence also increases. This is especially likely if the robot is viewed as an authoritative figure [
3,
34]. Additionally, people can display outgroup hostility, with possible negative attitudes and aggressive behavior towards the robot. Taking the same example, if a robot makes a recommendation about which product to purchase and the person views the robot with low belonging, the person may intentionally choose to purchase a different product or nothing at all, acting against the recommendation of the robot.
It is also possible for a person to view a robot in a more neutral way (neutral belonging), where they do not view the robot as either a close in-group member (high belonging) or an opposing out-group member (low belonging). In these cases of neutral belonging, our model predicts that the robot will have low social influence and that it is less likely to change the behavior or attitudes of the person with whom the robot is interacting.
2.4 A Potential Mathematical Expression for RoSI
A detail-oriented reader may have noticed that we omit units from the axes and from the colormap of Figure
1, i.e., there are no ticks with specific coordinates on the x and y axes or in the colormap. We do this to emphasize how the pattern of social influence changes based on the two independent factors, rather than whether a person’s perspective about a robot has a specific
\((x,y)\) coordinate or social influence takes on a specific numeric value. In other words, RoSI provides the pattern of impact on social influence; it is not intended to provide an exact prediction of the social influence magnitude. The pictorial representation of RoSI in Figure
1 is inspired by other models with visual representations, like the uncanny valley model [
65]. The uncanny valley is often conveyed and reasoned about through a 2-dimensional plot that depicts the relationship between the human likeness of an entity and the perceiver’s affinity for it (e.g., see Figure 1 in [
71]). Similar to how the early uncanny valley plots lacked units and concrete measurements, our pictorial representation for RoSI lacks specific units as well. Also, similar to how follow-up work to the uncanny valley proposal measured concrete examples along the uncanny valley curve and has proposed refinements to the early model (cf. [
8,
46]), we expect future work to also refine our model of robot social influence for specific contexts of use.
It was important for us to generate RoSI diagrams in this article using a systematic and reproducible procedure. Although we did not want these diagrams to focus on specific \((x,y)\) and social influence values as described before, defining this systematic procedure required choosing some underlying mathematical formulation for RoSI. With such a formulation, one could then reason about robot social influence across different potential interactions with a robot, as further described in the next section.
We considered different mathematical formalizations for RoSI that matched the general shape of RoSI’s pictorial representation in Figure
1(d); ultimately, though, we chose to err on the side of simplicity per Occam’s razor or the principle of parsimony. We chose to generate all RoSI diagrams shown in this article with the following expression:
\({\it Social\_Influence}(x,y) = x + \alpha y^2\), where
x represents violations of expectations,
y represents belonging, and
\(\alpha\) is a scaling factor between the independent terms. For this mathematical expression, we assumed that
\(\alpha \in \mathbb {R}_{\gt 0}\) is a positive scalar such that both independent factors contribute to social influence. Also, we assumed that
\(y \in [-c, c]\) with
\(c \in \mathbb {R}_{\gt 0}\) and square the contribution of
y such that
\({\it Social\_Influence}(x,y)\) follows the 3D surface depicted in Figure
1(d).
It is important to note that while squaring
y increases its impact in comparison to
x, the
\(\alpha\) coefficient can further define the tradeoff between these factors. We could have chosen the absolute value for the
y component of the equation instead of the quadratic term, but this would have resulted in a more sensitive variation in influence around
y = 0. Instead of that variation, we wanted to accentuate that we expect social influence to be more pronounced with high or low belonging, farther away from
y = 0. Other mathematical expressions that follow the shape of the surface in Figure
1(a) are possible for RoSI, e.g., instead of squaring
y, one could use other even powers, which are also convex functions, such as
\(y^4\). Additional possibilities include more complex functions like a Tukey loss curve. We leave the exploration of other mathematical forms for RoSI to future work.
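To make the expression above concrete, here is a minimal sketch of how one could compute and plot the RoSI surface, assuming \(\alpha = 1\) and illustrative unitless ranges of \([-1, 1]\) for both factors; these specific values are our own assumptions for illustration, since the model is deliberately unitless.

```python
import numpy as np
import matplotlib.pyplot as plt

ALPHA = 1.0  # assumed positive scaling factor between the two terms

def social_influence(x, y):
    """Section 2.4 expression: x + alpha * y**2, where x is the violation of
    expectations and y is belonging; all values are unitless by design."""
    return x + ALPHA * y ** 2

# Illustrative, unitless ranges for both independent factors.
x = np.linspace(-1.0, 1.0, 200)  # fails to meet ... exceeds expectations
y = np.linspace(-1.0, 1.0, 200)  # low ... high belonging
X, Y = np.meshgrid(x, y)

fig, ax = plt.subplots(figsize=(5, 4))
im = ax.pcolormesh(X, Y, social_influence(X, Y), shading="auto", cmap="viridis")
ax.set_xlabel("violation of expectations")
ax.set_ylabel("belonging")
fig.colorbar(im, ax=ax, label="predicted magnitude of social influence")
plt.show()
```

Only the resulting pattern (monotonic in the expectations term, convex in the belonging term) is meaningful; the absolute values depend entirely on the assumed constants.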
3 Employing Use-case Diagrams to Display RoSI Predictions
When focusing on a specific HRI, RoSI predicts the pattern of magnitude change in the robot’s social influence according to the implementation of the two predicting factors (
violation of expectations and
belonging) in that specific interaction. This type of integrated impact can be visualized in model use-case diagrams, as exemplified in Figure
2.
Use-case diagrams are a direct application of the social influence surface depicted in Figure
1. The independent variables represented by the
x and
y axes and their underlying relationship to social influence are the same between the RoSI model in Figure
1 and the RoSI use-case diagrams. More specifically, the
x axis of a use-case diagram corresponds to the degrees to which a robot exceeds or fails to meet the person’s expectations (factor #1 in Section
2.2). The
y axis corresponds to how much the person considers themselves and the robot as belonging to the same group (factor #2 in Section
2.3). Finally, social influence in a use-case diagram is predicted to vary in a similar fashion to the surface depicted in Figure
1(a) and as explained in Section
2.4. Thus, the colormaps used to convey social influence in Figure
1 and in the use-case example in Figure
2 are the same.
There is one main difference between the RoSI model plots in Figure
1 and use-case diagrams: use-case diagrams highlight specific points in time during interactions, e.g., according to experimental conditions in a user study. Thus, one can think of a use-case diagram as multiple RoSI model plots, like the ones in Figure
1, overlaid on top of each other. Each of the plots conveys how social influence changes over time according to different interactions. This is achieved by visualizing a trajectory from an initial set of values for the model’s independent factors (
\(x_1,y_1\)) when an interaction starts to a new set of values (
\(x_2,y_2\)) later in time, when an interaction has already taken place. The usefulness of RoSI is in predicting the pattern of magnitude change in the robot’s social influence during the interaction. Consequently, the value of a use-case diagram stems from being able to visualize these predictions all in one place, facilitating comparisons. Importantly, the use-case diagrams assume that in all the interactions that they consider, the initial expectation of interest (factor #1) is uniform across the interactions when the human-robot encounters start. That is,
\(x_1\) is the same across all interactions visualized in the use-case diagram. This assumption is important to be able to compare different interaction trajectories with respect to violations of expectations in one diagram because this is intrinsically a relative construct. One has to have some initial set of expectations for them to be violated in some way.
Use-case diagrams highlight the person’s perspective when an interaction begins with a start-marker symbol. This symbol is always at the midpoint of the x axis because violation of expectations is a relative construct, as explained before. The position of the start marker on the
y axis depends on the degree to which the interaction started with the person perceiving themselves and the robot as sharing group membership.
The social influence at the end of the interaction with the robot is indicated by a rectangle in the use-case diagrams. The rectangle’s position represents the violation of expectations experienced in the interaction and the final perception of sharing group membership with the robot. The estimated magnitude of the social influence is indicated by the thickness of the rectangle’s frame and its edge color, which correspond with the social influence magnitude pattern presented in Figure
1. The arrow from the start marker to the rectangle shows how the person’s perspective changed during the HRI, e.g., according to an experimental manipulation during a user study.
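To illustrate how these diagram elements relate to the model, the following sketch draws one hypothetical interaction trajectory on top of the RoSI surface. The start and end coordinates, the star marker standing in for the start symbol, and the rule tying outline thickness to the computed magnitude are all assumptions made for this example rather than details prescribed by the model.

```python
import numpy as np
import matplotlib.pyplot as plt

ALPHA = 1.0  # assumed scaling factor (Section 2.4)

def social_influence(x, y):
    # x = violation of expectations, y = belonging (unitless)
    return x + ALPHA * y ** 2

# Faded RoSI surface as the diagram background.
X, Y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
fig, ax = plt.subplots(figsize=(5, 4))
ax.pcolormesh(X, Y, social_influence(X, Y), shading="auto", cmap="viridis", alpha=0.5)

# Hypothetical interaction: expectations start at the x-axis midpoint (a
# relative construct) with a mild initial sense of belonging; the interaction
# then exceeds expectations and strengthens belonging.
start = (0.0, 0.2)  # (x1, y1)
end = (0.6, 0.6)    # (x2, y2)

ax.plot(*start, marker="*", markersize=14, color="black")             # start symbol
ax.annotate("", xy=end, xytext=start, arrowprops=dict(arrowstyle="->"))
magnitude = social_influence(*end)
ax.scatter(*end, marker="s", s=150, facecolors="none", edgecolors="black",
           linewidths=1.0 + 2.0 * max(magnitude, 0.0))                # end rectangle
ax.set_xlabel("violation of expectations")
ax.set_ylabel("belonging")
plt.show()
```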
3.1 Applying the Model to a Study
In this section, we will go through an example of how to apply the RoSI model to a given study. We would like to highlight that the following discussion is based on the experience and judgment of the researchers. Therefore, we recommend discussing the planned experiment among experts when applying the model to make predictions. We will use as an example the hypothetical study shown in Figure
2, where the aim is to predict the social influence of a robot on a participant given the ideas it provides during a brainstorming session. Let us say that there are two types of robot behaviors that we are interested in comparing: one where the robot supports the ideas the user is providing while brainstorming and one where the robot will provide several ideas that are irrelevant to the topic of the brainstorming session.
The first step is to determine the starting location of the start marker in the diagram. As previously mentioned, the expectations factor is relative, so the start marker would be set in the middle of the
x-axis. Since the robot engages in a collaborative brainstorming session with the user, we might set the start somewhat above neutral on the
y-axis (belonging). Next, the changes in expectations and belonging need to be determined given the robot’s behaviors in the two conditions. First, let us consider the robot that provides irrelevant ideas. The group membership would likely not change, as the user and the robot are still collaborating on the same task and the robot is neither increasing nor decreasing the perception of belonging. However, it will likely fail to meet the user’s expectations, as the ideas it provides are unrelated to the task. Therefore, we would likely place the end-point of the irrelevant ideas condition at the same level of belonging but to the left of the starting point. Next, we evaluate the robot that supports the user’s ideas. As the robot is positive about the user’s ideas, their feelings of belonging would likely increase. Additionally, for a robot to support the user’s ideas, it would need to understand what the user is saying and be able to verbally express its support. This usually exceeds what many people expect robots to be able to do. Therefore, the end-point of the robot that supports ideas would likely be placed to the right (exceeds expectations) and higher up (increased belonging) than the starting point.
Now that we have determined the endpoints of each of the conditions compared to the start point, we can deduce which of the conditions would likely have higher social influence. As can be seen in Figure
2, the “irrelevant ideas” condition is displayed in an aqua color (with a thin outline), which represents a lower degree of social influence. On the other hand, the “supports ideas” condition is displayed in dark blue (with a thicker outline), which represents a moderate amount of social influence. Therefore, the robot that “supports ideas” is likely to have greater social influence (e.g., it is more likely that the person will comply with its requests in a later interaction) than the robot that provides “irrelevant ideas”.
A similar step-by-step process can be used for any study one is planning to conduct. First, determine the starting location; then determine how the robot’s behavior (or each experimental condition) would exceed or fail to meet expectations and how the user’s feeling of belonging would change. The end positions of the experimental conditions can then be compared to deduce which one would likely have a higher degree of social influence on the user. We use the process described in this section to generate the graphs in Section
4.
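As a minimal numeric companion to the step-by-step procedure above, the sketch below encodes the two hypothetical brainstorming conditions and compares their predicted magnitudes. The specific coordinates are illustrative placements of our own, not values prescribed by RoSI, and only their relative ordering is meaningful.

```python
ALPHA = 1.0  # assumed scaling factor from Section 2.4

def social_influence(x, y):
    # x = violation of expectations, y = belonging (both unitless)
    return x + ALPHA * y ** 2

# Start point: expectations are relative (x = 0); the collaborative setting
# suggests a slightly-above-neutral initial sense of belonging.
start = (0.0, 0.3)

# Illustrative end points for the two conditions discussed above.
end_points = {
    "irrelevant ideas": (-0.6, 0.3),  # fails expectations, belonging unchanged
    "supports ideas": (0.5, 0.7),     # exceeds expectations, belonging increases
}

baseline = social_influence(*start)
for condition, point in end_points.items():
    print(f"{condition}: predicted influence = {social_influence(*point):.2f} "
          f"(start = {baseline:.2f})")
# The "supports ideas" condition ends at a higher predicted magnitude than the
# "irrelevant ideas" condition, matching the comparison in the use-case diagram.
```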
5 Discussion
We presented RoSI, a theoretical model for understanding and predicting robots’ influence on humans’ behavior and emotions in social contexts. This work extends beyond prior work, which predicts how people perceive robots [
26,
43,
78,
105] or assumes, like the Media Equation [
74], that robots’ impact would follow the pattern of human social influence. Instead, the model predicts a person’s response to a robot’s behavior based on robot-specific interaction factors. The model’s factors, the violation of a person’s expectations of the robot (
violation of expectations) and the social belonging a person feels toward the robot (
belonging), capture the varied impact robots may exert on humans and are also fundamental to interpersonal interactions in general. When applied to HRIs, these factors predict social influence that, in some cases, matches human social influence and, in others, is unique to interactions with robots. Using these two primary factors, the model provides a basis for predicting a wide range of social effects in HRIs.
As a first step toward validating the theory, we demonstrated how the model’s prediction corresponds with the results of five previously published user studies that display a variety of manipulations in both
violation of expectations and
belonging. We also suggest possible interpretations for unpredicted findings in some of those studies that cannot be explained by assuming that people’s responses to robots are similar to their responses to people (i.e., the Media Equation). While these analyses of previous studies do not directly validate the model due to their post-hoc nature, they provide initial support for the relation between the independent factors and social influence. We encourage future work to further validate this model by manipulating both
violation of expectations and
belonging in empirical studies and mapping the resulting social influence to the model’s prediction depicted in Figure
1.
One limitation of this model that we want to highlight is its lack of a valence prediction for the robot’s social influence on a person. While the model estimates the magnitude of the social influence, it does not predict the influence type or whether it is positive or negative. To predict valence, a social influence model would need to involve numerous factors that account for a variety of interaction contexts. Instead, we chose to design a simple model that includes only two fundamental factors that predict the magnitude of a robot’s social influence on a person in a wide range of interactions. The simplicity of the model also leads to a non-bijective prediction of social influence. Since the model predicts increasing influence for both high and low belonging, HRIs at opposite extremes of belonging may result in a similar prediction by the model. It is possible that future elaborations of the model and the addition of independent factors may resolve this concern and lead to bijective predictions. Another limitation concerns the single perspective assumed by our model. As suggested by Clark and Fisher (2023), people can take different perspectives when interacting with a robot, ranging from focusing on the robot’s physical aspects to perceiving the robot as a character with various attributes and features. Taking different perspectives can strongly impact expectations of the robot and the sense of belonging. To keep our model simple, one of our assumptions is that people take a single perspective while interacting with the robot and do not switch between perspectives, which may change their expectations and sense of belonging. Future work should further elaborate the model to address this possibility.
Our model focuses on what we consider to be the two most important and critical factors, at times valuing simplicity over complete accounting of all possible factors. There are certainly other independent factors that result in social influence [
23,
55]. For example, the model may predict different social influences for diverse populations and group identification. Different cultures may mediate participants’ sense of belonging and diverse experience with robots may mediate the impact of violating expectations. Other examples include group size, group cohesion, the social environment, the number of robots in the interaction, the task, and the importance of the interaction to the participant. Mediating factors may also change the pattern of influence within each factor. For example, under specific circumstances, negativity bias may lead to a greater social influence of low belonging (e.g., exclusion) compared to high belonging (e.g., inclusion). Specific circumstances can also lead to different relations between our two main factors, shaping the interaction between them and its impact on the robot’s social influence. While this may be the case for any theory aiming to predict a meaningful effect, we suggest that an interesting avenue of future work is extending RoSI to consider these other factors while keeping the model interpretable.
Despite the limitations mentioned above, we believe that RoSI provides a strong alternative to the oversimplified assumption that social interactions with robots are similar to social interactions between humans. The model identifies two fundamental factors that predict a robot’s unique influence on humans in social interactions: violation of expectations and belonging. By mapping HRIs onto these two factors, our model demonstrates how they can account for the magnitude of social influence in various social contexts of HRIs.